Hidden Dangers of Formal Prompts (AI Secrets)

Discover the Surprising Hidden Dangers of Formal Prompts and Uncover the Secrets of AI Technology in this Must-Read Blog Post!

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Implement hidden bias detection | Hidden biases can be present in formal prompts, which can lead to algorithmic discrimination. | Failure to detect hidden biases can result in unfair outcomes for certain groups. |
| 2 | Design ethical prompts | Ethical prompt design can help prevent algorithmic discrimination and promote machine learning fairness. | Failure to design ethical prompts can result in biased outcomes and harm to certain groups. |
| 3 | Ensure transparency requirements are met | Transparency in AI systems can help build trust and accountability. | Lack of transparency can lead to suspicion and mistrust of AI systems. |
| 4 | Provide human oversight assurance | Human oversight can help ensure that AI systems are making fair and ethical decisions. | Lack of human oversight can result in biased outcomes and harm to certain groups. |
| 5 | Mitigate unintended consequences | Unintended consequences can arise from the use of AI systems, and it is important to mitigate these risks. | Failure to mitigate unintended consequences can result in harm to individuals or society as a whole. |
| 6 | Implement responsible AI | Responsible AI implementation involves considering the potential impact of AI systems on individuals and society as a whole. | Failure to implement responsible AI can result in harm to individuals or society as a whole. |
| 7 | Protect data privacy | Protecting data privacy is crucial in AI systems to prevent harm to individuals and maintain trust in the system. | Failure to protect data privacy can result in harm to individuals and loss of trust in the system. |

The hidden dangers of formal prompts in AI systems can lead to algorithmic discrimination and biased outcomes. To prevent these risks, it is important to implement hidden bias detection, design ethical prompts, ensure transparency requirements are met, provide human oversight assurance, mitigate unintended consequences, implement responsible AI, and protect data privacy. These actions can help promote machine learning fairness and prevent harm to individuals or society as a whole.

Contents

  1. How Hidden Bias Detection Can Prevent Discrimination in AI Prompts
  2. Algorithmic Discrimination Prevention: Why it’s Essential for Ethical AI Prompt Design
  3. The Importance of Ethical Prompt Design in Responsible AI Implementation
  4. Ensuring Machine Learning Fairness in Formal Prompts: A Must-Have for Responsible AI
  5. Data Privacy Protection and Transparency Requirements Met: Key Considerations for Safe AI Prompts
  6. Human Oversight Assurance: How to Ensure the Safety and Ethics of Formal Prompts
  7. Unintended Consequence Mitigation in AI Prompt Design: Best Practices for Responsible Implementation
  8. The Role of Responsible AI Implementation in Avoiding Hidden Dangers of Formal Prompts
  9. From Bias Detection to Ethical Design – Steps Towards a More Responsible Approach to Formal Prompts
  10. Common Mistakes And Misconceptions

How Hidden Bias Detection Can Prevent Discrimination in AI Prompts

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Conduct AI prompt analysis | Prompt analysis examines the language and structure of prompts used in AI systems to identify potential biases. | Subtle biases embedded in the prompts may be overlooked. |
| 2 | Use machine learning algorithms to detect hidden biases | Machine learning algorithms can be trained to detect hidden biases in AI prompts by analyzing large datasets. | Relying solely on algorithms without human oversight can lead to unintended consequences. |
| 3 | Consider ethical considerations in AI | Ethical considerations include ensuring that AI systems are designed to promote fairness, transparency, and accountability. | Overlooking ethical considerations can lead to discrimination and other negative outcomes. |
| 4 | Use data-driven decision making | Data-driven decision making uses data to inform decisions about AI prompts and other aspects of AI development. | Incomplete or biased data can lead to inaccurate or unfair decisions. |
| 5 | Apply algorithmic fairness standards | Algorithmic fairness standards can be used to ensure that AI systems promote fairness and prevent discrimination. | Overlooking these standards can lead to biased AI systems. |
| 6 | Mitigate bias through diversity and inclusion initiatives | Bias mitigation strategies can include initiatives to promote diversity and inclusion in AI development teams. | Overlooking bias mitigation strategies can lead to biased AI systems. |
| 7 | Incorporate human oversight in AI development | Human oversight helps ensure that AI systems promote fairness and prevent discrimination. | Relying solely on AI systems without human oversight can lead to unintended consequences. |
| 8 | Use fairness metrics for AI models | Fairness metrics can be used to evaluate the fairness of AI models and identify potential biases. | Overlooking fairness metrics can lead to biased AI systems. |
| 9 | Implement explainable artificial intelligence (XAI) | XAI increases transparency and accountability by providing explanations for decisions made by AI models. | Overlooking XAI can lead to opaque and potentially biased AI systems. |
| 10 | Use training data selection criteria | Selection criteria help ensure that AI models are trained on diverse and representative datasets. | Biased or incomplete training data can lead to biased AI models. |
| 11 | Establish ethics committees for AI governance | Ethics committees provide oversight and guidance on ethical considerations in AI development. | Overlooking ethical considerations can lead to biased and potentially harmful AI systems. |
| 12 | Conduct impact assessments of AI systems | Impact assessments identify potential negative consequences of AI systems and inform development decisions. | Overlooked negative consequences can lead to unintended harm. |
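As an illustration of step 8, here is a minimal sketch of one common fairness metric, the demographic parity gap: the difference in positive-outcome rates between the best- and worst-treated groups. The function name and toy data are illustrative, not taken from any particular library:

```python
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rate between any two groups.

    outcomes: parallel list of 0/1 model decisions; groups: group labels.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy data: group "a" is approved 75% of the time, group "b" only 25%.
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                             ["a", "a", "a", "a", "b", "b", "b", "b"])
print(gap)  # 0.5
```

A gap of zero means all groups receive positive outcomes at the same rate; what counts as an acceptable gap is a policy decision, not a technical one.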

Algorithmic Discrimination Prevention: Why it’s Essential for Ethical AI Prompt Design

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Identify protected attributes | Protected attributes identification | Failure to identify all relevant protected attributes can lead to unintentional discrimination |
| 2 | Preprocess data | Data preprocessing methods | Biases in training data can lead to biased AI models |
| 3 | Evaluate fairness metrics | Fairness metrics evaluation | Traditional metrics may not capture all forms of discrimination |
| 4 | Detect bias | Bias detection techniques | Biases can be hidden in complex AI models |
| 5 | Ensure model interpretability | Model interpretability tools | Lack of interpretability can make it difficult to identify and address bias |
| 6 | Use explainable AI frameworks | Explainable AI frameworks | Explainable AI can help identify and address bias |
| 7 | Incorporate human-in-the-loop approach | Human-in-the-loop approach | Human oversight can help catch and correct bias |
| 8 | Consider diversity and inclusion | Diversity and inclusion considerations | Failure to consider diverse perspectives can lead to unintentional discrimination |
| 9 | Analyze intersectionality | Intersectionality analysis | Intersectionality can reveal unique forms of discrimination |
| 10 | Prevent adversarial attacks | Adversarial attacks prevention | Adversarial attacks can exploit and amplify biases in AI models |
| 11 | Use privacy-preserving techniques | Privacy-preserving techniques | Privacy violations can disproportionately harm marginalized groups |
| 12 | Augment training data | Training data augmentation | Augmenting data can help address biases in training data |
| 13 | Implement fairness-aware machine learning | Fairness-aware machine learning | Traditional machine learning may not prioritize fairness |
| 14 | Adhere to ethics code of conduct | Ethics code of conduct | Failure to adhere to ethical standards can harm individuals and society as a whole |

Algorithmic discrimination prevention is essential for ethical AI prompt design. First, identify all relevant protected attributes, such as race, gender, and age; missing one can lead to unintentional discrimination. Preprocess the data to address biases in the training set, and evaluate fairness metrics while keeping in mind that traditional metrics may not capture every form of discrimination. Because bias can hide in complex models, apply dedicated bias detection techniques and ensure model interpretability; explainable AI frameworks help here, and a human-in-the-loop approach catches errors that automated tooling misses. Consider diversity and inclusion, and analyze intersectionality, which can reveal unique forms of discrimination. Guard against adversarial attacks, which can exploit and amplify biases in AI models, and use privacy-preserving techniques, since privacy violations disproportionately harm marginalized groups. Augment training data to address gaps, adopt fairness-aware machine learning, since traditional approaches may not prioritize fairness, and, finally, adhere to an ethics code of conduct; failure to do so can harm individuals and society as a whole.
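One concrete fairness check from the steps above can be sketched with the disparate impact ratio: the selection rate of an unprivileged group divided by that of a privileged one. The 0.8 "four-fifths" threshold is a common rule of thumb, not a legal determination, and the group labels here are purely illustrative:

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of the unprivileged group's selection rate to the privileged
    group's. Values below 0.8 trip the common 'four-fifths' rule of thumb."""
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Toy data: group "m" is selected 75% of the time, group "f" only 25%.
ratio = disparate_impact([1, 1, 0, 1, 1, 0, 0, 0],
                         ["m", "m", "m", "m", "f", "f", "f", "f"],
                         privileged="m")
print(round(ratio, 2))  # 0.33, well below the 0.8 threshold
```

A low ratio is a signal to investigate, not proof of discrimination; the appropriate response depends on context and domain.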

The Importance of Ethical Prompt Design in Responsible AI Implementation

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Incorporate ethical AI development practices into prompt design. | Ethical AI development practices ensure that AI prompts are designed with fairness, inclusivity, transparency, and privacy in mind. | Failure to incorporate ethical AI development practices can result in biased, discriminatory, and unfair AI prompts. |
| 2 | Create bias-free prompts by using unbiased data selection methods. | Bias-free prompt creation involves using unbiased data selection methods to ensure that the data used to train AI models is representative of the population it is meant to serve. | Failure to use unbiased data selection methods can result in biased AI prompts that perpetuate existing societal inequalities. |
| 3 | Ensure fairness in AI prompts by designing prompts that do not discriminate against any group. | Fairness in AI prompts involves designing prompts that do not discriminate against any group based on their race, gender, age, or any other characteristic. | Failure to ensure fairness in AI prompts can result in discriminatory AI prompts that harm certain groups. |
| 4 | Design inclusive prompts that consider the needs of all users. | Inclusive prompt design involves considering the needs of all users, including those with disabilities or who speak different languages. | Failure to design inclusive prompts can result in AI prompts that exclude certain users and perpetuate existing inequalities. |
| 5 | Develop human-centered prompts that prioritize the user experience. | Human-centered prompt development involves prioritizing the user experience and designing prompts that are easy to understand and use. | Failure to develop human-centered prompts can result in AI prompts that are confusing or difficult to use, leading to user frustration and distrust. |
| 6 | Ensure transparent prompt construction by providing explanations for AI decisions. | Transparent prompt construction involves providing explanations for AI decisions, making it easier for users to understand how the AI arrived at its conclusion. | Failure to ensure transparent prompt construction can result in AI prompts that are difficult to understand, leading to user distrust. |
| 7 | Create privacy-preserving prompts that protect user data. | Privacy-preserving prompts involve designing prompts that protect user data and ensure that it is not misused or shared without consent. | Failure to create privacy-preserving prompts can result in user data being misused or shared without consent, leading to privacy violations and loss of trust. |
| 8 | Develop trustworthy AI prompts that users can rely on. | Trustworthy AI prompts involve designing prompts that users can rely on to provide accurate and unbiased information. | Failure to develop trustworthy AI prompts can result in users losing trust in the AI system, leading to decreased usage and adoption. |
| 9 | Use explainable AI prompts that provide clear explanations for AI decisions. | Explainable AI prompts involve providing clear explanations for AI decisions, making it easier for users to understand how the AI arrived at its conclusion. | Failure to use explainable AI prompts can result in AI prompts that are difficult to understand, leading to user distrust. |
| 10 | Consider ethical considerations in prompting for social good. | This involves considering the potential impact of AI prompts on society and ensuring that they are designed to promote social good. | Failure to consider these issues can result in AI prompts that harm society or perpetuate existing inequalities. |
| 11 | Ensure AI accountability through ethical prompting. | This involves designing prompts that are accountable for their decisions and can be held responsible for any harm caused. | Failure to ensure AI accountability can result in AI prompts that are not held responsible for their decisions, leading to harm and loss of trust. |
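Part of inclusive prompt design (step 4) can be automated with a simple lint pass over prompt text. The sketch below is illustrative only: the term list is a tiny stand-in, and a real deployment would use a vetted, maintained vocabulary rather than this hard-coded dictionary:

```python
# Illustrative mapping; a real deployment would use a vetted, maintained list.
GENDERED_TERMS = {"chairman": "chair", "mankind": "humanity", "manpower": "staffing"}

def lint_prompt(text):
    """Flag non-inclusive terms in a prompt and suggest neutral alternatives."""
    hits = []
    for word in text.lower().split():
        w = word.strip(".,;:!?")
        if w in GENDERED_TERMS:
            hits.append((w, GENDERED_TERMS[w]))
    return hits

print(lint_prompt("The chairman asked for more manpower."))
# [('chairman', 'chair'), ('manpower', 'staffing')]
```

A lint pass like this catches only surface-level wording; it cannot judge tone or context, so it complements rather than replaces human review.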

Ensuring Machine Learning Fairness in Formal Prompts: A Must-Have for Responsible AI

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Identify potential biases in the training data. | Training data quality is crucial for ensuring fairness in machine learning. | Incomplete or biased training data can lead to biased models. |
| 2 | Evaluate the model’s accuracy and fairness using appropriate metrics. | Evaluation metrics should include both accuracy and fairness measures. | Over-reliance on accuracy metrics can lead to biased models. |
| 3 | Ensure algorithmic transparency and model interpretability. | Understanding how the model makes decisions is essential for detecting and correcting biases. | Lack of transparency can make it difficult to identify and correct biases. |
| 4 | Implement diversity and inclusion measures in the training data. | Fair representation of diverse groups in the training data can help prevent discrimination. | Insufficient representation of diverse groups can lead to biased models. |
| 5 | Incorporate ethical considerations into the model development process. | Ethical considerations should be integrated into every stage of model development. | Ignoring ethical considerations can lead to biased models and negative consequences. |
| 6 | Ensure data privacy protection throughout the model development process. | Protecting individuals’ privacy is essential for responsible AI. | Failure to protect data privacy can lead to legal and ethical issues. |
| 7 | Implement human oversight and intervention in the model development process. | Human oversight can help detect and correct biases in the model. | Over-reliance on automated processes can lead to biased models. |
| 8 | Continuously monitor and update the model to ensure fairness. | Machine learning models should be regularly monitored and updated to ensure fairness. | Failure to monitor and update the model can lead to biased outcomes. |
| 9 | Use bias detection techniques to identify and correct biases in the model. | Bias detection techniques can help identify and correct biases in the model. | Failure to use bias detection techniques can lead to biased models. |
| 10 | Prioritize responsible AI practices. | Responsible AI practices should be prioritized to ensure fairness in machine learning. | Failure to prioritize responsible AI practices can lead to biased models and negative consequences. |
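Steps 2 and 8 above amount to comparing error rates across groups over time. One such comparison, sketched below, is the per-group false positive rate; the equalized-odds fairness criterion asks that these rates (and the true positive rates) match across groups. The toy data is illustrative:

```python
def false_positive_rates(y_true, y_pred, groups):
    """Per-group false positive rate: P(pred = 1 | true = 0, group)."""
    rates = {}
    for g in sorted(set(groups)):
        # Predictions on the true negatives belonging to this group.
        negatives = [p for t, p, gg in zip(y_true, y_pred, groups)
                     if gg == g and t == 0]
        rates[g] = sum(negatives) / len(negatives)
    return rates

# Toy data: each group contributes two true negatives.
rates = false_positive_rates(y_true=[0, 0, 1, 0, 0, 1],
                             y_pred=[1, 0, 1, 1, 1, 1],
                             groups=["a", "a", "a", "b", "b", "b"])
print(rates)  # {'a': 0.5, 'b': 1.0}
```

Here group "b" is wrongly flagged twice as often as group "a", the kind of gap that continuous monitoring (step 8) is meant to surface before it causes harm.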

Data Privacy Protection and Transparency Requirements Met: Key Considerations for Safe AI Prompts

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Implement privacy by design approach | Privacy by design involves integrating data privacy and security measures into the design and development of AI systems. | Failure to implement privacy by design can lead to data breaches and loss of user trust. |
| 2 | Practice data minimization | Data minimization involves collecting only the necessary data for the AI prompt and deleting it once it is no longer needed. | Collecting excessive data can lead to privacy violations and increase the risk of data breaches. |
| 3 | Obtain user consent | User consent protocols involve obtaining explicit consent from users before collecting and using their personal data. | Failure to obtain user consent can lead to legal and ethical issues. |
| 4 | Mitigate fairness and bias issues | Fairness and bias mitigation involves identifying and addressing any biases in the AI system to ensure fair and unbiased outcomes. | Failure to mitigate fairness and bias issues can lead to discrimination and harm to certain groups. |
| 5 | Ensure explainability and interpretability | Explainability and interpretability criteria involve making the AI system transparent and understandable to users. | Lack of explainability and interpretability can lead to distrust and suspicion of the AI system. |
| 6 | Establish algorithmic accountability standards | Algorithmic accountability standards involve establishing clear guidelines for the responsible use of AI systems. | Lack of algorithmic accountability can lead to misuse and abuse of AI systems. |
| 7 | Ensure trustworthiness of AI systems | Trustworthiness involves ensuring that the AI system is reliable, accurate, and secure. | Lack of trustworthiness can lead to loss of user trust and legal and ethical issues. |
| 8 | Implement data governance policies | Data governance policies involve establishing clear guidelines for the collection, use, and storage of personal data. | Lack of data governance policies can lead to privacy violations and legal and ethical issues. |
| 9 | Conduct risk assessment | Risk assessment frameworks involve identifying and assessing potential risks associated with the AI prompt and implementing measures to mitigate those risks. | Failure to conduct risk assessment can lead to unforeseen risks and harm to users. |
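Data minimization (step 2) and privacy by design (step 1) can be sketched in a few lines: keep only the fields the task needs and replace the raw user identifier with a salted hash. The field names and the salt-handling here are illustrative assumptions, and truncated hashes are a sketch of pseudonymization, not full anonymization:

```python
import hashlib

NEEDED_FIELDS = {"prompt_text", "timestamp"}  # hypothetical schema

def minimise(record, salt="rotate-me"):  # in practice, load the salt from secure config
    """Drop everything not needed for the task and pseudonymise the user id."""
    out = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    if "user_id" in record:
        digest = hashlib.sha256((salt + record["user_id"]).encode()).hexdigest()
        out["user_ref"] = digest[:12]  # stable reference without the raw id
    return out

rec = {"user_id": "alice", "email": "a@example.com",
       "prompt_text": "hello", "timestamp": "2024-01-01"}
print(minimise(rec))
```

Combined with a retention policy that deletes records once they are no longer needed, this keeps the stored footprint, and therefore the breach risk, as small as possible.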

Human Oversight Assurance: How to Ensure the Safety and Ethics of Formal Prompts

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Establish ethical guidelines compliance | Formal prompts must adhere to ethical guidelines to ensure safety and ethics | Failure to comply with ethical guidelines can lead to negative consequences for individuals and society |
| 2 | Implement human oversight | Human oversight is necessary to ensure algorithmic transparency and bias detection | Lack of human oversight can result in biased and unethical outcomes |
| 3 | Conduct risk assessment | Risk assessment is crucial to identify potential risks and mitigate them | Failure to conduct risk assessment can lead to unintended consequences and harm |
| 4 | Ensure accountability measures | Accountability measures must be in place to hold individuals and organizations responsible for their actions | Lack of accountability can lead to unethical behavior and harm |
| 5 | Protect data privacy | Data privacy protection is essential to maintain trust and prevent misuse of personal information | Failure to protect data privacy can result in breaches and harm to individuals |
| 6 | Evaluate fairness | Fairness evaluation is necessary to ensure that formal prompts do not discriminate against certain groups | Lack of fairness evaluation can result in biased outcomes |
| 7 | Ensure model interpretability | Model interpretability is crucial to understand how formal prompts make decisions | Lack of model interpretability can lead to mistrust and harm |
| 8 | Validate training data | Training data validation is necessary to ensure that formal prompts are trained on unbiased and representative data | Failure to validate training data can result in biased outcomes |
| 9 | Implement error correction protocols | Error correction protocols must be in place to correct mistakes and prevent harm | Lack of error correction protocols can result in unintended consequences and harm |
| 10 | Ensure regulatory compliance | Regulatory compliance is necessary to ensure that formal prompts adhere to legal and ethical standards | Failure to comply with regulations can result in legal and reputational consequences |
| 11 | Provide transparency reporting | Transparency reporting is necessary to provide information about how formal prompts work and their outcomes | Lack of transparency can lead to mistrust and harm |

In summary, ensuring the safety and ethics of formal prompts requires a comprehensive approach that includes ethical guidelines compliance, human oversight, risk assessment, accountability measures, data privacy protection, fairness evaluation, model interpretability, training data validation, error correction protocols, regulatory compliance, and transparency reporting. These steps are crucial to prevent harm and ensure that formal prompts are used ethically and responsibly.
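A common way to put the human oversight above into practice is confidence-based routing: the system acts autonomously only when the model is very sure, and hands everything else to a reviewer. This is a minimal sketch; the thresholds are illustrative and must be tuned per application:

```python
def route_decision(confidence, auto_approve=0.9, auto_reject=0.1):
    """Send low-confidence model outputs to a human reviewer.

    Thresholds are illustrative and should be tuned per application.
    """
    if confidence >= auto_approve:
        return "auto_approve"
    if confidence <= auto_reject:
        return "auto_reject"
    return "human_review"

print(route_decision(0.95))  # auto_approve
print(route_decision(0.50))  # human_review
print(route_decision(0.05))  # auto_reject
```

Even within the automated bands, periodic sampled human audits are advisable, since a confidently wrong model will never route its own mistakes for review.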

Unintended Consequence Mitigation in AI Prompt Design: Best Practices for Responsible Implementation

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Identify ethical considerations | Responsible implementation requires identifying ethical considerations that may arise from the use of AI systems. | Failure to identify ethical considerations can lead to unintended consequences that may harm individuals or groups. |
| 2 | Prevent algorithmic bias | Mitigation strategies should be put in place to prevent algorithmic bias, through careful selection of training data and fairness and equity principles. | Unchecked algorithmic bias can lead to discriminatory outcomes that harm individuals or groups. |
| 3 | Ensure human oversight | Human oversight is crucial to the responsible use of AI systems, supported by transparency in AI systems and robustness testing methods. | Lack of human oversight can lead to unintended consequences that may harm individuals or groups. |
| 4 | Apply risk assessment techniques | Risk assessment techniques should be used to identify potential risks associated with the use of AI systems. | Failure to conduct risk assessments can lead to unintended consequences that may harm individuals or groups. |
| 5 | Protect data privacy | Data privacy protection measures should be put in place to ensure that personal data is not misused or mishandled. | Failure to protect data privacy can lead to breaches of personal data that may harm individuals or groups. |
| 6 | Set model interpretability standards | Model interpretability standards should be put in place to ensure that AI systems are transparent and can be understood by humans. | Lack of model interpretability can lead to unintended consequences that may harm individuals or groups. |
| 7 | Implement error correction mechanisms | Error correction mechanisms should be put in place so that AI systems can correct errors and learn from mistakes. | Lack of error correction mechanisms can lead to unintended consequences that may harm individuals or groups. |
| 8 | Establish emergency shutdown protocols | Emergency shutdown protocols should be put in place so that AI systems can be shut down in an emergency. | Lack of emergency shutdown protocols can lead to unintended consequences that may harm individuals or groups. |
| 9 | Select training data carefully | Training data selection criteria should ensure that AI systems are trained on unbiased and representative data. | Careless training data selection can lead to unintended consequences that may harm individuals or groups. |
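The emergency shutdown protocol in step 8 can be sketched as a circuit breaker that trips when the recent error rate spikes and stays tripped until a human resets it. The window size and threshold below are illustrative assumptions:

```python
from collections import deque

class CircuitBreaker:
    """Trip an emergency shutdown when the recent error rate spikes.

    Window size and threshold are illustrative; tune them per system.
    """
    def __init__(self, window=100, max_error_rate=0.2, min_samples=10):
        self.results = deque(maxlen=window)   # rolling record of ok/not-ok calls
        self.max_error_rate = max_error_rate
        self.min_samples = min_samples        # avoid tripping on tiny samples
        self.tripped = False

    def record(self, ok):
        self.results.append(ok)
        errors = list(self.results).count(False)
        if (len(self.results) >= self.min_samples
                and errors / len(self.results) > self.max_error_rate):
            self.tripped = True  # stays tripped until a human resets it

    def allow(self):
        return not self.tripped

breaker = CircuitBreaker(window=10, max_error_rate=0.2)
for ok in [True] * 7 + [False] * 3:
    breaker.record(ok)
print(breaker.allow())  # False: 3 errors in 10 calls exceeds the 20% threshold
```

Requiring a human reset, rather than letting the breaker re-arm automatically, keeps a person in the loop for exactly the failures the automation could not anticipate.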

The Role of Responsible AI Implementation in Avoiding Hidden Dangers of Formal Prompts

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Incorporate algorithmic fairness into the AI system. | Algorithmic fairness ensures that the AI system does not discriminate against any particular group or individual. | Failure to incorporate algorithmic fairness can lead to biased decision-making and discrimination against certain groups or individuals. |
| 2 | Implement human oversight to monitor the AI system. | Human oversight ensures that the AI system is functioning as intended and can intervene if necessary. | Lack of human oversight can lead to unintended consequences and errors in decision-making. |
| 3 | Protect data privacy by implementing appropriate security measures. | Data privacy is essential to maintain the confidentiality and integrity of sensitive information. | Failure to protect data privacy can lead to data breaches and loss of trust from stakeholders. |
| 4 | Ensure machine learning interpretability to understand how the AI system makes decisions. | Machine learning interpretability allows for transparency and accountability in decision-making. | Lack of interpretability can lead to opaque decision-making and difficulty in identifying errors or biases. |
| 5 | Implement explainable AI (XAI) to provide clear explanations for the AI system’s decisions. | XAI allows for transparency and accountability in decision-making and can build trust with stakeholders. | Lack of XAI can lead to distrust and skepticism from stakeholders. |
| 6 | Establish accountability frameworks. | Accountability frameworks ensure that individuals or organizations are held responsible for the AI system’s actions. | Lack of accountability frameworks can lead to a lack of responsibility and difficulty in addressing errors or biases. |
| 7 | Use risk assessment methods to identify potential risks and mitigate them. | Risk assessment methods can identify potential risks and allow for proactive mitigation strategies. | Failure to use risk assessment methods can lead to unforeseen risks and negative consequences. |
| 8 | Utilize ethical decision-making models to guide decision-making processes. | Ethical decision-making models can ensure that decisions align with ethical principles and values. | Lack of ethical decision-making models can lead to decisions that are not aligned with ethical principles and values. |
| 9 | Meet transparency requirements to provide stakeholders with information about the AI system’s operations. | Transparency requirements can build trust with stakeholders and ensure accountability. | Failure to meet transparency requirements can lead to distrust and skepticism from stakeholders. |
| 10 | Adhere to trustworthiness standards. | Trustworthiness standards help ensure that the AI system is reliable and trustworthy. | Failure to adhere to trustworthiness standards can lead to a lack of trust from stakeholders. |
| 11 | Establish ethical governance structures. | Ethical governance structures ensure that ethical principles and values are integrated into decision-making processes. | Lack of ethical governance structures can lead to a lack of accountability and difficulty in addressing errors or biases. |
| 12 | Comply with applicable regulations. | Regulatory compliance avoids legal consequences. | Non-compliance can lead to legal consequences and loss of trust from stakeholders. |
| 13 | Incorporate social responsibility principles. | Social responsibility principles ensure that the AI system benefits society as a whole and not just a particular group or individual. | Failure to incorporate social responsibility principles can lead to negative consequences for society. |
| 14 | Follow cultural sensitivity guidelines. | Cultural sensitivity guidelines ensure that the AI system is sensitive to cultural differences and does not discriminate against any particular group or individual. | Failure to follow cultural sensitivity guidelines can lead to discrimination and negative consequences for certain groups or individuals. |

From Bias Detection to Ethical Design – Steps Towards a More Responsible Approach to Formal Prompts

| Step | Action | Novel Insight | Risk Factors |
|------|--------|---------------|--------------|
| 1 | Incorporate ethical design principles | Ethical design involves considering the impact of AI systems on individuals and society as a whole. This includes ensuring algorithmic fairness, protecting data privacy, implementing transparency measures, and using human-centered design. | Failure to consider ethical implications can lead to biased outcomes, discrimination, and negative societal impacts. |
| 2 | Use inclusive language and cultural sensitivity | Inclusive language and cultural sensitivity are important for ensuring that AI systems do not perpetuate harmful stereotypes or exclude certain groups. This involves using gender-neutral language, avoiding offensive terms, and considering cultural differences. | Failure to use inclusive language and cultural sensitivity can lead to discrimination and exclusion of certain groups. |
| 3 | Implement user consent policies | User consent policies involve obtaining explicit consent from users before collecting and using their data. This includes providing clear information about how data will be used and giving users the option to opt out. | Failure to obtain user consent can lead to violations of data privacy and trust issues with users. |
| 4 | Establish accountability frameworks | Accountability frameworks involve establishing clear lines of responsibility for AI systems and ensuring that there are consequences for unethical behavior. This includes creating ethics committees and implementing risk assessment techniques. | Failure to establish accountability frameworks can lead to lack of transparency and trust issues with users. |
| 5 | Monitor and update machine learning models | Machine learning models should be regularly monitored and updated to ensure that they are not perpetuating biases or producing harmful outcomes. This involves selecting appropriate training data and testing for bias. | Failure to monitor and update machine learning models can lead to perpetuation of biases and negative societal impacts. |
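Selecting appropriate training data (step 5) starts with checking how well each group is represented. The sketch below flags under-represented groups; the 10% floor and the language labels are illustrative assumptions, not a standard:

```python
from collections import Counter

def underrepresented_groups(groups, floor=0.10):
    """Flag groups whose share of the training data falls below `floor`.

    The 10% floor is an illustrative threshold, not a standard.
    """
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < floor}

# Toy corpus: 90% English, with Spanish and French barely represented.
sample = ["en"] * 18 + ["es"] * 1 + ["fr"] * 1
print(underrepresented_groups(sample))  # {'es': 0.05, 'fr': 0.05}
```

Flagged groups are candidates for targeted data collection or augmentation before the next training run; what proportion counts as "enough" depends on the deployment population the model is meant to serve.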

Common Mistakes And Misconceptions

| Mistake/Misconception | Correct Viewpoint |
|-----------------------|-------------------|
| AI is inherently biased and cannot be trusted to make unbiased decisions. | While it is true that AI can inherit biases from its training data, this does not mean that all AI systems are inherently biased. It is important to carefully design and test AI systems to ensure they are making fair and unbiased decisions. Additionally, ongoing monitoring and updating of the system can help mitigate any potential biases that may arise over time. |
| Formal prompts always lead to better decision-making than human intuition alone. | While formal prompts can provide structure and consistency in decision-making processes, they should not be relied on exclusively or blindly followed without critical thinking. Human intuition and expertise should also be taken into account when making important decisions, as these factors can provide valuable insights that may not be captured by a formal prompt alone. |
| All hidden dangers associated with formal prompts stem from flaws in the underlying AI technology itself. | While flaws in the underlying technology certainly pose a risk, there are other hidden dangers as well, such as over-reliance on the system or failure to consider alternative perspectives, which have more to do with how humans interact with the technology than with inherent flaws within it. Organizations using formal prompts should therefore implement appropriate governance structures and training programs for the employees who work with them regularly. |
| The use of formal prompts eliminates subjectivity entirely from decision-making processes. | Formal prompts reduce subjectivity by providing objective criteria for decisions, but some subjectivity remains: someone designed those criteria based on their own understanding of what counts as a good or bad outcome or desirable or undesirable behavior, which brings back the bias issues discussed earlier. |