
NIST Special Publication 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

June 2, 2022

🔬 Research Summary by Christina Catenacci, BA, LLB, LLM, PhD, who works at the intersection of electronic surveillance technologies, privacy, cybersecurity, and data ethics.

[Original paper by National Institute of Standards and Technology (NIST) contributors: Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, Patrick Hall]


Overview: While Artificial Intelligence (AI) offers many benefits, AI systems also carry biases that can lead to harmful impacts, regardless of intent. Why does this matter? Harmful outcomes create challenges for cultivating trust in AI. Current attempts to address the harmful effects of AI bias remain focused on computational factors, but systemic, human, and societal factors, though often overlooked, are also significant sources of AI bias. We need to take all of these forms of bias into account when building trust in AI.


Introduction

How can we take all forms of AI bias into account? We can expand our perspective and take a socio-technical approach when examining and addressing AI bias. By socio-technical, I mean an approach that considers how humans interact with technology within the broader societal context. In fact, the authors point out that, while there are many practices that aim to produce AI responsibly, guidance from a broader socio-technical perspective is needed in order to manage the risks of AI bias, operationalize values, and create new norms around how AI is built and deployed.

In this report, the authors aim to provide a first step toward developing detailed socio-technical guidance for identifying and managing AI bias. The goal of this special publication is to describe the challenges of bias in AI, identify and describe the three categories of bias in AI (computational, systemic, and human), and discuss three broad challenges for mitigating bias. Ultimately, the authors provide preliminary guidance and indicate that NIST intends to continue this work and develop further guidance and standards in the future.

Key Insights

Characterizing AI bias

The authors identify the following categories of AI bias:

  • Statistical and computational. These biases stem from errors that result when the sample is not representative of the population. They arise from systematic error and can occur in the absence of prejudice, partiality, or discriminatory intent. In AI systems, these biases are present in the datasets and algorithmic processes used in the development of AI applications, and often arise when algorithms are trained on one type of data and cannot extrapolate beyond it (a toy sketch of this failure mode follows this list)
  • Systemic. These biases result from the procedures and practices of particular institutions that operate in ways that result in certain social groups being advantaged or favoured and others being disadvantaged or devalued. This may not be the result of any conscious prejudice or discrimination, but rather of the majority following existing rules or norms. Systemic bias occurs, for example, when infrastructures for daily living are not developed using universal design principles, thereby limiting or hindering accessibility for persons belonging to certain groups
  • Human. These biases reflect systematic errors in human thought that stem from a limited set of heuristic principles (mental shortcuts) used to simplify decision-making. These biases are often implicit and tend to relate to how an individual or group perceives information when making decisions or filling in missing or unknown information. They exist across the AI lifecycle, and a wide variety of cognitive and perceptual biases appear in all domains.
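To make the statistical and computational category concrete, here is a minimal, hypothetical sketch (all groups, numbers, and data are invented for illustration and are not drawn from the NIST report) of a model fit on a sample dominated by one group and then evaluated per group, showing how non-representativeness alone, with no discriminatory intent, produces systematically worse results for the underrepresented group:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical subpopulations whose feature-to-label relationship
# differs: each group's true decision boundary sits at its own mean.
def make_group(n, shift):
    x = rng.normal(loc=shift, scale=1.0, size=n)
    y = (x > shift).astype(int)
    return x, y

x_a, y_a = make_group(950, shift=0.0)  # well-represented group
x_b, y_b = make_group(50, shift=2.0)   # underrepresented group

# "Train" on the pooled, skewed sample: choose the single threshold that
# maximizes accuracy on that sample (dominated by group A).
x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])
candidates = np.sort(x_all)
accuracies = [((x_all > t).astype(int) == y_all).mean() for t in candidates]
threshold = candidates[int(np.argmax(accuracies))]

# Evaluating per group exposes the bias: the learned threshold tracks the
# majority group and extrapolates poorly to the minority group.
for name, x, y in [("group A", x_a, y_a), ("group B", x_b, y_b)]:
    accuracy = ((x > threshold).astype(int) == y).mean()
    print(f"{name}: accuracy = {accuracy:.2f}")
```

Running this typically prints near-perfect accuracy for group A and roughly chance-level accuracy for group B, illustrating how a systematic sampling error, rather than prejudice, can drive disparate performance.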

It is also worth noting that page 8 of the report provides a useful diagram of the three main types of bias in AI, with plenty of examples.

How might the above biases be harmful? Applications that use AI are often deployed across sectors and contexts for decision support and decision-making, frequently without involving humans. And it is not clear whether they are capable of learning and operating in accordance with our societal values. Moreover, according to the authors:

These biases can negatively impact individuals and society by amplifying and reinforcing discrimination at a speed and scale far beyond the traditional discriminatory practices that can result from implicit human or institutional biases such as racism, sexism, ageism or ableism.

This is why it is so important to adopt a socio-technical systems approach when examining AI. In this way, we can evaluate dynamic systems of bias, understand how they interact, and better understand the conditions under which these biases are attenuated or amplified. This approach involves reframing AI-related factors and taking into account the needs of individuals, groups, and society.

Challenges and guidance

The authors set out several challenges and propose guidance relating to these challenges:

  • Dataset factors such as availability, representativeness, and baked-in societal biases
  • Issues of measurement and metrics to support testing and evaluation, validation, and verification (TEVV); a toy metric computation appears after this list
  • Human factors, including societal and historic biases within individuals and organizations, as well as challenges related to implementing human-in-the-loop
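As a concrete illustration of the TEVV measurement challenge, the sketch below computes one common (and contested) fairness metric: the gap in positive-decision rates between two groups, checked against the informal "four-fifths" rule of thumb. The data, group labels, and thresholds are invented for this example; the NIST report does not prescribe any single metric:

```python
import numpy as np

# Invented toy data: binary model decisions and the group each subject
# belongs to. In a real TEVV process these would come from an audit set.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = y_pred[group == "A"].mean()  # selection rate for group A
rate_b = y_pred[group == "B"].mean()  # selection rate for group B
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}")

# Demographic parity difference: absolute gap in selection rates.
print(f"parity gap = {abs(rate_a - rate_b):.2f}")

# A common rule of thumb (not a legal or NIST standard): flag for human
# review when the ratio of selection rates falls below 0.8.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print("flag for review" if ratio < 0.8 else "within four-fifths threshold")
```

Part of the challenge the authors describe is precisely that many such metrics exist, they can conflict with one another, and no single number captures whether a system is fair in context.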

Following this, the authors provide guidance concerning governance. Why is this important? Governance processes impact nearly every aspect of managing AI bias. Looking at things holistically, the authors delve into various elements, such as organizational measures and culture:

  • Monitoring. It is important to determine whether a system's performance differs from what was expected once it is deployed (a minimal monitoring sketch appears after this list)
  • Recourse channels. It is necessary to have feedback channels to allow system end users to flag incorrect or potentially harmful results, and seek recourse for errors or harms
  • Policies and procedures. It is important to ensure that written policies and procedures address key roles, responsibilities, and processes at all stages of the AI model lifecycle in order to manage and detect potential issues with overall AI system performance
  • Documentation. Clear documentation practices can help to systematically implement policies and procedures, standardizing how an organization's bias management processes are implemented and recorded at each stage. It also helps to ensure accountability
  • Accountability. It is critical that individuals or teams bear responsibility for risks and associated harms to ensure that there is a clear assessment of the role of the AI system itself, and provide a direct incentive for the mitigation of risks and harms
  • Culture and practice. AI governance must be embedded throughout the culture of an organization in order to be effective
  • Risk mitigation, risk tiering, and incentive structures. There needs to be an acknowledgement that risk mitigation, not risk avoidance, is often the most effective factor in managing such risks

  • Information sharing. Sharing cyber threat information helps organizations improve their cybersecurity postures, and those of other organizations
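To illustrate the monitoring item above, here is a minimal, hypothetical sketch of a post-deployment check: it compares observed accuracy on recent data against the level measured during validation and raises an alert when the drop exceeds a tolerance. All names, numbers, and thresholds are invented and are not taken from the report:

```python
import numpy as np

VALIDATION_ACCURACY = 0.91  # hypothetical accuracy measured pre-deployment
TOLERANCE = 0.05            # allowed degradation before alerting

def check_deployment(y_true, y_pred):
    """Compare live accuracy against the pre-deployment expectation."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = float((y_true == y_pred).mean())
    if accuracy < VALIDATION_ACCURACY - TOLERANCE:
        # In practice this would notify a responsible team (accountability)
        # and be written to an audit trail (documentation).
        print(f"ALERT: accuracy {accuracy:.2f} below expected "
              f"{VALIDATION_ACCURACY - TOLERANCE:.2f}")
    else:
        print(f"OK: accuracy {accuracy:.2f}")

# Toy weekly check: 4 of 6 recent predictions correct (~0.67) -> alert.
check_deployment([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```

A sketch like this only covers measurable performance drift; the recourse channels, documentation, and accountability items in the list above are organizational practices that code alone cannot supply.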

Between the lines

This report provides a broad overview of the complex challenge of addressing and managing the risks associated with AI bias. In order to promote the trustworthiness of AI, the authors approach AI as a socio-technical system, acknowledging that AI systems and their associated biases extend beyond the computational level. In my view, the document rightly acknowledges that some factors are contextual in nature and must be examined from different angles using a fresh approach.

I think that this report constitutes a useful first step in the process. In terms of next steps, it would be highly beneficial for NIST to develop further socio-technical guidance in collaboration with interdisciplinary researchers. The authors also include a helpful glossary that explains the main terms used throughout the report, a valuable tool for future discussions.

