
Research summary: Designing for Human Rights in AI

August 26, 2020

Summary contributed by Jana Thompson — AI Designer and board member at Feminist.AI

*Authors of original paper & link at the bottom


Mini-summary: Given the pervasive use of AI in so many spheres of life – healthcare, finance, education, criminal justice – the authors of this paper argue for a design-first approach to potential AI products and services, rather than a narrow focus on algorithms, in order to address issues that can impact people’s basic human rights. Using the well-established frameworks and methodologies of Design for Values, Value Sensitive Design, and Participatory Design, along with the EU Charter of Fundamental Rights as a basis for which human rights to address, the authors propose a process to bridge the gap between the purely technical work of AI and its social impact.

Full summary:

Many frameworks and papers for developing ethical AI focus on documentation, process, and algorithms rather than integrating existing design practices. Designers, by contrast, have long considered ethical implications and social impact in their work, developing methodologies such as Value Sensitive Design, Values in Design, and Participatory Design. By shifting the conversation to, and centering, the user experience, those creating AI in academic and industry spaces can move their focus from the merely technical to what the authors of the paper term the socio-technical.

Value specification is the translation of abstract values into design requirements. These can be mapped with a values hierarchy, as shown below.

Figure 1. Values hierarchy (a) visually maps the context-dependent specification of values into norms and socio-technical design requirements. For instance, the value privacy (b) may in a certain context be interpreted by stakeholders as the ability of individuals to provide informed consent to the processing of their personal data, its confidentiality, and the ability to erase personal data from the system. In turn, each of these properties is linked to a design requirement to be implemented.
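As a loose illustration of this structure (not something provided in the paper), the privacy example from Figure 1 could be represented as a small tree in code: an abstract value decomposed into context-dependent norms, each linked to implementable design requirements. The class names, field names, and requirement texts below are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Norm:
    """A context-dependent interpretation of a value, e.g. 'informed consent'."""
    description: str
    design_requirements: List[str] = field(default_factory=list)

@dataclass
class Value:
    """An abstract value at the top of the hierarchy, e.g. 'privacy'."""
    name: str
    norms: List[Norm] = field(default_factory=list)

# The privacy example from Figure 1: one value, three norms, each linked to a
# concrete design requirement (the requirement texts here are invented).
privacy = Value(
    name="privacy",
    norms=[
        Norm("informed consent to the processing of personal data",
             ["explicit opt-in dialogue before any data collection"]),
        Norm("confidentiality of personal data",
             ["encrypt personal data at rest and in transit"]),
        Norm("ability to erase personal data from the system",
             ["user-facing deletion request that purges stored records"]),
    ],
)

# Walk the hierarchy from the abstract value down to its design requirements.
for norm in privacy.norms:
    for requirement in norm.design_requirements:
        print(f"{privacy.name} -> {norm.description} -> {requirement}")
```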

The design process the authors propose is based on the methodology of Value Sensitive Design. This methodology integrates three types of investigations: 

  • conceptual investigations to identify the stakeholders and values implicated by the technological artifact in the considered context and define value specifications
  • empirical investigations to explore stakeholders’ needs, views, and experiences in relation to the technology and the values it implies
  • technical investigations to implement and evaluate technical solutions that support the values and norms elicited from the conceptual and empirical investigations

These investigations are done by engaging with stakeholders and by grounding the work in the approaches of Participatory Design, where stakeholders are co-designers. Treating stakeholders in this way allows people whose voices are often unheard and undervalued to be integrated into the process, giving meaning and context to the work that is not obvious in a purely technical approach. For example, if someone proposes to use robots in care-taking situations, many of the people who would be the recipients of such care would prefer a human caregiver instead. The relationship between a caretaker and a patient is an ongoing one that is important to the patient’s long-term care, something a robot is not able to deliver.

One issue is what determines the definition of human rights as values to use in any values hierarchy. Here, the authors choose the four values – human dignity, freedom, equality, and solidarity – that form the basis of the EU Charter of Fundamental Rights. They acknowledge that this is only an example for this paper, and that other values apply in different situations. For more details on these rights, see the highlighted sections within the paper on the Dignity, Freedom, Equality, and Solidarity Titles of the EU Charter.

One key point the authors make concerns the problem of data determinism:

“A situation in which the data collected about individuals and the statistical inferences drawn from that data are used to judge people on the basis of what the algorithm says they might do/be/become, rather than what the person actually has done and intends to do.” [from Broeders et al., 2017]

Algorithms in policing and recidivism have been built based on historical data that claims to be an objective basis for making decisions. However, data from societies with historic and ongoing systemic bias against minorities belies the supposed objectivity of the quantitative data used in these algorithms.

With iterative investigations, third-order values, as seen in the diagram below, can be deduced. Third- and higher-order values, as well as their translation into system properties, are strongly context-dependent. These investigations must involve as broad a slice of the population that would be impacted by the technology as possible, so that the outcome is as equitable as possible. This process is analogous to the principles behind agile software development, and this key insight can build a bridge between software developers and designers.

Figure 2. Human rights as top-level requirements in the design roadmap. The dashed lines depict, for illustration, relations between individual values. Further specification of values into context-dependent design requirements is achieved through stakeholder engagement, iterating through conceptual, empirical, and technical investigations.
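Figure 2 places the four top-level values at the head of the design roadmap, with lower-order values and requirements specified through iterative stakeholder engagement. Purely as an illustration (none of this appears in the paper), a team could keep a lightweight traceability check that flags requirements not linked to any top-level value, and top-level values not yet supported by any requirement. The requirement texts and value tags below are assumptions for the sketch.

```python
# Illustrative sketch: tracing design requirements back to the four top-level
# EU Charter values the authors use as an example. Hypothetical data only.
TOP_LEVEL_VALUES = {"human dignity", "freedom", "equality", "solidarity"}

# Each requirement is tagged with the top-level value(s) it is meant to support.
requirements = {
    "explicit opt-in before personal data is processed": {"freedom", "human dignity"},
    "audit model outputs for disparate error rates across groups": {"equality"},
    "provide a human point of contact for contesting decisions": {"human dignity"},
}

# Requirements that do not trace to any recognized top-level value.
untraced = {req for req, values in requirements.items() if not values & TOP_LEVEL_VALUES}
# Top-level values that no requirement currently supports.
uncovered = TOP_LEVEL_VALUES - set().union(*requirements.values())

print("Untraced requirements:", untraced or "none")
print("Values with no supporting requirement:", uncovered or "none")
```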

In some cases, the outcome of the research process can be that participants decide no technology is necessary as a solution at all. The solutionism trap, as defined by Morozov (2013) and expounded further by Selbst et al. (2019), is that those in charge of a project assume the form of the solution must always be an AI. Sometimes an AI can be harmful or simply unnecessary, and stakeholders must accept that as a possibility going forward.

The roadmap in the paper leaves many questions open, such as who the relevant stakeholders in a project are, who determines an appropriate stopping point, and whether there is a need for an external organization to certify the integrity of the design process and to ensure its transparency. Some of these questions can be examined by researching past uses of Design for Values in domains where AI is now being applied. By basing their methodology on existing design frameworks, the authors’ framework can advance due diligence consistent with the United Nations Guiding Principles (UNGPs) on Business and Human Rights. Furthermore, the authors believe that while the ethical development of AI systems is the responsibility of the company or organization building them, there is a strong role for government oversight. Finally, designing for human rights is not an impediment to innovation, but a necessary step for achieving interactions between humans and AI consistent with the moral and social values embodied by human rights.

Papers referenced: 

Broeders D, Schrijvers E and Hirsch Ballin E (2017) Big Data and Security Policies: Serving Security, Protecting Freedom. Available at: https://english.wrr.nl/binaries/wrr-eng/documents/policy-briefs/2017/01/31/big-data-and-security-policies-serving-security-protecting-freedom/WRR_P86_BigDataAndSecurityPolicies.pdf

Morozov E (2013) To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems That Don’t Exist. Penguin UK.
Selbst AD, Boyd D, Friedler SA, et al. (2019) Fairness and Abstraction in Sociotechnical Systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency – FAT* ’19, New York, NY, 2019, pp. 59-68. DOI: 10.1145/3287560.3287598


Original paper by Evgeni Aizenberg and Jeroen van den Hoven: https://arxiv.org/pdf/2005.04949.pdf

