Montreal AI Ethics Institute

Democratizing AI ethics literacy
Research summary: Designing for Human Rights in AI

August 26, 2020

Summary contributed by Jana Thompson — AI Designer and board member at Feminist.AI

*Authors of original paper & link at the bottom


Mini-summary: Given the pervasive use of AI in so many spheres of life – healthcare, finance, education, criminal justice – the authors of this paper argue for a design-first approach to potential AI products and services, rather than one focused on algorithms alone, to address issues that can impact people’s basic human rights. Using the well-established frameworks and methodologies of Design for Values, Value Sensitive Design, and Participatory Design, along with the EU Charter of Fundamental Rights as a basis for which human rights to address, the authors propose a process to bridge the gap between the purely technical work of AI and its social impact.

Full summary:

Many frameworks and papers for developing ethical AI focus on documentation, process, and algorithms rather than integrating existing design practices. Designers have long considered ethical implications and social impact in what they create, developing methodologies such as Value Sensitive Design, Values in Design, and Participatory Design. By shifting the conversation to, and centering, the user experience, those creating AI in academic and industry spaces can move their focus from the merely technical to what the authors of the paper term the socio-technical.

Value specification is the translation of abstract values to design requirements. Those can be mapped with a values hierarchy as shown below. 

Figure 1. Values hierarchy (a) visually maps the context-dependent specification of values into norms and socio-technical design requirements. For instance, the value privacy (b) may in a certain context be interpreted by stakeholders as the ability of individuals to provide informed consent to the processing of their personal data, its confidentiality, and the ability to erase personal data from the system. In turn, each of these properties is linked to a design requirement to be implemented.
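The values hierarchy in Figure 1 can be pictured as a simple nested mapping from a value to its context-dependent norms, and from each norm to concrete design requirements. The sketch below follows the privacy example from the caption; the specific norm and requirement strings are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a values hierarchy: value -> norms -> design requirements,
# following the privacy example in Figure 1. The concrete requirement strings
# below are hypothetical, for illustration only.

values_hierarchy = {
    "privacy": {
        "informed consent": [
            "present a plain-language consent dialog before data collection",
        ],
        "confidentiality": [
            "encrypt personal data at rest and in transit",
        ],
        "erasure": [
            "provide a 'delete my data' endpoint that purges stored records",
        ],
    },
}

def design_requirements(value: str) -> list[str]:
    """Flatten a value's context-dependent norms into its design requirements."""
    return [req for reqs in values_hierarchy.get(value, {}).values() for req in reqs]

print(design_requirements("privacy"))
```

The point of the structure is that the top level (the value) is abstract and stable, while the lower levels are renegotiated per context through stakeholder engagement.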

The design process the authors propose is based on the methodology of Value Sensitive Design. This methodology integrates three types of investigations: 

  • conceptual investigations to identify the stakeholders and values implicated by the technological artifact in the considered context and define value specifications
  • empirical investigations to explore stakeholders’ needs, views, and experiences in relation to the technology and the values it implies
  • technical investigations to implement and evaluate technical solutions that support the values and norms elicited from the conceptual and empirical investigations

These investigations are done by engaging with stakeholders and by centering this work in the approaches of Participatory Design, where stakeholders are co-designers. Treating stakeholders this way allows people whose voices are often unheard and undervalued to give meaning and context to the work that isn’t obvious in a purely technical approach. For example, if someone proposes to use robots in care-taking situations, many people who would be the recipients of such care would prefer the care of a human instead. The relationship between a caretaker and a patient is an ongoing one that is important to the patient’s long-term care, something a robot is not able to deliver.

One issue is what determines the definition of human rights as values to use in any values hierarchy. Here, the authors choose the four values – human dignity, freedom, equality, and solidarity – that are the basis of the EU Charter of Fundamental Rights. The authors acknowledge that this is only an example for this paper, and that other values apply in different situations. For more details on these rights, see the highlighted sections within the paper on the Dignity, Freedom, Equality, and Solidarity Titles of the EU Charter.

One key point the authors make concerns the problem of data determinism:

“A situation in which the data collected about individuals and the statistical inferences drawn from that data are used to judge people on the basis of what the algorithm says they might do/be/become, rather than what the person actually has done and intends to do.” [from Broeders et al. 2017]

Algorithms in policing and recidivism have been built based on historical data that claims to be an objective basis for making decisions. However, data from societies with historic and ongoing systemic bias against minorities belies the supposed objectivity of the quantitative data used in these algorithms.

With iterative investigations, third-order values, as seen in the diagram below, can be deduced. Third- and higher-order values, as well as their translation into system properties, are strongly context-dependent. These investigations must involve as broad a sample of the people who would be impacted by the technology as possible, so that the outcome is as equitable as possible. This process is analogous to the principles behind agile software development, a key insight that can build a bridge between software developers and designers.

Figure 2. Human rights as top-level requirements in the design roadmap. The dashed lines depict relations between individual values. Further specification of values into context-dependent design requirements is achieved through stakeholder engagement, iterating through conceptual, empirical, and technical investigations.

In some cases, the outcome of the research process can be that participants decide no technology is necessary as a solution. The solutionism trap, as defined by Morozov (2013) and expounded further by Selbst et al. (2019), is the assumption by those in charge of a project that the form of the solution must always be an AI. Sometimes an AI can be harmful or simply unnecessary, and stakeholders must accept that as a possibility going forward.

The roadmap in the paper leaves many open questions, such as who the relevant stakeholders in a project are, who determines an appropriate stopping point, and whether there is a need for an external organization to certify the integrity of the design process and to ensure its transparency. Some of these questions can be examined by studying past uses of Design for Values in domains where AI is now being applied. By basing their methodology on existing design frameworks, the authors’ framework can advance due diligence consistent with the United Nations Guiding Principles (UNGP) on Business and Human Rights. Furthermore, the authors believe that while the ethical development of AI systems is the responsibility of the company or organization, there is a strong role for government oversight. Finally, designing for human rights is not an impediment to innovation, but a necessary step for achieving human–AI interactions consistent with the moral and social values embodied by human rights.

Papers referenced: 

Broeders D, Schrijvers E and Hirsch Ballin E (2017) Big Data and Security Policies: Serving Security, Protecting Freedom. Available at: https://english.wrr.nl/binaries/wrr-eng/documents/policy-briefs/2017/01/31/big-data-and-security-policies-serving-security-protecting-freedom/WRR_P86_BigDataAndSecurityPolicies.pdf

Morozov E (2013) To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems That Don’t Exist. Penguin UK.

Selbst AD, Boyd D, Friedler SA, et al. (2019) Fairness and Abstraction in Sociotechnical Systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency – FAT* ’19, New York, NY, 2019, pp. 59-68. DOI: 10.1145/3287560.3287598


Original paper by Evgeni Aizenberg and Jeroen van den Hoven: https://arxiv.org/pdf/2005.04949.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
