Summary contributed by Jana Thompson — AI Designer and board member at Feminist.AI
*Authors of original paper & link at the bottom
Mini-summary: Given the pervasive use of AI in so many spheres of life – healthcare, finance, education, criminal justice – the authors of this paper argue for a design-first approach to potential AI products and services, rather than one focused on algorithms, to address issues that can impact people’s basic human rights. Using the well-established frameworks and methodologies of Design for Values, Value Sensitive Design, and Participatory Design, along with the EU Charter of Fundamental Rights as a basis for which human rights to address, the authors propose a process to bridge the gap between the purely technical work of AI and its social impact.
Full summary:
Many existing frameworks and papers for developing ethical AI focus on documentation, process, and algorithms rather than integrating established design practices. Designers have long considered ethical implications and social impact in their work, developing methodologies such as Value Sensitive Design, Values in Design, and Participatory Design. By shifting the conversation to center the user experience, those creating AI in academic and industry spaces can move their focus from the merely technical to what the authors of the paper term the socio-technical.
Value specification is the translation of abstract values into concrete design requirements. These translations can be mapped with a values hierarchy, as shown below.
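To make the structure of such a hierarchy more concrete, here is a minimal sketch, not taken from the paper, of how a values hierarchy might be represented as a simple tree: an abstract value is specified into norms, and each norm into concrete design requirements. The specific value, norm, and requirement entries below are hypothetical examples.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative values hierarchy: value -> norms -> design requirements.
# The entries are hypothetical and only show the shape of the mapping.

@dataclass
class Norm:
    statement: str
    design_requirements: List[str] = field(default_factory=list)

@dataclass
class Value:
    name: str  # abstract value, e.g. "privacy"
    norms: List[Norm] = field(default_factory=list)

    def describe(self) -> None:
        # Print the hierarchy from abstract value down to design requirements.
        print(f"Value: {self.name}")
        for norm in self.norms:
            print(f"  Norm: {norm.statement}")
            for req in norm.design_requirements:
                print(f"    Design requirement: {req}")

# Hypothetical example: specifying "privacy" for a data-driven service.
privacy = Value(
    name="privacy",
    norms=[
        Norm(
            statement="Individuals control how their personal data is used",
            design_requirements=[
                "Explicit opt-in consent before any data collection",
                "A settings page where users can revoke consent at any time",
            ],
        )
    ],
)

privacy.describe()
```

In practice, the content at each level would be elicited from stakeholders through the conceptual and empirical investigations described next, rather than decided by developers alone.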
The design process the authors propose is based on the methodology of Value Sensitive Design. This methodology integrates three types of investigations:
- conceptual investigations to identify the stakeholders and values implicated by the technological artifact in the considered context and define value specifications
- empirical investigations to explore stakeholders’ needs, views, and experiences in relation to the technology and the values it implies
- technical investigations to implement and evaluate technical solutions that support the values and norms elicited from the conceptual and empirical investigations
These investigations are done by engaging with stakeholders and by grounding the work in the approaches of Participatory Design, where stakeholders are co-designers. Treating stakeholders in this way allows people whose voices are often unheard and undervalued to be included, giving meaning and context to the work that is not obvious in a purely technical approach. For example, if someone proposes to use robots in care-taking situations, many people who would be the recipients of such care would prefer the care of a human instead. The relationship between a caretaker and a patient is an ongoing one that is important to the patient’s long-term care, something a robot is not able to deliver.
One issue is what determines which human rights to use as values in any values hierarchy. Here, the team chooses the four values – human dignity, freedom, equality, and solidarity – that are the basis of the EU Charter of Fundamental Rights. The authors acknowledge that these serve only as an example for this paper, and that other values apply in different situations. For more details on these rights, see the highlighted sections within the paper on the Dignity, Freedom, Equality, and Solidarity Titles of the EU Charter.
One key point the authors make concerns the problem of data determinism:
“A situation in which the data collected about individuals and the statistical inferences drawn from that data are used to judge people on the basis of what the algorithm says they might do/be/become, rather than what the person actually has done and intends to do.” [from Broeders et al., 2017]
Algorithms used in policing and recidivism prediction have been built on historical data presented as an objective basis for making decisions. However, data drawn from societies with historic and ongoing systemic bias against minorities belies the supposed objectivity of the quantitative data used in these algorithms.
With iterative investigations, third-order values, as seen in the diagram below, can be deduced. Third- and higher-order values, as well as their translation into system properties, are strongly context-dependent. These investigations must involve as broad a sample as possible of the people who would be impacted by the technology, so that the outcome is as equitable as possible. This process is analogous to the principles behind agile software development, and this insight can build a bridge between software developers and designers.
In some cases, the outcome of the research process can be that participants decide no technology is needed as a solution at all. The solutionism trap, as defined by Morozov (2013) and expounded further by Selbst et al. (2019), is the assumption by those in charge of a project that the solution must always take the form of an AI. Sometimes an AI can be harmful or simply unnecessary, and stakeholders must accept that as a possibility going forward.
The roadmap in the paper leaves many questions open, such as who the relevant stakeholders in a project are, who determines an appropriate stopping point, and whether there is a need for an external organization to certify the integrity of the design process and to ensure its transparency. Some of these questions can be examined as part of the research process by studying past uses of Design for Values in domains where AI is now being applied. Because it is based on existing design frameworks, the proposed framework can advance due diligence consistent with the United Nations Guiding Principles (UNGP) on Business and Human Rights. Furthermore, the authors believe that while the ethical development of AI systems is the responsibility of the company or organization, there is a strong role for government oversight. Finally, designing for human rights is not an impediment to innovation, but a necessary step for achieving human and AI interactions consistent with the moral and social values embodied by human rights.
Papers referenced:
Broeders D, Schrijvers E and Hirsch Ballin E (2017) Big Data and Security Policies: Serving Security, Protecting Freedom. Available at: https://english.wrr.nl/binaries/wrr-eng/documents/policy-briefs/2017/01/31/big-data-and-security-policies-serving-security-protecting-freedom/WRR_P86_BigDataAndSecurityPolicies.pdf
Morozov E (2013) To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems That Don’t Exist. Penguin UK.
Selbst AD, Boyd D and Friedler SA, et al. (2019) Fairness and Abstraction in Sociotechnical Systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency – FAT* ’19, New York, NY, 2019, pp. 59-68. DOI: 10.1145/3287560.3287598
Original paper by Evgeni Aizenberg and Jeroen van den Hoven: https://arxiv.org/pdf/2005.04949.pdf