Summary contributed by Abhishek Gupta (@atg_abhishek), Founder and Principal Researcher of the Montreal AI Ethics Institute. His book Actionable AI Ethics will be published in 2021.
This piece is part of a series of paper summaries that you can also find on his website.
*Link to original paper + authors at the bottom
Overview: A comprehensive report on how people with disabilities are excluded from the design and development of AI systems. The authors situate the discussion within the context of existing research and provide concrete recommendations on how AI practitioners can do better.
Key questions
- What can we do to draw from disability activism and scholarship to ensure protections for those who are deemed “outside the norm” by AI systems?
- Given that AI shapes the world, how do we account for the harms that this kind of exclusion causes, and how can we recognize and address them proactively?
- In the service of disabled people, how can we lean on existing legislation to fight for accountability?
- Can we carry over lessons from advocacy for rights and design changes in the physical world into the digital realm?
- How can we assess whether the changes being made to a system are actually benefitting people?
- What accompanying changes at a systemic level, in addition to technical interventions, can help with this?
Terms and concepts from disability studies
- Disabled people are heterogeneous. One of the most important considerations is to account for intersectionality while avoiding blanket methodologies applied to everyone who might fall into the same “category” based on their expressed disability.
- We should have opt-out mechanisms so that classifications, which are sometimes erroneous and can have severe implications, are not imposed on people without their consent.
- An example that articulates the problem well is how the LGBTQ community fought hard to have homosexuality removed as a condition from the DSM; prior to its removal, the enshrinement of their identity as a disorder in formal documentation had been used to justify mistreatment.
Models of disability
- Medical definitions of disability, which frame disabled people as falling outside of what is deemed a normal body, risk further entrenching stigmatization and encouraging the exploitation of individuals.
- The social model of disability instead looks at how the environment, both built and social, leads to disability rather than being something located in the body of the individual.
- The key insight with this is that it places the onus of interventions at a systemic level rather than placing it all on the individual.
- An important consideration the paper highlights is how African Americans, LGBTQ people, and women have at varying times been described as disabled, leading to significant marginalization; the social model therefore offers a more appropriate lens for thinking about disability.
- Disability is also not static and can wax and wane over time even within the same body, something else that we must keep in mind when we build AI systems.
Key terms
- The paper provides definitions like “non-disabled” that help recenter the conversation in an empowering way, rather than using phraseology like “able-bodied” which marginalizes the concerns of the disabled community.
- One thing that particularly caught my attention was the reframing of the phrase “assistive technology”: all technologies are meant to assist us, but labeling only those used by disabled people as assistive carries a ring of paternalism and advances a technological fix over community education, support, and social change.
- This is also why we can’t simply add disability as another axis in the bias discussion; instead, we should draw on lived experience so that the terms used, and the people they refer to, are adequately represented in the system.
- Doing so will also place appropriate emphasis on the non-technical measures required, in addition to technology, to meet the needs of these communities.
Discrepancies in development and deployment
- Another consideration when thinking about technology as a vector for change is that access to it is highly stratified, with inequities arising from both financial and distribution barriers.
- Problems also arise when the community’s needs are used as a pretext for developing solutions that initially serve them but are abandoned, once the product finds success in the wider market, in pursuit of bigger markets and profits.
- It is also wrong to use the community as a testbed for new technologies, ironing out the kinks before a wider rollout, without adequate consideration of and participation from the people in that community.
- Many ethical decisions may already have been baked in, in terms of the limits the technology imposes, thereby stripping agency from its users.
Design considerations
- For example, AI transcription services are meant to be standardized, implementing broad-based gestures and vocabulary, whereas human transcribers often tailor their communication to be more personal and connected to the individual.
- This personalization might be lost when automated systems are interposed in such fields.
Biases at the intersection of AI and disability
- People with disabilities are affected non-uniformly across identity groups in the AI-bias debate, and it is problematic that most current discussions on AI bias don’t take this into account.
- For example, in content moderation, the paper points to how content containing disability-related terms is marked as toxic more frequently than content without them. This has severe implications for people’s ability to freely express themselves, gather, and discuss issues online (a toy audit sketch at the end of this section illustrates the failure mode).
- When people from the disabled community are excluded, there is a severe risk of misunderstanding and misrepresenting the issues, to the point that decisions on how to address bias in AI systems create more harm than good.
- An example that highlights this problem is the 2018 Arizona Uber incident, in which the self-driving system failed to properly recognize the pedestrian the car struck, partly because it was confused by the bicycle she had with her. This suggests that people using wheelchairs or scooters might also fail to be adequately recognized and face more accidents with self-driving vehicles around.
“Indeed, the way in which “disability” resists fitting into neat arrangements points to bigger questions about how other identity categories, such as race, sexual orientation, and gender, are (mis)treated as essential, fixed classifications in the logics of AI systems, and in much of the research examining AI and bias.”
- Another particularly poignant point made in the paper is that disability is defined less by the physical and mental condition of the individual than by how society responds to it.
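To make the content-moderation point above concrete, here is a minimal, self-contained sketch of the kind of audit that surfaces this bias: score template sentences that differ only in a disability-related phrase and compare the toxicity scores. The keyword model, its weights, and the phrase lists below are invented purely for illustration; they are not the paper’s method or any real moderation system.

```python
# Toy audit sketch (not the paper's method): compare "toxicity" scores for
# sentences that differ only in a disability-related phrase. The keyword
# model and weights below are invented to illustrate the failure mode in
# which identity terms themselves become toxicity signals.

FLAGGED_TERMS = {"deaf": 0.6, "blind": 0.6, "autistic": 0.7, "stupid": 0.9}

def toxicity_score(text: str) -> float:
    """Stand-in for the moderation model under audit."""
    words = text.lower().rstrip(".").split()
    return max((FLAGGED_TERMS.get(w, 0.0) for w in words), default=0.0)

def mean_score(phrases, template="I am a {} person."):
    return sum(toxicity_score(template.format(p)) for p in phrases) / len(phrases)

neutral = ["tall", "young", "quiet"]
disability = ["deaf", "blind", "autistic"]
gap = mean_score(disability) - mean_score(neutral)
print(f"Toxicity gap for benign self-descriptions: {gap:+.2f}")
# A consistently positive gap means ordinary self-descriptions by
# disabled people are more likely to be filtered or down-ranked.
```

In a real audit, `toxicity_score` would wrap the moderation model under test; a consistently positive gap indicates that benign self-descriptions by disabled people are more likely to be suppressed.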
What does normal mean to AI?
- We learned above that at varying times, different groups have been designated as disabled leading to unfortunate consequences.
- But in the case of automated systems, the harms are easily magnified.
- Especially when the systems are making significant decisions about someone’s life, we don’t want to have rigid, faulty categories that can jeopardize the safety of individuals.
- An example from the Deaf community: rather than seeing technology as a way to bring them over to the hearing world, many locate the failure in hearing people’s unwillingness to learn sign language to communicate with them.
- With invigilation (proctoring) systems relying on emotion and face recognition, especially during the 2020 pandemic, there are visceral risks to people’s ability to participate in activities because of the system’s narrow notion of normal.
Reverse Turing Tests causing harm
- A reverse Turing Test is one where we are asked to prove our humanity to the machine, often for security purposes.
- But most of the time it is looking for a specific kind of human, one that falls within its definition of normal. It ignores the possibility that people with different conditions might be slower to click on things or might have speech differences, which can flag them as anomalies unnecessarily (see the toy sketch after this list).
- While not a Turing Test, Amazon uses monitoring software in its warehouses that is meant to extract as much labor as possible from its workers; this leads to injuries and places an even greater burden on those with disabilities.
- Sometimes, signals captured in the background, such as mouse movements and click actions on a webpage, can be used surreptitiously to infer whether someone has a disability, which is not only nefarious but is done without the consent of the users visiting that webpage.
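As a concrete illustration of the point about reverse Turing Tests, here is a toy sketch of an anomaly check that treats response times outside a “normal” window as suspicious. The thresholds and user profiles are invented for illustration and do not reflect any real CAPTCHA vendor’s logic; the point is only that a window learned from majority users systematically flags people who interact more slowly, for example via assistive input devices.

```python
from dataclasses import dataclass

@dataclass
class ChallengeAttempt:
    user_id: str
    response_time_s: float  # seconds taken to complete the challenge

# "Normal" window, implicitly learned from a majority, non-disabled population.
MIN_EXPECTED_S = 0.8   # faster than this looks scripted
MAX_EXPECTED_S = 12.0  # slower than this looks "non-human"

def looks_human(attempt: ChallengeAttempt) -> bool:
    """Flag anything outside the expected timing window as anomalous."""
    return MIN_EXPECTED_S <= attempt.response_time_s <= MAX_EXPECTED_S

attempts = [
    ChallengeAttempt("typical_user", 4.2),
    ChallengeAttempt("switch_input_user", 27.5),  # slower assistive input method
]
for a in attempts:
    verdict = "passes" if looks_human(a) else "flagged as anomalous"
    print(f"{a.user_id}: {verdict}")
# The second user is locked out not because they are not human, but because
# the system's definition of "human" is the majority's timing profile.
```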
Can additional data help?
- According to the examples in the paper, this might only reinforce the normative model at the core of the AI system and further exacerbate the problem.
- Even when more data is included, there is a problem that it not only fails to adequately represent people but also intrudes on privacy through unnecessary surveillance in the interest of capturing more data.
- Moreover, the risk is pushed onto historically marginalized groups while the benefits accrue to already powerful actors that stand to profit from deploying such systems.
- Often those who have rare conditions can’t be sufficiently protected when their data is collected in the interests of making more inclusive AI systems.
- Additionally, there is no guarantee that such data will be kept out of the hands of insurance companies or other actors who stand to benefit from this information by differentially charging those people.
- As is commonplace in large-scale data collection for AI, clickworkers are often employed to label data with little or no guidance on what disability means; they may label data erroneously in ways that reinforce the normative model and further entrench discrepancies in how people with disabilities are treated by AI systems.
- Regional disparities, as I’ve pointed out with my co-author Victoria in an MIT Technology Review article, are also important to consider. In one example, differences in how autism is expressed and perceived in children in Bangladesh vs. America produced differential results, skewing the kind of support that was provided.
Work and Disability in the context of AI
- In systems used to automatically scrape data and make decisions about candidates in hiring, even when the companies manufacturing these systems claim they adequately address biases on the disability front, there is no guarantee that harm will be avoided once the system’s determination is provided to the employer making the decision.
- In fact, this can exacerbate the problem by giving decision-makers even more data (perhaps surfacing an undisclosed disability) that can be used to discriminate against individuals, while the opacity of the system simultaneously strips away their ability to bring suits against employers when they are discriminated against.
- This weakens protections such as those offered by the ADA in the United States.
- Some companies that receive government subsidies to employ people with disabilities use tactics like compensating workers with gift cards instead of money, creating unequal working conditions and structures that aggravate harm in the workplace.
Are there accountability measures?
The credo of this community is “Nothing about us without us”
- This credo is notably expressed in grant proposals, for example, but organizations like the NSF often don’t follow up on whether it has been adhered to after the funding has been provided.
- Without accountability and follow-up, we risk creating a false sense of comfort while real harms remain unmitigated in the field.
Other ethical concerns
- People might also forget that such technological fixes create additional concerns, such as compromising bystander privacy, for example when a vision system is used to aid someone with a visual impairment.
- There is also a lock-in of corporate interests: dependence is created on systems that are closed off to scrutiny and modification, which limits people’s ability to fix them when they break or adapt them to better meet their needs.
Key challenges
- Given how these technologies are built in a very proprietary manner, it is hard at the moment to see how we can move from mere inclusion to agency and empowerment of individuals.
- Pointing in particular to the civil rights movement, the paper concludes on a powerful note: we lose when we let others speak for us.
Conclusion
The paper surfaces many facets, rarely discussed in the AI community, of what people with disabilities experience when they interact with AI systems. It situates these concerns within the wider disability rights movement and years of scholarship and activism in the space. It also provides guidance on how people designing and developing these systems can do better at meeting the needs of those with disabilities.
What does this mean for Actionable AI Ethics?
- As we build AI systems, ethics concerns should extend beyond the familiar axes of racial and gender bias to consider intersectionality with traditionally excluded aspects like disability, which raise unmitigated concerns even when the traditional vectors are addressed.
- Even when technical systems are built with the explicit needs of people with disabilities in mind, that doesn’t mean ethics concerns like bias and privacy are automatically managed. They still require deliberation and careful consideration, especially active efforts to include disabled people in the design and development process.
Questions that I am exploring
- What barriers have prevented AI practitioners from centring those with disabilities in system design and development?
- How do we better educate AI practitioners to learn from the work of the disability community to build products and services that respect the concerns raised in this paper?
Potential further reading
- Artificial intelligence and disability: too much promise, yet too little substance?
- Chaos theory and artificial intelligence may provide insights on disability outcomes
Original paper by Meredith Whittaker, Meryl Alper, Cynthia L. Bennett, Sara Hendren, Liz Kaziunas, Mara Mills, Meredith Ringel Morris, Joy Rankin, Emily Rogers, Marcel Salas, Sarah Myers West: https://ainowinstitute.org/disabilitybiasai-2019.pdf