Summary contributed by Alexandrine Royer, administrative coordinator at The Foundation for Genocide Education.
*Authors of full paper & link at the bottom
Mini-summary: Although it may not always seem evident, the development of AI technologies is part of the patterns of power that characterize our intellectual, political, economic and social worlds. Recognizing and identifying these patterns is essential to ensure that those at the bottom of society are not disproportionately affected by the adverse effects of technological innovation. To prevent harm to vulnerable groups and protect them, the authors recommend adopting a critical, decolonial approach to AI in order to gain better foresight and ethical judgement on advances in the field. They offer three tactics that can lead to the creation of decolonial artificial intelligence: creating a critical technical practice of AI, seeking reverse tutelage and reverse pedagogies, and renewing affective and political communities.
Full summary:
After repeated cases of algorithms gone awry, AI’s potential, and the possibilities for its misuse, have come under the scrutiny of governments, industries and members of civil society. The authors make clear that when evaluating the aims and applications of AI, we often fail to recognize and question the asymmetrical power dynamics that underlie both the technology and the networks of systems and institutions to which it is linked. Our failure to acknowledge these power relations undermines our ability to identify and prevent future harms arising from these systems. Traditional ethical standards for human-subject research in the sciences often do not consider structural inequities, such as systemic racism. This is particularly alarming because AI can ingest, perpetuate and legitimize inequalities on a scale and scope that no technology has before. It follows that we must rethink our tools for the evaluation and creation of socially beneficial technologies.
For Mohamed, Png & Isaac, one way forward is to adopt a critical science-based approach, grounded in decolonial theory, to unmask the values, cultures and power dynamics at play between stakeholders and AI technologies. Critical science and decolonial theory, used in combination, can pinpoint the limitations of an AI system and its potential ethical and social ramifications. These approaches offer a “sociotechnical foresight tool” for the development of ethical AI.
AI, like all technologies, did not emerge out of an ahistorical and isolated scientific bubble. The power dynamics between the world’s advantaged and disadvantaged, instilled during the colonial era, continue to resurface in the contemporary design, development and use of AI technologies. The authors point to numerous instances where colonial practices of oppression, exploitation and dispossession are present in AI systems. They refer to these cases as algorithmic coloniality.
The examples brought up by the authors touch on the use of AI systems, the labour market behind them, and the locations where they are tested. The authors point to the biases against certain groups in algorithmic decision-making systems in US law enforcement. They refer to the unethical working conditions of the ghost workers who do data labelling and annotation. The beta-testing and fine-tuning of AI systems are also part of a phenomenon known as “ethics dumping”, whereby AI developers purposely test and deploy their technologies in countries with weaker data protection laws. Finally, the geopolitical imbalance in AI governance policies and the paternalism of technology-focused international social development projects contribute to patterns of global dependency. All in all, it is made evident that AI is both shaped and supported by colonial structures of power.
Adopting a decolonial framework would allow for an analysis of AI technologies within a socio-political and global context and help address these types of abuses. Recognizing this bigger context can contribute to the design of more inclusive and well-adapted mechanisms of oversight for AI systems. The authors list three tactics for future decolonial AI design: a critical technical practice of AI, the establishment of reciprocal engagements and reverse pedagogies, and the renewal of affective and political communities.
Critical technical practice is a middle ground between the technical work of developing new AI algorithms and the critical work of questioning taken-for-granted assumptions. Work on AI fairness, safety, equity, decision-making and resistance all aims to create more context-aware technological development. Reciprocal engagements and reverse pedagogies address the possibilities of knowledge exchange between AI researchers and stakeholders; they can take the form of intercultural dialogue, data documentation and meaningful community-engaged research design. The renewal of affective and political communities refers to the creation of new types of solidarity-based communities that have the power to address, contest and redress emerging challenges in tech.
As AI will have far-reaching impacts across the social strata, a multiplicity of intellectual perspectives must be part of its development and application. Critical science and decolonial theory, along with their associated tactics, are useful tools for identifying and predicting the more nefarious uses of AI. Historical hindsight is always beneficial to technological foresight. The challenge remains to find concrete avenues for marginalized groups to have real influence on decision-making. Those who have the most to lose from AI systems are often acutely aware of the social inequities they live with and face. While it is sometimes tempting to get lost in semantics and academic jargon when discussing issues in AI ethics, we must focus our efforts on making AI development, and the mechanisms for its criticism, truly legible and accessible to all members of society. This includes rendering decolonial theory and its associated concepts comprehensible to AI developers and members of the tech industry.
Original paper by Shakir Mohamed, Marie-Therese Png, William Isaac: https://arxiv.org/pdf/2007.04068.pdf