🔬 Research summary by Connor Wright, our Partnerships Manager.
[Original paper by Sonali Jain, Shagun Sharma, Manan Luthra, Mehtab Fatima]
Overview: If you are new to the space of AI Ethics, this is the paper for you. Offering wide coverage of the issues that enter into the debate, the authors explore AI governance and how we build trustworthy AI.
One of the paper's strengths is how it serves as a productive introduction for those new to the AI Ethics space. Having touched upon governance (as we have done), the authors explore how we create trustworthy AI. What we mean by ‘trustworthy’ is open for review, but some aspects must enter the debate. Three of these are highlighted below.
The authors argue that AI should be compliant in the following three ways:
1. Lawful: The AI system should be compliant with various rules and laws.
2. Ethical: The AI system should adhere to moral values and ethical principles.
3. Robust: The AI system should be sturdy in both a social and a technical sense.
How AI can be made lawful: A rights approach to AI
The benefit of such an approach is its ability to put humanity at the centre of AI considerations while maintaining respect for human dignity. One example of how this works is the right to freedom from coercion. Focused on preventing manipulation, laws such as the California Law aim to ensure that “AI systems must not in any case dominate, force, deceive or manipulate human beings” (p. 908).
The approach becomes even more intriguing when applied to harm. Often, AI systems are said to be designed not to harm humans. While this claim is intuitive, such an approach requires the AI to be aware of humans and of the context in which it finds itself.
Furthermore, the depth of awareness required depends on which AI system you’re talking about. You can imagine that the AI used in CV screening does not need as acute a sense of other humans as facial recognition does (especially at Facebook).
However, a rights-based approach can’t do it all on its own.
Ethical principles in the AI space
The importance of privacy, explainability and transparency was rightly explored here; these are staples of building trustworthy AI. However, what jumped out at me was how the authors did not advocate for complete transparency. Instead, transparency is to be pursued in the name of fueling explainability, but some information should only be accessible to those in the appropriate positions.
Nevertheless, those in these positions should be both interdisciplinary and diverse.
The importance of universal design
Given AI’s wide-reaching effects, its design should be accessible to all genders, ages and ethnicities. This starts with building diversity into the design team itself, reflecting AI’s all-encompassing nature. Furthermore, the ‘common AI fight’ is shown in the paper’s methods for trustworthy AI, which involve cross-business and cross-sector collaboration. With AI’s impact being both mental and physical, the AI space needs all the collaboration it can get.
Between the lines
While a good introduction to the AI space, I would’ve liked a deeper exploration of the practical side of these approaches: for example, how human intervention in AI processes can be beneficial, rather than having it assumed to be so. Nevertheless, should any human intervention have a chance of success, the correct education would be required. Here, I liked how the paper mentioned AI’s potential call for the educational system to become more job-oriented and reflect the state of the world it will be creating. While this may not yet be the reality, it will soon become a necessity.