Montreal AI Ethics Institute

Democratizing AI ethics literacy

Trustworthiness of Artificial Intelligence

November 11, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Sonali Jain, Shagun Sharma, Manan Luthra, Mehtab Fatima]


Overview: If you are new to the space of AI ethics, this is the paper for you. Offering wide coverage of the issues that enter the debate, the authors explore AI governance and how we can build trustworthy AI.


Introduction

One of the strengths of this paper is that it serves as a productive introduction for those who are new to the AI ethics space. Having touched upon governance (as we have done), the authors explore how we create trustworthy AI. What we mean by ‘trustworthy’ remains open for review, but some aspects must enter the debate. Three of these are highlighted below.

Key Insights

The authors argue that a trustworthy AI system should be compliant in the following three ways:

1. Lawful: The AI system should comply with all applicable rules and laws.

2. Ethical: The AI system should adhere to moral values and ethical principles.

3. Robust: The AI system should be sturdy in both a social and a technical sense.

How AI can be made lawful: A rights approach to AI

The benefit of such an approach is its ability to put humanity at the centre of AI considerations while maintaining respect for human dignity. One example of how this works is the right to freedom from coercion. Focused on preventing manipulation, laws such as the California Law try to ensure that “AI systems must not in any case dominate, force, deceive or manipulate human beings” (p. 908).

The approach becomes even more intriguing when applied to harm. AI systems are often said to be designed not to harm humans. While intuitive, such a claim requires the AI to be aware of humans as well as the context in which it finds itself.

Furthermore, the depth of awareness required depends on which AI system you’re talking about. You can imagine that an AI used in CV screening needs a far less acute sense of other humans than a facial recognition system does (especially at Facebook).

However, a rights-based approach can’t do it all on its own.

Ethical principles in the AI space

The authors rightly explore the importance of privacy, explainability and transparency, staples of building trustworthy AI. However, what jumped out at me was that they do not advocate for complete transparency. Instead, transparency is to be pursued in the name of fueling explainability, while some information should remain accessible only to those in the appropriate positions.

Nevertheless, those in these positions should be both interdisciplinary and diverse.

The importance of universal design

Given AI’s wide-reaching effects, its design should be accessible to all genders, ages and ethnicities. This starts with building diversity into the design team itself, reflecting AI’s all-encompassing nature. Furthermore, the ‘common AI fight’ is shown in the paper’s methods for trustworthy AI, which involve cross-business and cross-sector collaboration. With AI’s impact being both mental and physical, the AI space needs all the collaboration it can get.

Between the lines

While this is a good introduction to the AI space, I would’ve liked a deeper exploration of the practical side of these approaches: for example, how human intervention in AI processes can be beneficial, rather than having this assumed. Nevertheless, for any human intervention to have a chance of success, the correct education would be required. Here, I liked how the paper mentions AI’s potential call for the educational system to be more job-oriented and to reflect the state of the world AI will be creating. While this may not yet be the reality, it will soon become a necessity.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
