
Trustworthiness of Artificial Intelligence

November 11, 2021

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Sonali Jain, Shagun Sharma, Manan Luthra, Mehtab Fatima]


Overview: If you are new to the space of AI Ethics, this is the paper for you. Offering wide coverage of the issues that enter the debate, the authors explore AI governance and how we can build trustworthy AI.


Introduction

One of the strengths of this paper is that it serves as a productive introduction for those who are new to the AI Ethics space. Touching upon governance (as we have done), the authors explore how we create trustworthy AI. What we mean by ā€˜trustworthy’ is open for review, but some aspects must enter the debate. Three of these are highlighted below.

Key Insights

The authors argue that AI should be compliant in the following three ways:

1. Lawful: The AI system should comply with applicable rules and laws.

2. Ethical: It should adhere to moral values and principles.

3. Robust: AI should be sturdy in both a social and a technical sense.

How AI can be made lawful: A rights approach to AI

The benefit of such an approach is its ability to put humanity at the centre of AI considerations while maintaining respect for human dignity. One example of how this works is the right to freedom from coercion. Focused on preventing manipulation, laws such as the California Law try to make sure that ā€œAI systems must not in any case dominate, force, deceive or manipulate human beingsā€ (p. 908).

The approach becomes even more intriguing when applied to harm. Often, AI systems are said to be designed not to harm humans. While intuitive, such a claim requires the AI to be aware of humans as well as of the context in which it finds itself.

Furthermore, the depth of awareness required depends on which AI system you’re talking about. You can imagine that an AI used for CV screening does not need as acute a sense of other humans as a facial recognition system (especially at Facebook).

However, a rights-based approach can’t do it all on its own.

Ethical principles in the AI space

The importance of privacy, explainability and transparency, staples of building trustworthy AI, is rightly explored here. However, what jumped out at me was that the authors did not advocate for complete transparency. Instead, transparency is to be pursued in the name of fueling explainability, but some information should only be accessible to those in the appropriate positions.

Nevertheless, those in these positions should be both interdisciplinary and diverse.

The importance of universal design

Given AI’s wide-reaching effects, its design should be accessible to all genders, ages and ethnicities. This comes from designing the AI with diversity already present in the team, a token of the technology’s all-encompassing nature. Furthermore, the ā€˜common AI fight’ is reflected in the paper’s methods for trustworthy AI, which involve cross-business and cross-sector collaboration. With AI’s impact being both mental and physical, the AI space needs all the collaboration it can get.

Between the lines

While the paper is a good introduction to the AI space, I would have liked a deeper exploration of the practical side of these approaches: for example, how human intervention in AI processes can be beneficial, rather than having this assumed to be so. Nevertheless, for any human intervention to have a chance of success, the right education would be required. Here, I liked how the paper mentions AI’s potential to push the educational system to be more job-oriented and to reflect the state of the world it will be creating. While this may not be the reality yet, it will soon become a necessity.

