Montreal AI Ethics Institute


Trustworthiness of Artificial Intelligence

November 11, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Sonali Jain, Shagun Sharma, Manan Luthra, Mehtab Fatima]


Overview: If you are new to the field of AI ethics, this is the paper for you. The authors offer wide coverage of the issues that enter the debate, exploring AI governance and how we can build trustworthy AI.


Introduction

One of the strengths of this paper is that it serves as a productive introduction for those who are new to the AI ethics space. Alongside governance, the authors explore how we create trustworthy AI. What we mean by ‘trustworthy’ is open for review, but some aspects must enter the debate. Three of these are highlighted below.

Key Insights

The authors argue that AI should be compliant in the following three ways:

1. Lawful: The AI system should comply with applicable rules and laws.

2. Ethical: It should adhere to moral values and ethical principles.

3. Robust: The AI should be sturdy in both a social and a technical sense.

How AI can be made lawful: A rights approach to AI

The benefit of such an approach is its ability to put humanity at the centre of AI considerations while maintaining respect for human dignity. One example of how this works is the right to freedom from coercion. Focused on preventing manipulation, laws such as the California Law try to ensure that “AI systems must not in any case dominate, force, deceive or manipulate human beings” (p. 908).

The approach becomes even more intriguing when applied to harm. AI systems are often said to be designed not to harm humans. While intuitive, such a claim requires the AI to be aware of humans as well as the context in which it finds itself.

Furthermore, the depth of awareness required depends on the AI system in question. You can imagine that the AI used in CV screening does not need as acute a sense of other humans as facial recognition does (especially at Facebook).

However, a rights-based approach can’t do it all on its own.

Ethical principles in the AI space

The importance of privacy, explainability and transparency is rightly explored here; these are staples of building trustworthy AI. However, what jumped out at me was that the authors did not advocate for complete transparency. Instead, transparency is to be pursued in the name of fuelling explainability, while some information should be accessible only to those in the appropriate positions.

Nevertheless, those in these positions should be both interdisciplinary and diverse.

The importance of universal design

Given AI’s wide-reaching effects, its design should be accessible to all genders, ages and ethnicities. This comes from designing the AI with diversity already on the team, reflecting the technology’s all-encompassing nature. Furthermore, the ‘common AI fight’ is shown in the paper’s methods for trustworthy AI, which involve cross-business and cross-sector collaboration. With AI’s impact being both mental and physical, the AI space needs all the collaboration it can get.

Between the lines

While a good introduction to the AI space, I would have liked a deeper exploration of the practical side of these approaches: for example, how human intervention in AI processes can be beneficial, rather than having this assumed to be so. Nevertheless, for any human intervention to have a chance of success, the right education is required. Here, I liked how the paper mentioned AI’s potential call for the educational system to become more job-oriented and to reflect the state of the world AI will be creating. While this may not yet be the reality, it will soon become a necessity.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
