Montreal AI Ethics Institute

Democratizing AI ethics literacy


Trustworthiness of Artificial Intelligence

November 11, 2021

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Sonali Jain, Shagun Sharma, Manan Luthra, Mehtab Fatima]


Overview: If you are new to the space of AI Ethics, this is the paper for you. The authors offer wide coverage of the issues that enter into the debate, exploring AI governance and how we build trustworthy AI.


Introduction

One of the strengths of this paper is that it serves as a productive introduction for those who are new to the AI Ethics space. Touching upon governance (as we have done), the authors explore how we create trustworthy AI. What we mean by 'trustworthy' remains open for debate, but some aspects must enter the discussion. Three of these are highlighted below.

Key Insights

The authors argue that AI should be compliant in the following three ways:

1. Lawful: The AI system should comply with applicable rules and laws.

2. Ethical: It should adhere to moral values and principles.

3. Robust: AI should be sturdy in both a social and a technical sense.

How AI can be made lawful: A rights approach to AI

The benefit of such an approach is its ability to put humanity at the centre of AI considerations while maintaining respect for human dignity. One example of how this works is the right to freedom from coercion. Focused on preventing manipulation, laws such as the California Law try to ensure that "AI systems must not in any case dominate, force, deceive or manipulate human beings" (p. 908).

The approach becomes even more intriguing when applied to harm. Often, AI systems are said to be designed not to harm humans. While this is an intuitive claim, such an approach does require the AI to be aware of humans as well as the context in which it finds itself.

Furthermore, the depth of awareness required depends on which AI system you're talking about. You can imagine that an AI used for CV screening does not need as acute a sense of other humans as a facial recognition system does (especially at Facebook).

However, a rights-based approach can't do it all on its own.

Ethical principles in the AI space

The importance of privacy, explainability and transparency is rightly explored here; these are staples in building trustworthy AI. However, what jumped out at me was how the authors did not advocate for complete transparency. Instead, transparency is to be pursued in the name of fueling explainability, but some information should only be accessible to those in the appropriate positions.

Nevertheless, those in these positions should be both interdisciplinary and diverse.
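
To make the point about partial transparency concrete, here is a minimal illustrative sketch (my own, not from the paper) of how a system might generate one explanation but disclose it at different levels of detail depending on the reader's role. It assumes a scikit-learn classifier, uses permutation importance as a stand-in for whatever explainability method a real deployment would choose, and the role names ("auditor", "end_user") and the explanation_for helper are hypothetical.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a simple classifier on a built-in dataset.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Model-agnostic explanation: how much does shuffling each feature hurt performance?
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)

def explanation_for(role, top_k=5):
    # Hypothetical access policy: auditors see the full feature ranking,
    # while end users see only the top drivers of the model's behaviour.
    if role == "auditor":
        return ranked
    return ranked[:top_k]

print(explanation_for("end_user"))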

The importance of universal design

Given AI's wide-reaching effects, its design should be accessible to all genders, ages and ethnicities. This starts with building diversity into the design team itself, a token of AI's all-encompassing nature. Furthermore, the 'common AI fight' is reflected in the paper's methods for trustworthy AI, which involve cross-business and cross-sector collaboration. With AI's impact being both mental and physical, the AI space needs all the collaboration it can get.

Between the lines

While the paper is a good introduction to the AI Ethics space, I would've liked a deeper exploration of the practical side of these approaches: for example, how human intervention in AI processes can be beneficial, rather than having it assumed to be so. Nevertheless, should any human intervention have a chance of success, the right education would be required. Here, I liked how the paper mentioned AI's potential to push the educational system to become more job-oriented and to reflect the state of the world it will be creating. While this may not yet be the reality, it will soon become a necessity.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

