Research summary: Algorithmic Accountability

August 26, 2020

Summary contributed by Falaah Arif Khan, our Artist in Residence. She creates art exploring tech, including comics related to AI.

Link to full paper + authors listed at the bottom.


Mini-summary: Given that algorithms are being used in high-stakes situations and that public trust is essential for their adoption, we must figure out how to make algorithms accountable. We can do this by improving the trustworthiness of algorithms, moving away from negative screening and toward pushing for algorithms that impact society positively. This has varying implications for three primary groups of stakeholders: practitioners, regulators, and the public sector.

Full summary:

Public trust in algorithmic decision-making systems is at an all-time low. In the last six months, we’ve seen three major companies abandon their pursuit of general-purpose facial recognition software and, as of this week, a government scrap the results of an exam-grading algorithm. Given the immense economic interest in, and the rapid adoption of, this (arguably) budding technology, AI fiascos have become commonplace in recent times. What is truly groundbreaking this time around is the power of the public to force the hands of powerful companies and governments, pushing them to disavow the technology in settings where its performance has been sub-par.

Against this social backdrop, it serves us to remember some of the guiding principles for creating accountable algorithms laid out by Hetan Shah in his 2018 op-ed. In the piece, Shah argues that it is crucial to build public trust in a new technology before pushing for its widespread adoption. He contrasts the speedy adoption of stem cell technology, which benefited from a great deal of careful public dialogue during its development, with the paralyzing impact of public pushback on the advancement of genetic modification technologies. He then underscores that the best way to build trust is to improve the trustworthiness of algorithms, letting trust follow implicitly, rather than to explicitly campaign for it.

Given the gravity of the situation, there needs to be a coordinated effort by all the different stakeholders, namely practitioners (research labs, industry), the public sector, and policy-makers/regulators.

Practitioners need to be more conscientious about the creation of open-source benchmarks, improving the diversity and representation of datasets. This is critical since benchmarks steer the research direction of entire communities, so bias needs to be eradicated at its root. Shah also recommends piloting any model on multiple datasets before it is deployed. In settings where in-house expertise is lacking, companies could engage bodies such as the Algorithmic Justice League to audit models for bias. Another approach is to monitor for differential impacts, especially on vulnerable demographics, using causal models and counterfactuals (a minimal sketch of such a check follows below). While Shah concedes that transparency helps only in a limited capacity, he recommends publishing models along with the associated data and metadata.
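
To make the monitoring recommendation concrete, here is a minimal sketch of a differential-impact check in Python. It computes per-group selection rates and the ratio of the lowest to the highest rate (the “four-fifths rule” heuristic). The column names and toy data are illustrative assumptions, not from Shah’s paper, and a full causal or counterfactual audit would go well beyond this simple screen.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Positive-prediction (selection) rate for each demographic group."""
    return df.groupby(group_col)[pred_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest.
    Values well below 0.8 are a common red flag (the 'four-fifths rule')."""
    return rates.min() / rates.max()

# Illustrative toy data: `group` is a demographic attribute,
# `pred` is the model's binary decision for each individual.
audit_df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "pred":  [1,   1,   0,   1,   0,   0,   0,   0],
})

rates = selection_rates(audit_df, "group", "pred")
print(rates)                          # per-group selection rates
print(disparate_impact_ratio(rates))  # 0.3 here: flag for review
```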

In addition to technical solutions, Shah outlines some important process changes that companies can make, such as improving diversity in the workforce, conducting ethics training, and enforcing a professional code of conduct.

Regulation also has a key role to play in building trust. Following the example of the EU’s GDPR, provisions such as the right to challenge an unfair decision made by an algorithmic system, and the right to redress, would go a long way toward confirming a commitment to mitigating the negative effects of misbehaving models. Shah’s other recommendations include building the capacity of regulators to understand and close gaps across the many sectors in which algorithms drive decisions.

Lastly, the public sector has its own part to play in building trust. Shah envisions the emergence of a data commons, in which ultimate ownership of data would return to the public. This would allow for a much-needed rebalancing of power: the public could do away with exclusive contracts and would have the bargaining power to enforce high standards of accountability and transparency from any contractor that wishes to use its data to create predictive models.

Keeping pace with evolving technology is an uphill battle, but it is one we must take on. As Shah eloquently argues in this piece, we need to approach the creation of accountable algorithms by pushing for systems that positively impact society, and do away with our current approach of negative screening and damage mitigation.


Original paper by Hetan Shah: https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2017.0362

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
