Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: Roles for Computing in Social Change

August 3, 2020

Summary contributed by Abhishek Gupta, founder of the Montreal AI Ethics Institute.

Link to full paper + authors listed at the bottom.


Mini-summary: This paper highlights the growing dissonance between computational and policy approaches to social change. Computational approaches are often viewed as exacerbating society's ills, but the authors point out how computing can focus and direct policymaking to better address social challenges. In particular, they point to technology as a medium for formalizing social problems: making a system's inputs, outputs, and rules explicit creates opportunities for intervention, and translates high-level advocacy work into concrete, on-the-ground action.

Computational approaches can also serve as a method of rebuttal, empowering stakeholders to question and contest design and development choices. They can shed new light on existing social issues, attracting more resources to redressal mechanisms. From a practitioner's standpoint, computational approaches provide diagnostics: metrics and outputs that show the extent of social problems. Such methods don't absolve practitioners of their responsibilities, but they do give practitioners and other stakeholders the information needed to act on the levers most likely to bring about change.


Full summary:

Because technology alters societal dynamics, questions have arisen about what role computing should play in social change: in particular, how much attention we should pay to technology as a lever compared to addressing the underlying issues at a social level.

Much technical work on formalizing fairness, bias, discrimination, privacy, and related concepts has been met with concerns about whether it is the best use of our efforts in addressing these challenges. The authors argue that while technology isn't a silver bullet for social injustice, it can still serve as a useful lever: technical methods can surface evidence and act as diagnostic tools to better address social issues.

For example, such methods have shown the pervasiveness, depth, and scope of social problems like bias against minorities. Quantifying the extent of a problem yields a prioritized roadmap, letting us address the issues with the most significant impact first. The authors' framing, that these methods don't absolve practitioners of their responsibilities but instead offer the diagnostic abilities needed to begin addressing the challenges, is one that deserves industry-wide adoption.
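To make the diagnostic role concrete, here is a minimal sketch of one common disparity measure, the demographic parity difference (the gap in positive-outcome rates between two groups). The function names and the decision data below are invented for illustration; such a number is a diagnostic that flags a gap worth investigating, not a verdict on its cause.

```python
def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups.
    A value near 0 suggests similar outcome rates; a large gap
    flags a disparity worth investigating further."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Illustrative, fabricated decisions (1 = approved, 0 = denied)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 2/8 approved
print(demographic_parity_difference(group_a, group_b))  # 0.5
```

In practice such metrics are one input among many; the paper's point is precisely that the number is a starting point for intervention, not an endpoint.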

Science and technology studies (STS) already has tools for interrogating sociotechnical phenomena, and the authors advocate blending computational and STS methods to arrive at holistic solutions. A common problem with diagnostic exercises is that their results become targets in themselves. The authors caution practitioners to craft narratives around those results that drive action, rather than focusing solely on the numbers and falling prey to Goodhart's Law (when a measure becomes a target, it ceases to be a good measure).

Computation is a formalization mechanism: it can state clearly what is expected of actors, moving away from abstract, vague delegations under which actors might apply disparate standards to social challenges within their systems. Because formalization requires an explicit statement of a system's inputs, outputs, and rules, it lays bare the stakes of that system from a social perspective. More importantly, it gives stakeholders the opportunity to scrutinize the system's design and contest it if it is unjust.
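A minimal sketch of what this kind of formalization looks like in practice. The eligibility rule, names, and thresholds below are entirely hypothetical; the point is that once the rule is written down explicitly, its assumptions become visible and therefore contestable.

```python
# A toy benefits-eligibility rule with explicit inputs, parameters,
# and output. Every number here is a hypothetical illustration.
INCOME_THRESHOLD = 30_000      # base threshold, inspectable
PER_PERSON_ALLOWANCE = 5_000   # added per extra household member

def eligible(income: int, household_size: int) -> bool:
    """Inputs: income and household size. Output: eligibility.
    Because the rule is stated explicitly, stakeholders can see
    exactly what is assumed and contest it, e.g. whether the
    threshold should scale with household size at all, or by more."""
    return income < INCOME_THRESHOLD + PER_PERSON_ALLOWANCE * (household_size - 1)

print(eligible(32_000, 2))  # True: threshold for a 2-person household is 35,000
print(eligible(32_000, 1))  # False: threshold for a 1-person household is 30,000
```

The contrast with an informal delegation ("assess need reasonably") is the paper's point: the formalized version exposes its inputs, outputs, and rules to scrutiny and creates a concrete place to intervene.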

Advocacy work that invites participation from those affected usually focuses on how rules are made, not on what the rules actually are. The formalized, computational approach offers an intermediate step between high-level calls for transparency and concrete action in the design and development of systems. This isn't without challenges: where data is limited, practitioners are constrained to what is at hand, which can make problems harder to solve. Yet this can also be an opportunity to call for the collection of alternative or more representative data, making the problem more tractable.

This discussion can also highlight the limitations of technical methods, and hence of policies premised on the outputs of these systems. In particular, critical examination can push policymakers to reflect on their decisions so that they don't exacerbate social issues. While computational methods can show how to make better use of limited resources, they can also point to non-computational interventions: repurposing existing resources, or advocating for higher resource allocation in the first place rather than merely optimizing what is currently available.

A risk the authors identify is that this approach can shift the discussion away from policy fundamentals toward purely technological ones: how to improve the technology rather than how to change the societal dynamics at the root of the problem. The alternative to an ill-conceived algorithm might not be a better algorithm, but no algorithm at all. Computational approaches can also act as a synecdoche, bringing social issues forward in a new light; the authors point to how long-standing inequities have gained attention from being framed in technological terms.

Given short attention spans and the inherently multivalent nature of large societal problems, traditional policymaking chips away gradually from different angles. A technological spotlight brings in more actors who attack the problem from several dimensions, leading to greater resource allocation and potentially quicker mitigation. The synecdochal approach treads a fine line between overemphasis and shining a new light: society's current obsession with technology can be harnessed to drive concrete action on the fundamental challenges we face in creating a more just society for all.


Original paper by Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson: https://arxiv.org/abs/1912.04883

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.