Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: Roles for Computing in Social Change

August 3, 2020

Summary contributed by Abhishek Gupta, founder of the Montreal AI Ethics Institute.

Link to full paper + authors listed at the bottom.


Mini-summary: This paper highlights the growing dissonance between computational and policy approaches to social change. Computational approaches are often viewed as exacerbating society's social ills, but the authors point out how computing might be used to focus and direct policymaking to better address social challenges. In particular, they point to technology as a medium for formalizing social problems. Formalization makes the inputs, outputs, and rules of a system explicit, which creates opportunities for intervention, and it helps translate high-level advocacy work into more concrete, on-the-ground action.

Computational approaches can also serve as a method of rebuttal, empowering stakeholders to question and contest design and development choices. They present an opportunity to shed new light on existing social issues, thus attracting more resources to redress mechanisms. From a practitioner’s standpoint, computational approaches provide diagnostic abilities, producing metrics and outputs that show the extent of social problems. While such methods don’t absolve practitioners of their responsibilities, they give practitioners and other stakeholders the information needed to act on the levers that bring about change most effectively.


Full summary:

Technology’s impact on societal dynamics has raised questions about what role computing should play in social change, and in particular how much attention we should pay to technology as a lever compared with addressing the underlying issues at a social level.

Much technical work on incorporating definitions of fairness, bias, discrimination, privacy, and related notions has been met with concerns about whether it is the best use of our efforts in addressing these challenges. The authors argue that while technology isn’t a silver bullet for problems of social injustice, it can still serve as a useful lever. Technical methods can surface evidence and act as a diagnostic tool to better address social issues.

As an example, such methods have shown the pervasiveness, depth, and scope of social problems like bias against minorities. Quantifying the extent of a problem provides a prioritized roadmap, so that the issues with the most significant impact can be addressed first. The authors’ framing, that these methods don’t absolve practitioners of their responsibilities but instead offer them the diagnostic abilities needed to begin addressing the challenges, is one that should be adopted industry-wide.
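To make the diagnostic idea concrete, here is a minimal sketch of the kind of metric such methods produce. The data, group names, and the choice of a selection-rate ratio are all invented for illustration; the paper itself does not prescribe any particular measure.

```python
# Hypothetical illustration: quantifying group disparity in binary decisions.
# All data and names below are invented for the example.

def selection_rates(decisions):
    """Per-group fraction of positive decisions (1 = selected)."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def disparity_ratio(rates):
    """Lowest selection rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected
}
rates = selection_rates(decisions)
print(disparity_ratio(rates))  # 0.4 — a concrete number to prioritize against
```

A single number like this doesn’t fix anything on its own, but it turns a vague concern into evidence that can be tracked, compared, and acted on, which is the diagnostic role the authors describe.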

Science and technology studies (STS) already has tools to interrogate sociotechnical phenomena, and the authors advocate blending computational and STS methods to arrive at holistic solutions. A common problem with diagnostic exercises is that their results become targets themselves. The authors caution practitioners to craft narratives around those results that drive action, rather than focusing solely on the numbers and falling prey to Goodhart’s Law.

Computation is a formalization mechanism: it can state clearly what is expected of actors, moving away from abstract and vague delegations under which actors might apply disparate standards to social challenges within their systems. Because formalization requires an explicit statement of a system’s inputs, outputs, and rules, it lays bare the stakes of that system from a social perspective. More importantly, it gives stakeholders the opportunity to scrutinize and contest the design of the system if it is unjust.
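The formalization point can be sketched with a toy example. Everything here, the eligibility rule, the variable names, and the cutoffs, is hypothetical; the value of writing it down is that each choice becomes visible and therefore contestable, which is the opportunity the authors describe.

```python
# Hypothetical sketch of "computation as formalization": a benefits-eligibility
# rule whose inputs, thresholds, and logic are stated explicitly.
# All names and cutoffs are invented for illustration.

INCOME_CUTOFF = 30_000        # an explicit, contestable policy choice
MIN_RESIDENCY_MONTHS = 6      # likewise

def eligible(income, residency_months):
    """Inputs, outputs, and rules are laid bare rather than left vague."""
    return income <= INCOME_CUTOFF and residency_months >= MIN_RESIDENCY_MONTHS

print(eligible(25_000, 12))  # True
print(eligible(25_000, 3))   # False — the residency rule is now visible to challenge
```

An informal policy applied by many actors can hide disparate standards; once encoded, the residency requirement and income cutoff are no longer implicit, and stakeholders can point at a specific line and argue it is unjust.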

Advocacy work that calls for participation from those affected usually focuses on how rules are made, but not on what the rules actually are. The formalized, computational approach offers an intermediate step between high-level calls for transparency and concrete action in the design and development of systems. This isn’t without challenges: where limited data is available, practitioners are constrained to what they have at hand, which can make problems harder to solve. Yet this can also be an opportunity to call for the collection of alternate or more representative data, making the problem more tractable.

This discussion can also highlight the limitations of technical methods, and hence the limitations of policies premised on the outputs of these systems. Specifically, critical examination can drive policymakers to reflect on their decisions so that they don’t exacerbate social issues. While computational methods can show how to make better use of limited resources, they can also point to non-computational interventions that repurpose existing resources, or support advocating for greater resource allocation in the first place rather than mere optimization of current, limited resources.

A risk the authors identify is that this approach can shift the discussion away from policy fundamentals toward purely technology-focused debates about improving the technology, rather than changing the fundamental societal dynamics that are the root cause of the problem. The alternative to an ill-conceived algorithm might not be a better algorithm, but no algorithm at all. Computational approaches can also act as a synecdoche, bringing social issues forward in a new light: the authors point to how long-standing inequities have gained attention from being framed in technological terms.

Given short attention spans and the inherently multivalent nature of large societal problems, traditional policymaking chips away gradually from different angles. A technological spotlight brings in more actors who attack the problem from several dimensions, leading to greater resource allocation and potentially quicker mitigation. The synecdochal approach treads a fine line between overemphasis and shining a new light. Society’s current obsession with technology can be harnessed to drive concrete action on the fundamental challenges we face in creating a more just society for all.


Original paper by Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson: https://arxiv.org/abs/1912.04883

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

