Research summary: Roles for Computing in Social Change

August 3, 2020

Summary contributed by Abhishek Gupta, founder of the Montreal AI Ethics Institute.

Link to full paper + authors listed at the bottom.


Mini-summary: This paper highlights the increasing dissonance between computational and policy approaches to addressing social change. Specifically, it notes how computational approaches are often viewed as exacerbating society's social ills. The authors point out, however, how computing might be used to focus and direct policymaking to better address social challenges, in particular by using technology as a medium for formalizing those challenges. Formalization makes the inputs, outputs, and rules of a system explicit, which creates opportunities for intervention. It also has the benefit of translating high-level advocacy work into more concrete, on-the-ground action.

Computational approaches can also serve as a method of rebuttal, empowering stakeholders to question and contest design and development choices. They present an opportunity to shed new light on existing social issues, thus attracting more resources to redress mechanisms. From a practitioner's standpoint, computational approaches provide diagnostic abilities that are useful for producing metrics and outputs that show the extent of social problems. While such methods don't absolve practitioners of their responsibilities, they give practitioners and other stakeholders the information needed to act on the levers that bring about change most effectively.


Full summary:

Technology's impact on societal dynamics has raised questions about what role computing should play in social change, and in particular about how much attention we should pay to using technology as a lever compared to addressing the underlying issues at a social level.

Much of the technical work on formalizing fairness, bias, discrimination, privacy, and related concepts has met with concerns about whether it is the best use of our efforts in addressing these challenges. The authors argue that while technology isn't a silver bullet for solving problems of social injustice, it can still serve as a useful lever: technical methods can surface evidence and act as a diagnostic tool to better address social issues.

For example, such methods have demonstrated the pervasiveness, depth, and scope of social problems like bias against minorities. Quantifying the extent of a problem provides a prioritized roadmap, allowing the issues with the most significant impact to be addressed first. The authors' framing, that these methods don't absolve practitioners of their responsibilities but instead offer the diagnostic abilities needed to begin addressing the challenges, is one that should be adopted industry-wide.

Science and technology studies (STS) already has tools to interrogate sociotechnical phenomena, and the authors advocate blending computational and STS methods to arrive at holistic solutions. A common problem with diagnostic exercises is that their results become targets themselves. The authors caution practitioners to craft narratives around those results that drive action, rather than focusing solely on the metrics and falling prey to Goodhart's Law (when a measure becomes a target, it ceases to be a good measure).

Computation is a formalization mechanism with the potential to state clearly what is expected of actors, moving away from abstract and vague delegations under which actors might apply disparate standards to social challenges within their systems. Because formalization requires an explicit statement of a system's inputs, outputs, and rules, it presents an opportunity to lay bare the stakes of that system from a social perspective. More importantly, it offers stakeholders the opportunity to scrutinize and contest the design of the system if it is unjust.

Advocacy work that invites participation from those who are affected usually focuses on how rules are made, not on what the rules actually are. A formalized, computational approach offers an intermediate step between high-level calls for transparency and concrete action in the design and development of systems. This isn't without challenges: where limited data is available, practitioners are constrained to what they have at hand, which can make problems harder to solve. Yet this can also be an opportunity to call for the collection of alternative or more representative data, making the problem more tractable.

This discussion can also highlight the limitations of technical methods and, by extension, the limitations of policies premised on the outputs of these systems. Critical examination can prompt policymakers to reflect on their decisions so that they don't exacerbate social issues. While computational methods can show how to make better use of limited resources, they can also shed light on non-computational interventions that repurpose existing resources, or support advocacy for greater resource allocation in the first place rather than mere optimization of current, limited resources.

A risk the authors identify is that this approach can shift the discussion away from policy fundamentals toward purely technology-focused debates about how to improve the technology, rather than changing the fundamental dynamics in society that are the root cause of the problem. For example, the alternative to an ill-conceived algorithm might not be a better algorithm, but no algorithm at all. Computational approaches can also act as a synecdoche, bringing social issues forward in a new light; the authors point to how attention to societal inequities has been boosted by framing some of those challenges in technological terms.

Given short attention spans and the inherently multivalent nature of large societal problems, traditional policymaking chips away at them gradually from different angles. A technological spotlight brings in more actors who attack the problem along several dimensions, leading to greater resource allocation and potentially quicker mitigation. The synecdochal approach treads a fine line between overemphasis and shining a new light. Society's current fascination with technology can be harnessed positively to drive concrete action toward addressing the fundamental challenges we face in creating a more just society for all.


Original paper by Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson: https://arxiv.org/abs/1912.04883

