The Case for Anticipating Undesirable Consequences of Computing Innovations Early, Often, and Across Computer Science

January 23, 2024

🔬 Research Summary by Rock Yuren Pang, whose focus is on using HCI methods, crowdsourcing, and large language models to support researchers in anticipating the social impact of their work.

[Original paper by Rock Yuren Pang, Dan Grossman, Tadayoshi Kohno, and Katharina Reinecke]


Overview: Computer science research has led to many breakthrough innovations but has also grappled with unintended negative societal repercussions. Prior work showed that CS researchers recognize the value of thinking preemptively about the perils of their research, yet tend to address them only in hindsight. This paper builds on that work to propose our vision for shifting institutional culture so that CS researchers anticipate the social impact of their research.


Introduction

From smart sensors that infringe on our privacy to neural networks that generate convincing imposter deepfakes, our society increasingly bears the burden of negative, if unintended, consequences of computing innovations. The recent flurry of negative media coverage about the adverse effects of technology is spurring researchers across computer science disciplines to examine the ethical implications of their work more routinely. However, a critical challenge persists: how can we support CS researchers across many subfields in anticipating these social impacts before they harm society? This paper proposes a vision for reshaping academic institutional cultures to navigate and anticipate the social impacts of computing research. The vision has implications for academic and industry researchers alike.

Key Insights

“That’s important, but…”

Our vision builds on prior work that investigated current attitudes, practices, and barriers to considering social impacts in advance. The barriers include the usual (and often legitimate) lack of time, as a move-fast mentality and deadlines take precedence over all else; the deflection of responsibility to others (e.g., the IRB, other academic fields, those commercializing the research ideas, and other team members); the lack of formal processes and guidelines for thinking through potential negative effects; and the difficulty of accessing the diverse perspectives that are crucial to identifying potential impacts. CS researchers usually think about potential undesirable outcomes in hindsight, e.g., after a publication venue requires an ethics statement or after a research innovation raises concerns. By that point, it is often too late to pivot, and sometimes the damage cannot be undone.

Some CS researchers have attempted to alleviate these challenges over the past decades, for example by requiring an ethics statement with paper submissions or an Ethics and Society Review for grant proposals. Such efforts raise awareness of the issue. Still, our prior findings show that these brute-force measures remain insufficient to make meaningful headway on an intractable problem: computing researchers rarely anticipate and address undesirable consequences systematically in advance.

Our Proposals

Support research subfields across CS: Recent ethics efforts in technology development have often focused on artificial intelligence (AI) rather than encouraging all computing researchers to consider potential societal effects. This focus persists even though many CS subcommunities have seen their share of sometimes severe unintended consequences (see more examples in our paper). Pointing our fingers at AI risks leaving researchers in other subfields feeling that ethical considerations are “someone else’s problem,” and it may lead us to overlook opportunities for holistic improvement across the broader CS field. Instead, we believe that all computing researchers, regardless of subdiscipline, should be supported in learning what ethics in computing means and how to proactively consider unintended consequences in their work.

Encourage early consideration: Considering undesirable consequences should start when formulating the research problem, while it is still possible to pivot. Waiting until the submission or publication stage to reflect on potential social impacts leaves too little time to substantially address problems or change the direction of a research project. This is both because people become invested in an idea once they have put work into it and because modifying an existing innovation is considerably more time-consuming than adjusting it early on.

Encourage regular consideration: Just as research is a constantly evolving endeavor, considering undesirable consequences can surface unforeseen challenges that require ongoing feedback and re-evaluation. Rather than making it a one-time exercise, researchers should routinely think about the potential societal implications of their work. Achieving this will require changing institutional culture and support systems so that researchers are incentivized to think about societal implications regularly and learn how to do so efficiently.

Support CS researchers at all levels: Many prior approaches place most of the responsibility on a single person, namely the person submitting a paper or the PI submitting a proposal. This can lead other research team members, such as undergraduates, graduate students, postdoctoral researchers, or other collaborators, to rely excessively on that one person. In fact, our prior work suggests that some faculty rely on the experience of their “more ethics-educated” students, while students in turn defer to the experience of senior researchers and faculty. To break this cycle of deferred responsibility, everyone on a research team should play a role and be supported in addressing this issue.

Between the lines

While these action items may appear ambitious, we are actively putting these ideas into practice as part of the ethics effort at the Allen School of Computer Science & Engineering. Over the coming years, we aim to design, refine, and assess our strategies, iteratively sharing our learnings with the wider CS community. We warmly invite ideas, feedback, and collaboration. The insights from our endeavors will help advance institutional practices across many academic environments and within industry at large.
