Montreal AI Ethics Institute

Research summary: Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society

March 9, 2020

A much-needed paper by Carina Prunkl and Jess Whittlestone sheds light on a polarized research and practice community that would clearly benefit from more collaboration and a greater understanding of each other’s work. The paper proposes a multi-dimensional, spectrum-based approach to delineating near- and long-term AI research along the axes of capabilities, extremity, certainty, and impact. It also asks for more rigour from the community when communicating research agendas and motivations, to allow for greater understanding across this artificial divide. Elucidating differences along these axes and visualizing them not only reveals how misunderstandings arise, but also highlights ignored yet important research areas, including ones the authors themselves focus on.

This paper examines how researchers can communicate clearly about their research agendas, given the ambiguities in the split of the AI ethics community into near- and long-term research. Often a sore and contentious point of discussion, this artificial divide separates two groups that each tend to take a reductionist view of the work being done by the other. A major problem emerging from such a divide is that it hinders researchers from spotting relevant work done by the other community, which in turn undermines effective collaboration. The paper traces the differences primarily to disagreements over timescale, AI capabilities, and deeper normative and empirical questions.

The paper offers a helpful distinction between near- and long-term issues, describing them as follows:

  • Near-term issues are those that are fairly well understood, have concrete examples, and relate to recent progress in the field of machine learning.
  • Long-term issues are those that might arise far into the future due to much more advanced AI systems with broad capabilities; the category also includes long-term impacts on areas such as international security, race relations, and power dynamics.

What they currently see is that: 

  • Issues considered ‘near-term’ tend to be those arising in the present/near future as a result of current/foreseeable AI systems and capabilities, on varying levels of scale/severity, which mostly have immediate consequences for people and society. 
  • Issues considered ‘long-term’ tend to be those arising far into the future as a result of large advances in AI capabilities (with a particular focus on notions of transformative AI or AGI), and those that are likely to pose risks that are severe/large in scale with very long-term consequences.
  • These binary clusters are not a sufficient way to split the field, and failing to examine underlying beliefs leads to unfounded assumptions about each other’s work.
  • In addition, there may be areas between the near and long term that are neglected as a result of this artificial division.

These distinctions can be unpacked along four dimensions: capabilities, extremity, certainty, and impact, definitions for which are provided in the paper. A key contribution, beyond identifying these factors, is the observation that each lies along a spectrum; using them as dimensions defines a possibility space that helps identify where research is currently concentrated and which areas are being neglected. It also helps position the work being done by the authors themselves.

Something we really appreciated about this work is that it gives us concrete language and tools to communicate more effectively about each other’s work. As part of our efforts in building communities that leverage diverse experiences and backgrounds to tackle an inherently complex and multi-dimensional problem, we deeply appreciate how challenging yet rewarding such an effort can be. Some of the most meaningful public consultation work done by MAIEI leveraged our internalized framework in a similar vein, providing value to the process that led to outcomes like the Montreal Declaration for Responsible AI.

