Montreal AI Ethics Institute


Research summary: Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society

March 9, 2020

A much-needed paper by Carina Prunkl and Jess Whittlestone sheds light on a polarized research and practice community that would clearly benefit from more collaboration and a greater understanding of each other's work. The paper proposes a multi-dimensional, spectrum-based approach to delineating near- and long-term AI research along the dimensions of capabilities, extremity, certainty, and impact. It also calls for more rigour from the community when communicating research agendas and motives, to allow for greater understanding across this artificial divide. Elucidating and visualizing differences along these axes reveals how misunderstandings arise, and it also highlights important yet neglected research areas, including those the authors themselves focus on.

This paper examines how researchers can communicate clearly about their research agendas given the ambiguities in the split of the AI ethics community into near- and long-term research. Often a sore and contentious point of discussion, this artificial divide leads each group to take a reductionist view of the work being done by the other. A major problem emerging from such a divide is that relevant work done by the different communities becomes harder to spot, which hinders effective collaboration. The paper traces the differences primarily to timescale, AI capabilities, and deeper normative and empirical disagreements.

The paper provides a helpful distinction between near- and long-term by describing them as follows:

  • Near-term issues are those that are fairly well understood, have concrete examples, and relate to recent progress in the field of machine learning.
  • Long-term issues are those that might arise far into the future due to much more advanced AI systems with broad capabilities; they also include long-term impacts on areas such as international security, race relations, and power dynamics.

What they currently observe is that:

  • Issues considered ā€˜near-term’ tend to be those arising in the present/near future as a result of current/foreseeable AI systems and capabilities, on varying levels of scale/severity, which mostly have immediate consequences for people and society. 
  • Issues considered ā€˜long-term’ tend to be those arising far into the future as a result of large advances in AI capabilities (with a particular focus on notions of transformative AI or AGI), and those that are likely to pose risks that are severe/large in scale with very long-term consequences.
  • The binary clusters are not sufficient as a way to split the field, and ignoring underlying beliefs leads to unfounded assumptions about each other's work.
  • In addition, there might be areas between the near and long term that are neglected as a result of this artificial fracture.

These distinctions can be unpacked along the lines of capabilities, extremity, certainty, and impact, definitions for which are provided in the paper. A key contribution, beyond identifying these factors, is treating each as a spectrum and using them as dimensions to define a possibility space, which helps to identify where research is currently concentrated and which areas are being ignored. It also helps to position the authors' own work within the field.

Something we really appreciated about this work is that it gives us concrete language and tools to communicate more effectively about each other's work. As part of our efforts in building communities that leverage diverse experiences and backgrounds to tackle an inherently complex and multi-dimensional problem, we deeply appreciate how challenging yet rewarding such an effort can be. Some of the most meaningful public consultation work done by MAIEI leveraged our internalized framework in a similar vein to provide value to the process that led to outcomes like the Montreal Declaration for Responsible AI.

