Montreal AI Ethics Institute



Bridging the Gap: The Case For an ‘Incompletely Theorized Agreement’ on AI Policy (Research Summary)

February 2, 2021


🔬 Research summary contributed by the original authors of the paper: Charlotte Stix (@charlotte_stix) and Matthijs Maas (@matthijsMmaas).

Charlotte is a PhD researcher at the Eindhoven University of Technology, and Matthijs is a research associate at the University of Cambridge’s Centre for the Study of Existential Risk.

[Link to original paper at the bottom]


Overview: In this paper, Charlotte Stix and Matthijs Maas argue for greater collaboration between those focused on ‘near-term’ and ‘long-term’ problems in AI ethics and policy. They argue that such collaboration was key to the policy successes of past epistemic communities. They suggest that researchers in these two communities disagree on fewer overarching points than they think; and where they do disagree, they can and should bridge the underlying theoretical divides. To that end, the authors propose drawing on the principle of an ‘incompletely theorized agreement’, which can support urgently needed cooperation on projects or areas of mutual interest, in pursuit of responsible and beneficial AI for both the near and long term.


How can researchers concerned about AI’s societal impact move beyond background disagreements in the field, in order to facilitate greater cooperation across the research community in responding to AI’s urgent policy challenges?

Ongoing progress in AI has raised a diverse array of ethical and societal concerns that are in need of urgent policy action. While there has been a wave of scholarship and advocacy in the field, the research community has at times appeared somewhat divided between those who emphasize ‘near-term’ concerns (such as facial recognition and algorithmic bias) and those who focus on ‘long-term’ concerns (the potentially ‘transformative’ implications of future, more capable AI systems). In recent years, there have been increasing calls for greater reconciliation, cooperation, and clarifying dialogue between these sub-communities in the ‘AI ethics and society’ research space.

In their paper, Stix and Maas seek to understand the sources and consequences of this ‘gap’ amongst these communities, in order to chart the practical space and underpinnings for greater inter-community collaboration on AI policy. 

Why does this matter? Critically, how the responsible AI policy community is organized, and how it interacts internally and with external stakeholders, should matter greatly to all its members. Diverse historical cases of conflict or collaboration in scientific communities (such as nanotech, biotech, and ballistic missile defense arms control) illustrate how coordinated ‘epistemic community’ action can achieve remarkable policy goals, while sustained fragmentation can severely undercut researchers’ ability to advocate for and secure progress on key policies.

Moreover, it appears particularly urgent to address or bypass unnecessary fragmentation in the AI policy community sooner rather than later. The field of AI policy may currently be in a window of opportunity and flexibility (in terms of problem framings, public attention, and policy instrument choice and design) which may steadily close in coming years, as perceptions, framings, and policy agendas become locked in. A divided community that treats policymaker or public attention as a zero-sum good for competing policy projects may inadvertently ‘poison the well’ for later efforts, if it becomes perceived as a series of interest groups rather than an ‘epistemic community’ with a multi-faceted but coherent agenda for the beneficial societal impact of AI.

Furthermore, while there are certainly real and important areas of disagreement between the communities, these do not in fact fall neatly into a clear ‘near-term’ camp and a ‘long-term’ camp. Instead, it is possible, and not uncommon, to hold overlapping and more nuanced positions across a range of questions and debates. These include epistemic differences over how to engage with uncertainty about AI’s future and over which types of evidence and argument to rely on, as well as more pragmatic differences of opinion over the (in)tractability of formulating meaningful policies today that will be or remain relevant into the future. On critical reflection, however, many of these perceived disagreements are not all that strong, and need not pose a barrier to inter-community cooperation on AI policy.

But are there in fact positive, mutually productive opportunities for both communities to work on? And what would such an agreement look like? The authors propose adapting the constitutional law principle of an ‘incompletely theorized agreement’ to ground practical policy action across these communities, even in the face of underlying disagreement. The key value of incompletely theorized agreements is that they allow a community to bypass or suspend theoretical disagreement on topics where (1) that disagreement appears relatively intractable given the available information, and (2) there is an urgent need to address certain shared practical issues in the meantime. Incompletely theorized agreements have been a core component of well-functioning legal systems, societies, and communities, because they allow for both stability and the flexibility to move forward on urgent issues. Indeed, it has been argued that this principle underpinned landmark achievements in global governance, such as the Universal Declaration of Human Rights.

There is a range of issue areas where both ‘near-term’ and ‘long-term’ AI ethics scholars could draw on this principle to converge on questions both want addressed, or on shared policy goals that both value. Without aiming to be comprehensive, potential sites for productive and shared collaboration include: (1) research to gain insight into, and leverage over, the general levers of (national or international) policy formation on AI; (2) investigation into the relative efficacy of various policy levers for AI governance (e.g. codes of ethics, publication norms, auditing systems, publicly naming problematic performance); (3) establishing an appropriate scientific culture for considering the impact and dissemination of AI research; (4) policy interventions aimed at preserving the integrity of public discourse and informed decision-making in the face of AI systems; and (5) exploring the question of ‘social value alignment’: how to align AI systems with the plurality of values endorsed by groups of people. For each of these projects, although the underlying reasons might be distinct, both communities would gain from these policies.

That is not to suggest that incompletely theorized agreements are an unambiguously valuable solution across all AI policy contexts. Such agreements are by their nature imperfect, and can be ‘brittle’ to changing conditions. Nonetheless, while limitations such as these should be considered in greater detail, they do not erode the case for implementing well-tailored incompletely theorized agreements, or at least further exploring their promise, as a way of advancing policies that support responsible and beneficial AI for both the near and long term.


Original paper by Charlotte Stix and Matthijs Maas: https://link.springer.com/article/10.1007/s43681-020-00037-w

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
