
Bridging the Gap: The Case For an ‘Incompletely Theorized Agreement’ on AI Policy (Research Summary)

February 2, 2021


🔬 Research summary contributed by the original authors of the paper: Charlotte Stix (@charlotte_stix) and Matthijs Maas (@matthijsMmaas).

Charlotte is a PhD researcher at the Eindhoven University of Technology, and Matthijs is a research associate at the University of Cambridge’s Centre for the Study of Existential Risk.

[Link to original paper at the bottom]


Overview: In this paper, Charlotte Stix and Matthijs Maas argue for more collaboration between those focused on ‘near-term’ and ‘long-term’ problems in AI ethics and policy. They argue that such collaboration was key to the policy successes of past epistemic communities. They suggest that researchers in these two communities disagree on fewer overarching points than they think; and where they do disagree, they can and should bridge the underlying theoretical differences. To do so, the authors propose drawing on the principle of an ‘incompletely theorized agreement’, which can support urgently needed cooperation on projects or areas of mutual interest, and thereby the pursuit of responsible and beneficial AI for both the near and long term.


How can researchers concerned about AI’s societal impact step beyond background disagreements in the field to facilitate greater cooperation across the research community in responding to AI’s urgent policy challenges?

Ongoing progress in AI has raised a diverse array of ethical and societal concerns that are in need of urgent policy action. While there has been a wave of scholarship and advocacy in the field, the research community has at times appeared somewhat divided between those who emphasize ‘near-term’ concerns (such as facial recognition and algorithmic bias) and those who focus on ‘long-term’ concerns (the potentially ‘transformative’ implications of future, more capable AI systems). In recent years, there have been increasing calls for greater reconciliation, cooperation, and clarifying dialogue between these sub-communities in the ‘AI ethics and society’ research space.

In their paper, Stix and Maas seek to understand the sources and consequences of this ‘gap’ between the communities, in order to chart the practical space and underpinnings for greater inter-community collaboration on AI policy.

Why does this matter? Critically, how the responsible AI policy community is organized, and how it interacts internally and with external stakeholders, should matter greatly to all its members. Diverse historical cases of conflict or collaboration in scientific communities, such as in nanotech, biotech, and ballistic missile defense arms control, illustrate how coordinated ‘epistemic community’ action can achieve remarkable policy goals, while sustained fragmentation can severely undercut researchers’ ability to advocate for and secure progress on key policies.

Moreover, it appears particularly urgent to address or bypass unnecessary fragmentation in the AI policy community sooner rather than later. The field of AI policy may currently be in a window of opportunity and flexibility, in terms of problem framings, public attention, and policy instrument choice and design, which may steadily close in the coming years as perceptions, framings, and policy agendas become locked in. A divided community that treats policymaker or public attention as a zero-sum good for competing policy projects may inadvertently ‘poison the well’ for later efforts if it comes to be perceived as a series of interest groups rather than as an ‘epistemic community’ with a multi-faceted but coherent agenda for the beneficial societal impact of AI.

Furthermore, while there are certainly real and important areas of disagreement between the communities, these do not in fact fall neatly into clear ‘near-term’ and ‘long-term’ camps. Instead, it is possible, and not uncommon, to hold overlapping and more nuanced positions across a range of questions and debates. These include epistemic positions on how to engage with future uncertainty around AI and on which types of evidence and argument to rely on, as well as more pragmatic differences of opinion over the (in)tractability of formulating meaningful policies today that will be or remain relevant into the future. On critical reflection, however, many of these perceived disagreements are not all that strong, and need not pose a barrier to inter-community cooperation on AI policy.

But are there in fact positive, mutually productive opportunities for both communities to work on? What would such an agreement look like? The authors propose to adapt the constitutional law principle of an ‘incompletely theorized agreement’ to ground practical policy action across these communities, even in the face of underlying disagreement. The key value of incompletely theorized agreements is that they allow a community to bypass or suspend theoretical disagreement on topics where (1) that disagreement appears relatively intractable given the available information, and (2) there is an urgent need to address certain shared practical issues in the meantime. Incompletely theorized agreements have been a core component of well-functioning legal systems, societies, and communities, because they allow for both stability and the flexibility to move forward on urgent issues. Indeed, it has been argued that this principle underpinned landmark achievements in global governance, such as the Universal Declaration of Human Rights.

There are a range of issue areas where both ‘near-term’ and ‘long-term’ AI ethics scholars could draw on this principle to converge on questions both want addressed, or on shared policy goals that both value. Without aiming to be comprehensive, potential sites for productive and shared collaboration include:

(1) research to gain insight into, and leverage on, general levers of (national or international) policy formation on AI;

(2) investigation into the relative efficacy of various policy levers for AI governance (e.g. codes of ethics, publication norms, auditing systems, publicly naming problematic performance);

(3) establishing an appropriate scientific culture for considering the impact and dissemination of AI research;

(4) policy interventions aimed at preserving the integrity of public discourse and informed decision-making in the face of AI systems;

(5) exploring the question of ‘social value alignment’, that is, how to align AI systems with the plurality of values endorsed by groups of people.

For each of these projects, both communities would gain from the resulting policies, even if their underlying reasons for pursuing them differ.

That is not to suggest that incompletely theorized agreements are an unambiguously valuable solution across all AI policy contexts. Such agreements are by their nature imperfect, and can be ‘brittle’ to changing conditions. Nonetheless, while limitations such as these should be considered in greater detail, they do not erode the case for implementing, or at least further exploring, well-tailored incompletely theorized agreements as a way to advance policies that support responsible and beneficial AI for both the near and long term.


Original paper by Charlotte Stix and Matthijs Maas: https://link.springer.com/article/10.1007/s43681-020-00037-w
