
Bridging the Gap: The Case For an ‘Incompletely Theorized Agreement’ on AI Policy (Research Summary)

February 2, 2021


🔬 Research summary contributed by the original authors of the paper: Charlotte Stix (@charlotte_stix) and Matthijs Maas (@matthijsMmaas).

Charlotte is a PhD researcher at the Eindhoven University of Technology, and Matthijs is a research associate at the University of Cambridge’s Centre for the Study of Existential Risk.

[Link to original paper at the bottom]


Overview: In this paper, Charlotte Stix and Matthijs Maas argue for more collaboration between those focused on ‘near-term’ and ‘long-term’ problems in AI ethics and policy. They argue that such collaboration was key to the policy success of past epistemic communities. They suggest that researchers in these two communities disagree on fewer overarching points than they think; and where they do disagree, they can and should bridge their underlying theoretical differences. To do so, the authors propose to draw on the principle of an ‘incompletely theorized agreement’, which can support urgently needed cooperation on projects or areas of mutual interest and advance responsible and beneficial AI for both the near and long term.


How can researchers concerned about AI’s societal impact step beyond background disagreements in the field, in order to facilitate greater cooperation by the research community in responding to AI’s urgent policy challenges?

Ongoing progress in AI has raised a diverse array of ethical and societal concerns that are in need of urgent policy action. While there has been a wave of scholarship and advocacy in the field, the research community has at times appeared divided between those who emphasize ‘near-term’ concerns (such as facial recognition and algorithmic bias) and those who focus on ‘long-term’ concerns (the potentially ‘transformative’ implications of future, more capable AI systems). In recent years, there have been increasing calls for greater reconciliation, cooperation, and clarifying dialogue between these sub-communities in the ‘AI ethics and society’ research space.

In their paper, Stix and Maas seek to understand the sources and consequences of this ‘gap’ between the two communities, in order to chart the practical space and underpinnings for greater inter-community collaboration on AI policy.

Why does this matter? How the responsible AI policy community is organized, and how it interacts internally and with external stakeholders, should matter greatly to all its members. Diverse historical cases of conflict or collaboration in scientific communities, such as nanotechnology, biotechnology, and ballistic missile defense arms control, illustrate how coordinated ‘epistemic community’ action can achieve remarkable policy goals, while sustained fragmentation can severely undercut researchers’ ability to advocate for and secure progress on key policies.

Moreover, it appears particularly urgent to address or bypass unnecessary fragmentation in the AI policy community sooner rather than later. The field of AI policy may currently be in a window of opportunity and flexibility, in terms of problem framings, public attention, and policy instrument choice and design, which may steadily close in coming years as perceptions, framings, and policy agendas become locked in. A divided community that treats policymaker or public attention as a zero-sum good for competing policy projects may inadvertently ‘poison the well’ for later efforts if it comes to be perceived as a series of interest groups rather than an epistemic community with a multi-faceted but coherent agenda for the beneficial societal impact of AI.

Furthermore, while there are certainly real and important areas of disagreement between the communities, these do not in fact fall neatly into clear ‘near-term’ and ‘long-term’ camps. Instead, it is possible, and not uncommon, to hold overlapping and more nuanced positions across a range of questions and debates. These include epistemic positions on how to engage with future uncertainty around AI and with different types of evidence and argument, as well as more pragmatic differences of opinion over the (in)tractability of formulating meaningful policies today that will be or remain relevant into the future. On critical reflection, however, many of these perceived disagreements are not all that strong, and need not pose a barrier to inter-community cooperation on AI policy.

But are there in fact positive, mutually productive opportunities for both communities to work on? And what would such cooperation look like? The authors propose to adapt the constitutional law principle of an ‘incompletely theorized agreement’ to ground practical policy action between these communities, even in the face of underlying disagreement. The key value of incompletely theorized agreements is that they allow a community to bypass or suspend theoretical disagreement on topics where (1) that disagreement appears relatively intractable given the available information, and (2) there is an urgent need to address certain shared practical issues in the meantime. Incompletely theorized agreements have been a core component of well-functioning legal systems, societies, and communities, because they allow for both stability and the flexibility to move forward on urgent issues. Indeed, it has been argued that this principle has underpinned landmark achievements in global governance, such as the Universal Declaration of Human Rights.

There is a range of issue areas where both ‘near-term’ and ‘long-term’ AI ethics scholars could draw on this principle to converge on questions both want addressed, or on shared policy goals which they value. Without aiming to be comprehensive, potential sites for productive, shared collaboration include: (1) research to gain insight into, and leverage over, the general levers of (national or international) policy formation on AI; (2) investigation into the relative efficacy of various policy levers for AI governance (e.g. codes of ethics, publication norms, auditing systems, publicly naming problematic performance); (3) establishing an appropriate scientific culture for considering the impact and dissemination of AI research; (4) policy interventions aimed at preserving the integrity of public discourse and informed decision-making in the face of AI systems; and (5) exploring the question of ‘social value alignment’, that is, how to align AI systems with the plurality of values endorsed by groups of people. Although the underlying reasons might differ between the communities, both would gain from progress on each of these projects.

That is not to suggest that incompletely theorized agreements are an unambiguously valuable solution across all AI policy contexts. Such agreements are by their nature imperfect, and can be ‘brittle’ to changing conditions. Nonetheless, while limitations such as these should be considered in greater detail, they do not erode the case for implementing, or at least further exploring the promise of, well-tailored incompletely theorized agreements to advance policies that support responsible and beneficial AI for both the near and long term.


Original paper by Charlotte Stix and Matthijs Maas: https://link.springer.com/article/10.1007/s43681-020-00037-w

