Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI Ethics and Ordoliberalism 2.0: Towards A ‘Digital Bill of Rights’

February 14, 2024

🔬 Research Summary by Manuel Wörsdörfer, an Assistant Professor of Management and Computing Ethics at the Maine Business School and School of Computing and Information Science at the University of Maine.

[Original paper by Manuel Wörsdörfer]


Overview: The article argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct and thus lacks adequate governance mechanisms. To address these issues, the paper identifies several ordoliberal-inspired AI ethics principles that could serve as the foundation for a digital bill of rights. It also shows how those principles could be implemented with the help of an ordoliberal regulatory framework and competition policy.


Introduction

Dozens of AI ethics initiatives and governance documents have emerged over the past few years, starting with the U.S. National Science and Technology Council’s ‘Preparing for the Future of AI’ and the E.U. Digital Charter in 2016. The latest examples include the E.U.’s proposed AI Act, the Biden-Harris Administration’s ‘Blueprint for an AI Bill of Rights,’ and the White House’s ‘Ensuring Safe, Secure, and Trustworthy AI Principles.’ 

AI ethics initiatives play an essential role in motivating morally acceptable professional behavior and in prescribing the fundamental duties and responsibilities of computer engineers; they can thereby help bring about fairer, safer, and more trustworthy AI applications. Yet they also come with various shortcomings. One of the main concerns is that the proposed guiding principles are often too abstract, vague, flexible, or confusing and lack proper implementation guidance. Consequently, there is often a gap between theory and practice, resulting in a lack of practical operationalization by the AI industry.

Critics also point out the potential trade-off between ethical principles and corporate interests and the possible use of those initiatives for ethics-washing or window-dressing purposes. Furthermore, most AI ethics guidelines are soft-law documents that lack adequate governance mechanisms and do not have the force of binding law, further exacerbating white- or green-washing concerns. Lastly, there is the possibility of regulatory or policy arbitrage, so-called jurisdiction or ‘ethics shopping’: relocating to countries with laxer standards and fewer constraints, e.g., offshoring AI development to jurisdictions with less stringent requirements for AI systems.

Key Insights 

Ordoliberal AI Ethics Principles

To address the above issues, the paper identifies nine ordoliberal-inspired AI ethics principles that could serve as the foundation for a digital bill of rights. It also shows how those principles could be implemented with the help of an ordoliberal regulatory framework and competition policy.

Respect for Human Rights

The ordoliberal (Kantian) program of liberty and human rights requires a human-centered approach to AI and preserving human agency, control, oversight, and responsibility in the digital economy. Human control of AI technologies requires, among others, a review of automated decisions and the ability to opt out of computerized decisions. It also implies evaluating the societal impacts of AI systems and their effects on human agency and the promotion of human values, including well-being and flourishing, access to technology, and leveraging technology for the benefit of society.

Data Protection and Right to Privacy

Privacy as a human right implies a significant limitation of arbitrary mass surveillance and spying; informational self-determination and sovereignty; control over data use and the ability to restrict the processing of data; the rights to rectification, correction, and erasure; privacy by design and by default; data security; and effective data protection laws.

Harm Prevention and Beneficence

Harm prevention relates to safety and security. Key criteria in this regard are the technological robustness of AI systems, the prevention of the malicious use of AI technologies, the reliability and reproducibility of AI research methods and applications, the availability of fallback plans and safe exits, and the consideration of unknown risks.

Non-Discrimination and Freedom from Privileges

The ordoliberal principles of non-discrimination and freedom from privileges relate to avoiding discrimination, manipulation, and negative profiling and to preventing or minimizing algorithmic biases. This requires representative and high-quality data as well as fairness, equality, and inclusiveness in both impact and design. Special attention must be paid to vulnerable and marginalized groups, e.g., children, immigrants, and ethnic minorities, and to the related problems of possible exclusion and inequality.

Fairness and Justice

Four types of AI-related fairness need to be distinguished – data, design, outcome, and implementation fairness. Data fairness requires mitigating biases, excluding discriminatory influences, and not generating discriminatory or inequitable impacts on affected individuals and communities. Design fairness requires that AI systems have model architectures that do not include target variables, features, processes, or analytical structures that are unreasonable, morally objectionable, or unjustifiable. Outcome fairness requires that AI systems do not have discriminatory or inequitable impacts on the lives of the people they affect. Implementation fairness requires that AI systems must be deployed by users sufficiently trained to implement them responsibly and without bias.
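As a concrete illustration of outcome fairness, one common operationalization (by no means the only one, and not prescribed by the paper) is to check whether positive decisions are distributed evenly across demographic groups. The sketch below, with hypothetical audit data, computes the demographic-parity gap:

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups.

    decisions: list of 0/1 outcomes (e.g., loan approved or not)
    groups: list of group labels, parallel to decisions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: approval decisions for two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 vs. 0.25 -> 0.5
```

A gap near zero would satisfy this particular outcome-fairness criterion; data, design, and implementation fairness would still need separate checks, since a parity metric alone says nothing about how the data were collected or how the system is deployed.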

Transparency and Explainability of AI Systems

The ordoliberal criterion of transparency demands explainability and open communication, open-source data and algorithms, open government procurement, the right to information, notification when an AI system makes a decision affecting an individual, notification when humans interact with AI technologies, and regular reporting.

Accountability and Responsibility

Accountability refers to verifiability, replicability, evaluation and assessment requirements, the creation of an oversight body, the ability to appeal and seek remedy for automated decisions, the principle of liability and legal responsibility, and the adoption of new regulations. Furthermore, algorithmic accountability relates to the public perception of AI business practices and to internal and external monitoring of those practices.

Democracy and the Rule of Law

An ordoliberal private law society requires embedding AI systems in democratic and rule-of-law societies, with adequate parliamentary and judicial oversight, similar to the concept of ‘deliberative order ethics.’ The idea is based on a contractual theory of business ethics that rests on participation and deliberation, i.e., inclusive, equal, diverse stakeholder dialogue and engagement processes and the so-called community-in-the-loop approach. 

Environmental and Social Sustainability

Environmental sustainability relates to the ecological impacts and carbon footprint of AI technologies, such as the significant energy consumption and corresponding greenhouse gas emissions of data centers or the problem of electronic waste. Social sustainability requires AI developers to proceed with continuous sensitivity to real-world impacts. Human rights due diligence and stakeholder impact assessment are crucial, i.e., assessing the possible effects on individuals, society, and interpersonal relationships. 
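The carbon footprint mentioned above is commonly estimated as hardware energy use, scaled by data-center overhead (PUE), times the grid's carbon intensity. The sketch below uses hypothetical figures, not numbers from the paper:

```python
def training_emissions_kg(power_kw, hours, pue, grid_kg_per_kwh):
    """Rough CO2e estimate for a compute workload.

    power_kw:        average power draw of the hardware (kW)
    hours:           runtime (h)
    pue:             data-center Power Usage Effectiveness (overhead factor)
    grid_kg_per_kwh: carbon intensity of the local grid (kg CO2e per kWh)
    """
    return power_kw * hours * pue * grid_kg_per_kwh

# Hypothetical run: 10 kW of accelerators for 100 hours in a facility
# with PUE 1.5, on a grid emitting 0.4 kg CO2e per kWh.
print(training_emissions_kg(10, 100, 1.5, 0.4))  # -> 600.0 kg CO2e
```

Even this simple arithmetic makes the point of the section tangible: the same workload can differ severalfold in emissions depending on facility efficiency and the carbon intensity of the local grid.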

An Ordoliberal Framework Policy for AI

From an ordoliberal perspective, implementing the above principles rests on two pillars – regulatory policy and competition policy. The E.U.’s Artificial Intelligence Act (AIA) is a crucial step toward realizing the first pillar of an ordoliberal framework for AI ethics. Yet, to bring the AIA closer to alignment with the ordoliberal ideal, several reform measures need to be taken, such as introducing or strengthening …

  • Conformity assessment procedures (CAPs),
  • Democratic accountability and judicial oversight,
  • Redress and complaint mechanisms,
  • Worker protection,
  • Governance structure,
  • Funding and staffing of market surveillance authorities, and
  • Sustainability considerations.

AI legislation must also be accompanied by an adequate competition policy to address the power asymmetries in the digital economy. As we have shown elsewhere, the current antitrust regimes of the E.U. and especially the U.S. are flawed: they cannot fully realize a competitive economy, open up markets, correct market power, limit lobbying and rent-seeking, adequately review and block M&As, or implement behavioral and structural remedies. The E.U. has shown promising potential, especially with its recent antitrust probes and policy proposals. Yet, to fully realize the above ordoliberal criteria, the E.U.’s antitrust regime needs to be further strengthened, including hardening the Digital Markets Act (DMA). This could be achieved with the help of the following ordoliberal-inspired reform proposals:

  • Updating antitrust laws and making them fit for the digital economy,
  • Shifting the burden of proof from competition agencies to the merging parties,
  • Establishing an anti-merger presumption,
  • Revising existing merger guidelines and introducing ex-post-merger control,
  • Making more frequent use of behavioral and structural remedies,
  • Ensuring platform neutrality similar to net neutrality,
  • Better funding and staffing of antitrust agencies,
  • Increasing monetary penalties for anti-competitive business practices,
  • Better protection of whistle-blowers, and
  • Enhanced international cooperation.

Between the lines

Our research has, so far, focused on the macro level (i.e., nation-states and governments). In the next step of our research project, we broaden our approach and incorporate the micro (i.e., data scientists and AI researchers) and meso (i.e., corporations and organizations) levels. We specifically aim to provide concrete guidance for AI developers, operators, and policy advisors – going above and beyond the general overview of reform measures presented in this paper – in the hope that some of the suggested policy proposals will find their way into upcoming AI ethics and antitrust legislation, professional and corporate codes of conduct, and business practices around the globe.



© Montreal AI Ethics Institute 2024. This work is licensed under a Creative Commons Attribution 4.0 International License.