Montreal AI Ethics Institute

Putting collective intelligence to the enforcement of the Digital Services Act

August 9, 2023

🔬 Research Summary by Dr. Suzanne Vergnolle, an Associate Professor of Technology Law at the Cnam (Conservatoire National des Arts et Métiers), where she works at the intersection of law, technology, and public policy.

[Original paper by Dr. Suzanne Vergnolle]


Overview: Cooperation between regulators and civil society organizations is an excellent way to foster a comprehensive approach, address societal concerns, and ensure effective governance in areas such as policymaking, enforcement, and the protection of rights. Building on this premise, the present report offers concrete recommendations for designing an efficient and influential expert group with the European Commission to lead operational, evidence-based enforcement of the Digital Services Act.


Introduction

Online platforms play a significant role in today’s digital landscape, providing channels for communication, content sharing, and commerce on a massive scale. The challenge of removing the disturbing video of the terrorist attack in Christchurch remains ingrained in memory, illustrating the arduous task platforms face in swiftly and diligently removing illegal content. In response to the need for a safer and more accountable online environment, the European legislature adopted the DSA (Digital Services Act) in 2022.

This new regulation reconciles the liability exemption for intermediary services established in the former e-commerce directive with new due diligence obligations for mitigating the risks intermediary services create for society, including phenomena like hate speech, discrimination, and disinformation. The new rules have real potential to improve the practices of online services, but their actual impact will only be as good as their implementation and enforcement. While the enforcement system involves multiple actors, the supervision of the due diligence obligations of VLOPs (Very Large Online Platforms) in the European Union relies exclusively on the European Commission. Given that the Commission is currently getting organized to implement its new enforcement powers, it is the perfect time to reflect on how these powers can build upon collective intelligence (CI) and on how to design collaborative mechanisms that ensure the effective enforcement of the DSA.

The aim of this report is precisely to provide key recommendations and expert advice on how to develop resourceful and fruitful collaboration mechanisms between the Commission and CSOs (Civil Society Organizations), notably by establishing an expert group. The recommendations follow a four-step method that combines research on how to sensibly involve stakeholders with a wide range of interviews with regulators, experts in digital policy and participatory mechanisms, and members of existing expert groups.

Key Insights

Why involve third parties in the enforcement of legal rules?

Considering involving third parties in enforcing legal rules may come as a surprise. Usually, when thinking about enforcement, one pictures a courtroom where parties present their arguments to a neutral officer charged with hearing and deciding their case. Yet enforcement is not limited to the resolution of disputes: it also includes monitoring compliance and deciding whom to investigate, two missions where third parties’ expertise can bring valuable input and save the regulator time. Building upon external expertise can bring evidence-based input and help target the most pressing issues for the parties involved, particularly by hearing the voices of people whose rights have been harmed. More generally, welcoming external contributions is linked to an open and participatory governance model, both central to efficiency and trust in institutions.

How to establish a fruitful setting for the involvement of third parties?

After discussing various collaboration mechanisms, including public consultations, committees, conferences, and tech-oriented events, the report focuses on a specific mechanism – an expert group – which is considered a good setting for building a lasting and trustful relationship between the Commission and involved parties. Several advantages justify establishing such an expert group. Unlike events that happen only irregularly, expert groups can serve as a reliable platform for continuous dialogue. Unlike public consultations open to contributions from a wide audience, expert groups bring targeted and specialized expertise. Expert groups are therefore considered a good way to involve third parties on a long-term basis while leaving room for other, complementary mechanisms.

Whom to involve – or not – in the expert group?

When considering who should be involved in the expert group, the report discusses its composition at length, emphasizing the selection process as a key element of its success. More specifically, a good balance of the interests covered by the DSA, spanning topics from platform monitoring and governance to human rights, non-discrimination, children’s protection, and trust and safety, is considered important. Concrete recommendations on how to design the call for experts are therefore formulated. The report also details which categories of third parties should be represented: the group should mainly comprise CSOs, independent experts, and scholars. On the premise that industry is the target of the regulation and is already well represented in many other fora, the report considers that it should not have permanent representation in the expert group.

How should the expert group be administered? 

The framework established by the 2016 Commission Decision for the creation and operation of expert groups is a well-thought-out structure that provides adaptability in its implementation. As such, it serves as the structural basis for many of the recommendations. For instance, the report advocates for a mixed secretariat and a joint chairpersonship, both possible under the framework. On logistics, the report discusses measures fostering inclusiveness, such as well-organized remote meetings and the possibility of compensation for work performed in the group. Ensuring the capacity to be compensated, particularly for participants representing civil society organizations, was considered a critical point by many of the experts interviewed.

Conclusion and Implications

The Digital Services Act is promising on many levels. One of its promises is to hold very large services accountable in proportion to their influence on society. To deliver on this promise, the Commission must provide guidance and ensure there are sanctions in case of violations. To do so, the Commission should prioritize establishing an expert group to harness CI. While this report offers justifications and best practices for establishing an expert group with the Commission, most of its recommendations are not limited to this specific group: they can easily be applied to other committees or groups that want to build on collective intelligence and to treat inclusiveness, participation, and efficiency as core principles.
