Montreal AI Ethics Institute

Democratizing AI ethics literacy


Putting collective intelligence to the enforcement of the Digital Services Act

August 9, 2023

🔬 Research Summary by Dr. Suzanne Vergnolle, an Associate Professor of Technology Law at the Cnam (Conservatoire National des Arts et Métiers), where she works at the intersection of law, technology, and public policy.

Original paper by Dr. Suzanne Vergnolle


Overview: Cooperation between regulators and civil society organizations is an excellent way to foster a comprehensive approach, address societal concerns, and ensure effective governance in areas such as policymaking, enforcement, and the protection of rights. Building on this premise, the present report offers concrete recommendations for designing an efficient and influential expert group with the European Commission to lead operational, evidence-based enforcement of the Digital Services Act.


Introduction

Online platforms play a significant role in today’s digital landscape, enabling communication, content sharing, and commerce on a massive scale. The challenge of removing the disturbing video of the terrorist attack in Christchurch remains ingrained in memory, illustrating the arduous task platforms face in swiftly and diligently removing illegal content. In response to the need for a safer and more accountable online environment, the European legislature adopted the Digital Services Act (DSA) in 2022.

This new regulation reconciles the liability exemption for intermediary services established in the former e-commerce directive with new due diligence obligations for mitigating the risks intermediary services create for society, including phenomena like hate speech, discrimination, and disinformation. The new rules have real potential to improve the practices of online services, but their actual impact will only be as good as their implementation and enforcement. While the enforcement system involves multiple actors, supervision of the due diligence obligations of Very Large Online Platforms (VLOPs) in the European Union relies exclusively on the European Commission. Given that the Commission is currently organizing itself to exercise its new enforcement powers, now is the perfect time to reflect on how these powers can build upon collective intelligence (CI) and on how to design collaborative mechanisms to ensure the effective enforcement of the DSA.

The aim of this report is precisely to provide key recommendations and expert advice on how to develop resourceful and fruitful collaboration mechanisms between the Commission and civil society organizations (CSOs), notably by establishing an expert group. The recommendations are based on a four-step method, combining research on how to sensibly involve stakeholders with a wide range of interviews with regulators, experts in digital policy and participatory mechanisms, and members of existing expert groups.

Key Insights

Why involve third parties in the enforcement of legal rules?

Involving third parties in enforcing legal rules may come as a surprise. Usually, when thinking about enforcement, one may picture a courtroom where parties present their arguments to a neutral officer in charge of hearing and deciding their case. Yet enforcement is not limited to the resolution of a dispute. It also includes monitoring compliance and deciding whom to investigate, two missions where third parties’ expertise can bring valuable input and save the regulator time. Building upon external expertise can bring evidence-based inputs and help target the most pressing issues for the parties involved, particularly by hearing the voices of people whose rights have been harmed. More generally, welcoming external contributions is linked to an open and participatory governance model, both central to efficiency and trust in institutions.

How to establish a fruitful setting for the involvement of third parties?

After discussing various collaboration mechanisms, including public consultations, committees, conferences, and tech-oriented events, the report focuses on a specific mechanism, the expert group, which is considered a good setting for building a lasting and trustful relationship between the Commission and the involved parties. Several advantages justify establishing such an expert group. Unlike events that happen only irregularly, expert groups can serve as a reliable platform for continuous dialogue. Unlike public consultations open to contributions from a wide audience, expert groups bring targeted and specialized expertise. Expert groups are therefore considered a good way to involve third parties on a long-term basis while leaving room for other mechanisms to complement them.

Whom to involve – or not – in the expert group?

When considering who should be involved in the expert group, the report discusses its composition at length, emphasizing the selection process as a key element of its success. More specifically, a good balance of the interests covered by the DSA, spanning platform monitoring and governance, human rights, non-discrimination, children’s protection, and trust and safety, is considered important. The report therefore formulates concrete recommendations on how to design the call for experts. It also details which categories of third parties should be represented: the group should mainly comprise CSOs, independent experts, and scholars. Because the industry is the target of the regulation and is already well represented in many other venues, the report considers that it should not benefit from permanent representation in the expert group.

How should the expert group be administered? 

The framework established by the 2016 Commission Decision on the creation and operation of expert groups is a well-thought-out structure that provides adaptability in its implementation. As such, it serves as the structural basis for many of the recommendations. For instance, the report advocates for a mixed secretariat and a joint chairpersonship, both possible under the framework. On logistics, the report discusses measures fostering inclusiveness, such as well-organized remote meetings and the possibility of compensation for work performed in the group. The capacity to be compensated, particularly for participants representing civil society organizations, was considered a critical point by many of the experts interviewed.

Conclusion and Implications

The Digital Services Act is promising on many levels. One of its promises is to hold very large services accountable in proportion to their influence on society. To deliver on this promise, the Commission must provide guidance and ensure there are sanctions in case of violations. To do so, the Commission should prioritize establishing an expert group to harness CI. While this report offers justifications and best practices for establishing an expert group with the Commission, most recommendations are not limited to this specific group. They can easily be applied to other committees or groups that want to build on collective intelligence and prioritize inclusiveness, participation, and efficiency as core principles.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.