Montreal AI Ethics Institute
The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power (Research Summary)

November 18, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Link to original paper + authors at the bottom.


Overview: Tackling the latest European regulations on online misinformation, Helberger challenges the current approach of viewing digital media platforms as informational ‘intermediaries’ and offers the concept of opinion power to demonstrate how social media platforms are becoming “governors of public opinion”.


Since the advent of social media, lawmakers have struggled to keep up with new communication technologies and the whirlpool of informational chaos they create. The risks posed by misinformation to democracies have been well-documented, and recent regulatory initiatives, according to Helberger, have failed to cause sufficient friction in the well-oiled fake news machine. Digital media platforms erode the watchdog function, traditionally ascribed to the press, of denouncing excesses of political power. For Helberger, current regulatory frameworks only add fuel to the fire by failing to view social media platforms as “political in their own right.” Rather than focusing on users, governments should treat digital media platforms as political entities capable of exercising opinion power over a global online audience.

For Helberger, many of the current tactics aimed at reining in the power of big tech are grounded in the realm of antitrust and competition law. Facebook, Alphabet (i.e. Google), and Twitter have annual revenues standing respectively at $70.7 billion, $161.9 billion and $3.46 billion, which go well beyond the yearly GDP of small nation-states. The global economic force of Big Tech is manifested in efforts to prevent any new legislation that might disengage users or hinder growth. This year, in its Open Letter to Australians, Google actively encouraged its users to speak out against a proposed Australian law that would require search engines such as Google to pay Australian media companies for using their stories on its site. Google even went so far as to insert a yellow hazard warning below the main page search bar, with the accompanying message “the way Aussies search every day on Google is at risk from new government regulation.”

Digital media platforms have not shied away from exercising their political muscle. Yet, we tend to treat these interventions in the political space as examples of corporate lobbying rather than of governmental or political actors asserting a right to population control. Helberger encourages us to view digital platforms as governments, with their own pool of citizens and their own legal corpus (i.e. the Terms of Use, Privacy Policies and community guidelines), and as wielders of opinion power. Helberger draws this concept of opinion power from the German Federal Constitutional Court (i.e. a translation of Meinungsmacht). It is defined as “the ability of the media to influence individual processes and public opinion formation.” Germany uses this legal notion of opinion power to argue that any imbalance in public opinion formation, whereby one discourse predominates over another, poses a severe threat to the pluralistic media landscape and places democracy, and human lives, in peril.

Germany’s 20th-century history is a sober reminder of the destructive power of widespread propaganda. Yet, the same patterns of disseminating hate speech, monopolizing public discourse against identifiable minorities and curtailing citizens’ access to verifiable information were deployed via Facebook during the genocide against the Rohingya in Myanmar. Facebook not only mediated users’ opinions and served as a digital infrastructure that influenced political processes; its very existence enabled the movement to gain traction, and its algorithms sped up and shaped the unfolding of the violence. For Helberger, social media platforms ought to be treated as political actors with their own power of opinion – they are political forces in and of themselves.

Germany, France, and the UK have all introduced legislation requiring greater accountability and oversight from big tech in monitoring the content on their platforms, backed by considerable fines and penalties. By doing so, these countries legitimize digital media platforms’ right to act as governors of public opinion, allowing them to become “the new self-government of the online global population” and thereby increasing their power to dictate and determine how civic discourse takes shape. According to Helberger, the European Commission’s Shaping Europe’s Digital Future strategy is insufficient in dealing with opinion power. Such proposals fail to cover Facebook and YouTube’s capacity to commission content and conclude deals with rights holders. One potential instrument to curtail these platforms’ opinion power is media concentration law, but this will require adjusting the traditional measurements of audience reach and ownership limitations.

Helberger concludes by asserting that “dispersing concentrations of opinion power and creating countervailing powers is essential to preventing certain social media platforms from becoming quasi-governments of online speech, while also ensuring that they each remain one of many platforms that allow us to engage in public debate”. Indeed, the concept of opinion power and its associated legal workings are a welcome and refreshing addition to the public debate on how to regulate misinformation. It also asks lawmakers to critically assess whether their policy efforts are simply strengthening and cementing the political powers of Big Tech.


Original paper by Natali Helberger: https://doi.org/10.1080/21670811.2020.1773888


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.