Montreal AI Ethics Institute


The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power (Research Summary)

November 18, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Link to original paper + authors at the bottom.


Overview: Tackling the latest European regulations on online misinformation, Helberger challenges the current approach of viewing digital media platforms as informational ‘intermediaries’ and offers the concept of opinion power to demonstrate how social media platforms are becoming “governors of public opinion.”


Since the advent of social media, lawmakers have struggled to keep up with new communication technologies and the whirlpool of informational chaos they create. The risks posed by misinformation to democracies have been well documented, and recent regulatory initiatives, according to Helberger, have failed to cause sufficient friction in the well-oiled fake news machine. Digital media platforms erode the watchdog function ascribed to traditional media: denouncing excesses of political power. For Helberger, current regulatory frameworks only add fuel to the fire by failing to view social media platforms as “political in their own right.” Rather than focusing on users, governments should treat digital media platforms as political entities capable of exercising opinion power over a global online audience.

For Helberger, many of the current tactics aimed at reining in the power of Big Tech are grounded in the realm of antitrust and competition law. Facebook, Alphabet (i.e. Google), and Twitter have annual revenues of $70.7 billion, $161.9 billion, and $3.46 billion respectively, figures that go well beyond the yearly GDP of small nation-states. The global economic force of Big Tech manifests in efforts to prevent any new legislation that might disengage their users or hinder their growth. This year, in its Open Letter to Australians, Google actively encouraged its users to speak out against a proposed Australian law that would require search engines such as Google to pay Australian media companies for using their stories on its site. Google even went so far as to insert a yellow hazard warning below the main page search bar, with the accompanying message: “the way Aussies search every day on Google is at risk from new government regulation.”

Digital media platforms have not shied away from exercising their political muscle. Yet we tend to treat these interventions in the political space as corporate lobbying rather than as governmental or political actors exerting control over a population. Helberger encourages us to view digital platforms as governments, which have their own pool of citizens and their own legal corpus (i.e. the Terms of Use, Privacy Policies, and community guidelines), and as wielders of opinion power. Helberger draws this concept of opinion power from the German Federal Constitutional Court (a translation of Meinungsmacht). It is defined as “the ability of the media to influence individual processes and public opinion formation.” Germany uses this legal notion of opinion power to argue that any imbalance in public opinion formation, whereby one discourse predominates over another, poses a severe threat to the pluralistic media landscape and places democracy, and human lives, in peril.

Germany’s 20th-century history is a sober reminder of the destructive power of widespread propaganda. Yet the same patterns of disseminating hate speech, monopolizing public discourse against identifiable minorities, and curtailing citizens’ access to verifiable information played out on Facebook during the genocide against the Rohingya in Myanmar. Facebook not only mediated users’ opinions and served as a digital infrastructure that influenced political processes; its very existence enabled the movement to gain traction, and its algorithms sped up and shaped the unfolding of the violence. For Helberger, social media platforms ought to be treated as political actors with their own power of opinion; they are political forces in and of themselves.

Germany, France, and the UK have all introduced legislation requiring greater accountability and oversight from Big Tech in monitoring the content on their platforms, backed by considerable fines and penalties. By doing so, these countries legitimize digital media platforms’ right to act as governors of public opinion, allowing them to become “the new self-government of the online global population” and increasing their power to dictate and determine how civic discourse takes shape. According to Helberger, the European Commission’s Shaping Europe’s Digital Future strategy is insufficient for dealing with opinion power. Such proposals fail to cover Facebook and YouTube’s capacity to commission content and conclude deals with rights holders. One potential instrument to curtail these platforms’ opinion power is media concentration law, but it will require adjusting the traditional measurements of audience reach and ownership limitations.

Helberger concludes by asserting that “dispersing concentrations of opinion power and creating countervailing powers is essential to preventing certain social media platforms from becoming quasi-governments of online speech, while also ensuring that they each remain one of many platforms that allow us to engage in public debate”. Indeed, the concept of opinion power and its associated legal workings are a welcome and refreshing addition to the public debate on how to regulate misinformation. It also asks lawmakers to critically assess whether their policy efforts are simply strengthening and cementing the political powers of Big Tech.


Original paper by Natali Helberger: https://doi.org/10.1080/21670811.2020.1773888

