Montreal AI Ethics Institute


Democratizing AI ethics literacy


Listen to What They Say: Better Understand and Detect Online Misinformation with User Feedback

September 27, 2023

šŸ”¬ Research Summary by Hubert Etienne, a researcher in AI ethics, former Global Generative AI Ethics Lead at Meta, and the inventor of computational philosophy.

[Original paper by Hubert Etienne and Onur Ƈelebi]


Overview: This paper sheds new light on online misinformation by analyzing content reported by Facebook and Instagram users in France, the UK, and the US. It combines mixed methods to better understand what users report as ā€˜false news’ and why. It presents several major findings with significant implications both for regulators comparing the state of online misinformation across countries and for social media integrity teams improving algorithmic misinformation detection tools.


Introduction

Online misinformation is a crucial issue for societies to address, given the growing distrust citizens express toward their institutions. Yet current research suffers from three limitations. First, it is largely based on US data, providing a biased view of the phenomenon, as if misinformation were the same in every country. Second, it rarely analyses misinformation content itself (social media posts), relying instead on databases annotated by fact-checkers. Third, it approaches misinformation only from the perspective of fact-checkers, with little attention to how social media users perceive it.

This paper suggests an original approach to fill these gaps: leveraging mixed methods to examine misinformation from the reporters’ perspective, both at the content level and comparatively across regions and platforms.

The authors present the first research typology to classify user reports, i.e. social media posts reported by users as ā€˜false news’, and show that misinformation varies in volume, type, and manipulative technique across countries and social media platforms. They identify six manipulative techniques used to convey misinformation and four profiles of ā€˜reporters’, i.e. social media users who report online content to platform moderators for distinct purposes. This allows the authors to explain 55% of the inaccuracy in misinformation reporting, suggest ways to reduce it, and present an algorithmic model capable of classifying user reports to improve misinformation detection tools.

Key Insights

  1. A reporter-oriented approach to studying misinformation at the content level 

The paper presents the first research methodology to classify user reports, resulting from the human review of a dataset of 8,975 content items reported on Facebook and Instagram in France, the UK, and the US in June 2020.

  2. Country and platform specificities of content reported as misinformation 

The analysis of the dataset reveals three main findings:

  • Six manipulation techniques to frame and disseminate misinformation online are observed, including a novel one which seems to have emerged on Instagram in the US: ā€˜the excuse of casualness’. This new manipulation technique presents significantly more challenges for algorithmic detection and human moderation, therefore confirming the importance of better filtering of user reports to improve the accuracy of misinformation detection models. 
  • Meaningful distinctions are observed, suggesting that the volume of online misinformation content greatly varies between countries (France, the UK, the US) and platforms (Facebook, Instagram), together with the manipulative technique employed to disseminate it. For example, several signals suggest that the circulation of misinformation was significantly smaller in both volume and severity in France than in the US, with the UK occupying an intermediate position.
  • A possible convergence in misinformation content between platforms is revealed by a body of corroborating evidence. The great uniformity between content reported by Facebook and Instagram users in the US contrasts with the dissimilarity of content reported by Facebook and Instagram users in France. This suggests a convergence between content on the two platforms and confirms that misinformation is not only a Facebook problem anymore, but also a serious issue for Instagram.
  3. A plurality of reporter profiles offers multiple levers for noise reduction 

The authors identify four reporter profiles and associate these with distinct reasons for social media users to report content to moderators:

  1. Reporting false news to trigger moderators’ actions
  2. Reporting to annoy the content creator
  3. Reporting inappropriate content that is not misinformation 
  4. Reporting to draw the moderators’ attention to a problematic situation 

This allows them to break down the inaccuracy in misinformation reporting into four categories of ā€˜noise’ and suggest specific means of action to reduce each. While only 7% of content reported by users in the dataset was labeled ā€˜false news’ by fact-checkers, suggesting 93% inaccuracy in user reporting, the authors show that 55% of this inaccuracy should probably not be attributed to the reporters’ lack of seriousness, as was previously believed, but rather to confusion surrounding the reporting features and moderation rules.
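The accuracy figures above reduce to simple arithmetic. A minimal sketch: the 7% and 55% values come from the paper, while everything else (variable names, the framing of the percentages) is illustrative:

```python
# Back-of-the-envelope check of the reporting-accuracy figures cited above.
# Only the 7% and 55% values come from the paper; the rest is illustrative.
confirmed_false_news = 0.07            # share of reports rated 'false news'
inaccurate = 1 - confirmed_false_news  # the "93% inaccuracy" in reporting

# Share of ALL reports that are inaccurate yet attributable to confusion
# about reporting features and moderation rules, not to unserious reporting.
explained_by_confusion = 0.55 * inaccurate

print(f"{inaccurate:.0%} inaccurate; {explained_by_confusion:.1%} of all "
      f"reports reflect confusion rather than a lack of seriousness")
```

In other words, roughly half of all reports are misfiled for understandable reasons, which is what motivates redesigning the reporting features rather than discounting the signal.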

  4. Leveraging multi-signal reporting to improve misinformation detection 

Finally, the authors demonstrate the performance of a gradient-boosting classification model, trained on a combination of user-report signals, at identifying the different types of noise. The aim of this model is not to detect the type of content that fact-checkers would rate as false news, but rather to identify the most relevant reports to send to fact-checkers and to redirect misreported items to more suitable moderators.

In contrast with highly sophisticated detection models trained on massive datasets and leveraging many input signals, the value of this model lies in its capacity to achieve promising performance despite its extremely simple architecture, its training on a very small dataset, and its processing of only ten basic human input signals associated with the categories users employ to report a piece of content. While state-of-the-art misinformation classifiers leverage multimodal architectures to grasp the meaning of a post through its semantic and pictorial components, this model is blind to the content itself and ignores personal information about both the content creator and the reporter, as well as contextual information about how the content was shared and reported.
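The authors' model itself is not public. As a rough illustration of the approach described above (a simple gradient-boosted classifier over a handful of report-category signals, blind to the content), here is a minimal sketch on fully synthetic data; the feature and label definitions are assumptions, not the authors' taxonomy:

```python
# NOT the authors' code: an illustrative gradient-boosting sketch in the
# spirit of the model described above, trained on fully synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Ten binary report signals, standing in for the categories a user can
# tick when reporting a post ('false news', 'spam', ...). Synthetic here.
n = 600
X = rng.integers(0, 2, size=(n, 10))

# Hypothetical noise-type labels, driven here by the first two signals:
# 0 = genuine false-news report, 1 = report meant to annoy the creator,
# 2 = inappropriate but not misinformation, 3 = flagging another problem.
y = X[:, 0] * 2 + X[:, 1]

clf = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
clf.fit(X[:500], y[:500])
acc = clf.score(X[500:], y[500:])
print(f"held-out accuracy: {acc:.2f}")
```

Because the labels here are a deterministic function of two signals, the sketch classifies them near-perfectly; the point is only that a tiny model over report-category features can route reports, not that real noise types are this separable.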

Between the lines

By studying misinformation from the reporters’ perspective and at the content level, the authors’ goal was to refute the idea that user reports are a low-accuracy signal poorly suited to online misinformation detection. They demonstrate that user reporting is a complex signal composed of different types of feedback that should be understood and assessed separately. When approached as such, reporting offers a valuable signal not only for enhancing the performance of false news detection algorithms but also for identifying emerging trends in misinformation practices.

This approach paves the way for more participatory moderation frameworks that reward the best reporters by prioritizing their feedback over that of adversarial actors. The meaningful variations in the volume, type, topic, and manipulation technique of misinformation observed between countries and platforms also support the claim that misinformation is anything but a globally uniform phenomenon. Instead of generalizing the findings of US-centered studies, researchers, industry players, and policymakers should examine misinformation with respect to the specificities of each country and platform.



  • Ā© 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.