Research summary: Detecting Misinformation on WhatsApp without Breaking Encryption

June 29, 2020

Summary contributed by Brooke Criswell (@Brooke_Criswell). She is pursuing a Ph.D. in media psychology and has extensive experience in marketing & communications.

*Reference at the bottom


Facebook may own WhatsApp, but the app differs from typical social media sites such as Facebook and Twitter: its end-to-end encryption (E2EE) makes it unique as a communication platform. WhatsApp has over 1.5 billion users and has become a source for sharing news in countries like Brazil and India, where smartphones are used for news access more than other devices (Reis et al., 2020). This research study focuses on these two countries, Brazil and India, and how misinformation has affected democratic discussion there. Over 55 billion messages are sent per day, of which about 4.5 billion are images (Reis et al., 2020). Because of the encryption, WhatsApp has no way to monitor or flag inappropriate, potentially dangerous, or fake images the way Facebook can. The researchers propose a machine learning approach in which WhatsApp automatically detects when a user shares images and videos that have previously been labeled as misinformation in Facebook's database. This would abide by E2EE and not compromise the encryption or privacy of the user (Reis et al., 2020).

Facebook already has many partnerships with fact-checking agencies around the world, so the database would not be difficult to obtain. Algorithms would be implemented for hashing and matching similar media content. “A hashing algorithm provides a signature to represent an image or video” (Reis et al., 2020). The researchers focused on two types of hash functions for this proposal: cryptographic hashes and perceptual hashes. A cryptographic hash is a one-way hash function, based on techniques like MD5 or SHA, that produces a string hash for a given image. It would identify exact matches only, whereas a perceptual hash can identify similar images, so content could be flagged even if the image was altered (Reis et al., 2020).

There are already multiple algorithms, including Facebook’s PDQ hashing, that allow this to be done.
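
To make the distinction concrete, here is a minimal sketch contrasting the two hash types in Python. It uses the standard hashlib module for the cryptographic hash and the open-source imagehash library as a stand-in for a perceptual hash such as PDQ; the file names and the distance threshold are illustrative assumptions, not values from the paper.

    import hashlib

    from PIL import Image
    import imagehash  # third-party perceptual-hashing library (pip install ImageHash)

    # Cryptographic hash: any change to the file's bytes yields a different
    # digest, so it only catches exact re-shares of the same file.
    with open("original.jpg", "rb") as f:
        crypto_hash = hashlib.sha256(f.read()).hexdigest()

    # Perceptual hash: derived from visual content, so a resized or
    # re-compressed copy still lands close to the original.
    phash_original = imagehash.phash(Image.open("original.jpg"))
    phash_altered = imagehash.phash(Image.open("resized_copy.jpg"))

    # The difference between two perceptual hashes is a Hamming distance;
    # a small distance (here, an assumed threshold of 10 bits) suggests
    # the images are visually near-duplicates.
    if phash_original - phash_altered <= 10:
        print("Likely the same image despite alteration")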

Another part of this model: once a user intends to send an image, WhatsApp checks whether it is already in the hashed set. If so, a warning confirmation asks whether the user still wants to share it (Reis et al., 2020). When the recipient gets the message, WhatsApp decrypts the image on the phone, computes a perceptual hash, and flags the content if it matches the database (Reis et al., 2020). The warning message would also indicate where the item had already been fact-checked.
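
A hypothetical sketch of that send-time check follows; the function name, the stored hash value, and the threshold are illustrative assumptions, not WhatsApp's actual API or the paper's code. The key point is that the comparison runs entirely on the device, so no plaintext content leaves the phone.

    import imagehash
    from PIL import Image

    # Perceptual hashes of previously fact-checked misinformation,
    # shipped to the device so the check can run locally.
    FLAGGED_HASHES = {imagehash.hex_to_hash("d1c2b3a4f5e6d7c8")}  # placeholder value
    SIMILARITY_THRESHOLD = 10  # assumed max Hamming distance for a match

    def matches_flagged_content(image_path: str) -> bool:
        """Return True if the outgoing image is near a flagged hash."""
        h = imagehash.phash(Image.open(image_path))
        return any(h - flagged <= SIMILARITY_THRESHOLD for flagged in FLAGGED_HASHES)

    if matches_flagged_content("outgoing.jpg"):
        print("Warning: this image was previously fact-checked as false. Send anyway?")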

This new method could also benefit Facebook: by counting how many times a match occurred, the company could establish the prevalence and virality of different types of misinformation, and it could collect information about users who repeatedly send such content (Reis et al., 2020).

With this idea in mind, the researchers tested it in Brazil and India, with 17,465 users and 34,109 images in Brazil, and 63,500 users and 810,000 images in India. The dataset they used is publicly available.

In the study, they gathered fact-checked images by crawling all images from popular fact-checking websites in Brazil and India, then obtained the date on which each was fact-checked. Next, they used Google reverse image search to check whether one of the main fact-checking domains was returned. Images that passed this test were added to the final collection, which holds over 100,000 fact-checked images from Brazil and about 20,000 from India (Reis et al., 2020).

Next, they used PDQ hashing to cluster similar or identical images together.
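
A rough sketch of that clustering step, again substituting the imagehash library for PDQ and using an assumed distance threshold: each image joins the first cluster whose representative hash is within the threshold, otherwise it starts a new cluster.

    import imagehash
    from PIL import Image

    def cluster_images(paths, threshold=10):
        """Greedy clustering: group images whose perceptual hashes are
        within `threshold` bits of a cluster's representative hash."""
        clusters = []  # list of (representative_hash, member_paths)
        for path in paths:
            h = imagehash.phash(Image.open(path))
            for rep, members in clusters:
                if h - rep <= threshold:  # Hamming distance between hashes
                    members.append(path)
                    break
            else:
                clusters.append((h, [path]))
        return [members for _, members in clusters]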

In their findings, the results showed that 40.7 percent of the misinformation image shares in Brazil and 82.2 percent in India could have been avoided by flagging the image and preventing it from being forwarded after being fact-checked (Reis et al., 2020).

This study shows how important it is for technology companies to inform users about the content they are sending, so that users can make an educated decision about what information they spread to others.


Reis, J. C. S., Melo, P., Garimella, K., & Benevenuto, F. (2020). Detecting Misinformation on WhatsApp without Breaking Encryption. arXiv. https://arxiv.org/abs/2006.02471

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
