Montreal AI Ethics Institute

Democratizing AI ethics literacy


Considerations for Closed Messaging Research in Democratic Contexts (Research summary)

November 2, 2020

Summary contributed by our learning community member Khoa Lam (Technology Strategy Researcher, Uncharted Power)

*Link to original paper + authors at the bottom.


Overview: Closed messaging apps such as WhatsApp, Facebook Messenger, and WeChat have grown in use in recent years and can serve as channels for spreading political information. In studying election-related communications, researchers face ethical conundrums due to the encrypted, private nature of group chats. Sehat and Kaminski review four models used by researchers: voluntary contribution, focused partnerships, entrance with identification, and entrance without identification. They conclude by posing and analyzing six ethical questions that researchers confront, implicitly or explicitly, before collecting and analyzing closed messaging data. These questions touch upon public vs. private chats, data ownership, informed consent, insight sharing, and conflicts of interest.


Full summary:

The use of closed messaging apps has grown in recent years, and their impact on public election-related discussion poses challenges and ethical conundrums for researchers. Sehat and Kaminski review four research practices within these apps and explore several key questions to clarify the ethical considerations researchers face.

The popularity of messenger apps has risen globally, with WhatsApp, Facebook Messenger, and WeChat each counting hundreds of millions to billions of monthly users. These apps offer a plethora of features, including instant messaging, asynchronous reach, and opt-in encryption, as well as transnational texting, voice, and video service at no additional cost.

It comes as no surprise that closed messaging apps can act as a powerful tool for spreading political information. However, because encrypted privacy is embedded in their design, investigations and studies of misinformation within these apps often raise difficult questions of professional ethics.

The authors lay out four models typically used in practice:

  1. Voluntary contribution

In the first model, researchers do not enter the chats and instead receive message texts from users with consent. This approach was implemented either as tip lines (e.g., during the 2018 Brazilian elections or the 2019 Indian elections) or via voluntary submissions in one-to-one and one-to-many broadcasts (e.g., in the 2015 Nigerian presidential elections).

  2. Collection through focused partnerships

In the second model, researchers enter chat spaces directly themselves as part of a collaborative election tracking project. Analysts collect messages over a period of time and, as a result, can examine the message texts, sender details, and their conversational contexts. This model was implemented during the 2016 Ghanaian general election, where a collective of organizations established the Social Media Tracking Center (SMTC) to monitor social media messages for violence and election threats.

  3. Entrance with announcement or identification

Researchers who employ the third model leverage the ambiguous nature of private chat invitations and publicly available links. They enter chats under their researcher identities, with or without announcement, and allow removal or withdrawal when requested. This approach was used in some studies of the 2019 Indian elections.

  4. Entrance without identification

In the fourth model, researchers enter chats without disclosing their identities and purposes, which raises the question of what constitutes a public chat group and what that implies for research. This approach was used during the 2018 and 2019 Brazilian elections by various academic organizations.

To conclude, Sehat and Kaminski explore six ethical questions that researchers decide, implicitly or explicitly, prior to the collection and analysis of closed messaging texts:

  1. Exactly when is a closed message chat “public”?
  2. Who does the data belong to?
  3. What are the obligations for researcher disclosure and/or informed consent?
  4. When should researchers report the findings of their studies back to the groups involved?
  5. Are these the questions that researchers can ask the public?
  6. Are these the questions that researchers should discuss with companies?

The authors further analyze the intricacies of these questions, considering conditions such as indexed invites, discussion topics, scope, group size, and user expectations. Answers to these questions are also guided by laws, regulations, and both historical and recent court rulings. The last two questions in particular address concerns about conflicts of interest and data access while, at the same time, opening up opportunities for cross-organization and interdisciplinary collaboration.


Original paper by Connie Moon Sehat, Aleksei Kaminski: https://electionstandards.cartercenter.org/wp-content/uploads/2020/09/Considerations-for-Closed-Messaging-Research-in-Democratic-Contexts-Sept-2020-1.4.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.

    Save hours of work and stay on top of Responsible AI research and reporting with our bi-weekly email newsletter.