
Considerations for Closed Messaging Research in Democratic Contexts (Research summary)

November 2, 2020

Summary contributed by our learning community member Khoa Lam (Technology Strategy Researcher, Uncharted Power)

*Link to original paper + authors at the bottom.


Overview: Closed messaging apps such as WhatsApp, Facebook Messenger, and WeChat have grown in use in recent years and can serve as powerful channels for spreading political information. In studying election-related communications, researchers face ethical conundrums due to the encrypted, private nature of group chats. Sehat and Kaminski review four models used by researchers: voluntary contribution, focused partnerships, entrance with identification, and entrance without identification. They conclude by posing and analyzing six ethical questions that researchers consider, either implicitly or explicitly, prior to collecting and analyzing closed messaging data. These questions touch upon issues of public vs. private chats, data ownership, informed consent, insight sharing, and conflict of interest.


Full summary:

The use of closed messaging apps has grown in recent years, and their impact on election-related public discussion poses challenges and ethical conundrums for researchers. Sehat and Kaminski review four research practices within these apps and explore key questions that clarify the ethical considerations researchers face.

The popularity of messenger apps has risen globally: WhatsApp, Facebook Messenger, and WeChat each count from hundreds of millions to billions of monthly users. These apps offer a plethora of features, including instant messaging, asynchronous reach, and opt-in encrypted privacy, as well as transnational texting, phone, and video service at no additional cost.

It comes as no surprise that closed messaging apps can act as powerful political tools for spreading information. However, because encrypted privacy is embedded in their design, investigations and studies of misinformation within these apps often run into difficult questions of professional ethics.

The authors lay out four models typically used in practice:

  1. Voluntary contribution

In the first model, researchers do not enter the chats themselves and instead receive message texts from users with consent. This approach has been implemented as tip lines (e.g., during the 2018 Brazilian and 2019 Indian elections) or via voluntary submissions of one-to-one and one-to-many broadcasts (e.g., in the 2015 Nigerian presidential election).

  2. Collection through focused partnerships

In the second model, researchers enter chat spaces directly themselves, as part of a collaborative election tracking project. Analysts collect messages for a period of time and can thus examine message texts and sender details along with their conversational contexts. This model was implemented during the 2016 Ghanaian general election, where a collective of organizations established the Social Media Tracking Center (SMTC) to monitor social media messages for violence and election threats.

  3. Entrance with announcement or identification

Researchers who employ the third model leverage the ambiguous nature of private chat invitations and publicly available links. They enter the chats under their researcher identities, with or without an announcement, and allow for removal or withdrawal when requested. This approach was used in some studies of the 2019 Indian elections.

  4. Entrance without identification

The fourth model, in which researchers enter the chats without disclosing their researcher identities and purposes, raises the question of what constitutes a public chat group and what that implies for research. This approach was used during the 2018 and 2019 Brazilian elections by various academic organizations.

To conclude, Sehat and Kaminski explore the ethical questions that researchers implicitly or explicitly settle prior to the collection and analysis of closed messaging texts:

  1. Exactly when is a closed message chat “public”?
  2. Who does the data belong to?
  3. What are the researchers' obligations regarding disclosure and/or informed consent?
  4. When should researchers report the findings of their studies back to the groups involved?
  5. Are these the questions that researchers can ask the public?
  6. Are these the questions that researchers should discuss with companies?

The authors further analyze the intricacies within these questions, considering conditions such as indexed invites, discussion topics, scope, group size, and participant expectations. Answers to these questions are also guided by laws, regulations, and both historical and recent court rulings. The last two questions in particular address concerns about conflict of interest and data access while opening up opportunities for cross-organization and interdisciplinary collaboration.


Original paper by Connie Moon Sehat, Aleksei Kaminski: https://electionstandards.cartercenter.org/wp-content/uploads/2020/09/Considerations-for-Closed-Messaging-Research-in-Democratic-Contexts-Sept-2020-1.4.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
