Montreal AI Ethics Institute


The Social Metaverse: Battle for Privacy

May 18, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Ben Falchuk, Shoshana Loeb, and Ralph Neff]


Overview: Our lives now extend into both the physical and digital worlds. We have traded away some of our privacy for security before, but the metaverse asks for an even greater sacrifice.


Introduction

The metaverse poses an exciting challenge for privacy. The metaverse itself consists of a monitored virtual reality (VR) space populated by interactions between avatars (virtual characters). Here, we may be aware that observation is going on, but we are not always aware of what is being monitored or to what degree. While we give up some of our privacy for security measures, the development of VR applications means our movements can be known in greater detail than ever before. So how does this affect our privacy?

Key Insights

To begin with, privacy can come in three different modes:

  • Privacy of personal information (such as medical history).
  • Privacy of behaviour (such as purchasing choices).
  • Privacy of communications (anything related to sending/receiving messages, calls, etc.).

Given the extensive monitoring within the metaverse, all three of these modes can be intruded upon. Hence, how does the physics of the metaverse permit such intrusion?

The metaverse

The metaverse engine provides the compute that dictates the physics and appearance of the space. Consequently, actions are restricted to what the engine can offer, while all that is afforded by the engine is subject to analysis. This includes the three different modes of privacy.

In terms of actors, avatars can affect the world by creating and destroying objects, but they must always play by the rules of the metaverse. In doing so, data analytics gives metaverse developers valuable information about how their application is used, along with insights into the users themselves. Yet data analysis is not restricted to developers alone: in-game harassment through observation is a real threat.

Harassment in the metaverse

Interaction within the digital space carries the tricky feature that everything can be recorded. Constant interaction within the metaverse therefore increases the provider’s ability to learn your preferences and tendencies, allowing them to craft nudges that steer you toward a particular outcome. Not only that, but other avatars could also arrive at such knowledge, allowing them to impersonate other agents. Consequently, practical solutions to this possibility are required.

Mechanisms to mitigate threats

The authors list potential mechanisms to help avoid threats within the metaverse. They elaborate on a “privacy plan” (p. 55): a package of separate actions that avatars can undertake to mitigate any privacy harms they may encounter. The plan centres on creating confusion among other agents, obscuring the personal details that could be discovered about a particular avatar. The actions are as follows:

  • A “cloud of clones” (p. 55) – deploying multiple clones of your avatar with similar habits to confuse other surrounding agents. This could include assigning movement to confound any observers further.
  • Private Copy – establishing a personal space/copy of a part of the virtual world free from observation. This can later be deleted or added to the main fabric of the metaverse.
  • Mannequin – a singular avatar copy is manifested while the actual avatar is transported elsewhere.
  • Lockout – a part of the VR world is blocked off to other users for private use.
  • Disguise – the avatar can operate in the main fabric of the metaverse, but with a different external appearance.
  • Teleport – the user can opt to be teleported to a completely different location.
  • Invisibility – the avatar can operate without being seen/monitored by others.

These actions are not mutually exclusive; they can be combined. For example, a user could deploy a mannequin while also teleporting to another location.
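Since the paper treats a privacy plan as a package of combinable actions, the idea can be sketched in code. This is a minimal illustration only: the class, enum, and method names below are hypothetical and do not come from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class PrivacyAction(Enum):
    # Hypothetical encoding of the seven actions listed above
    CLOUD_OF_CLONES = auto()
    PRIVATE_COPY = auto()
    MANNEQUIN = auto()
    LOCKOUT = auto()
    DISGUISE = auto()
    TELEPORT = auto()
    INVISIBILITY = auto()

@dataclass
class PrivacyPlan:
    """A package of combinable privacy actions for one avatar."""
    actions: set = field(default_factory=set)

    def add(self, action: PrivacyAction) -> "PrivacyPlan":
        # Actions compose: adding one never removes another
        self.actions.add(action)
        return self

    def conceals_location(self) -> bool:
        # True if any chosen action hides where the real avatar is
        return bool(self.actions & {PrivacyAction.MANNEQUIN,
                                    PrivacyAction.TELEPORT,
                                    PrivacyAction.INVISIBILITY})

# The combination from the text: a mannequin plus a teleport
plan = PrivacyPlan().add(PrivacyAction.MANNEQUIN).add(PrivacyAction.TELEPORT)
print(sorted(a.name for a in plan.actions))  # ['MANNEQUIN', 'TELEPORT']
print(plan.conceals_location())  # True
```

Modelling the plan as a set makes the "not mutually exclusive" property explicit: any subset of the seven actions is a valid plan.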

However, while these methods offer an interesting and valuable approach to mitigating privacy threats, there are ways around them. For example, if other avatars have a method of locking on to a particular avatar, they will still be able to track it. Hence, which actions the metaverse permits (such as latching onto a specific agent) will prove crucial to the success of these mechanisms.

Between the lines

The metaverse will bring unique opportunities and disadvantages. The ways to protect privacy will prove innovative, but they depend on the metaverse itself. Furthermore, I believe it will be difficult to stop privacy-preserving mechanisms from turning into opportunities for underground operations. The chance to rid yourself of any monitoring and conduct your business without anyone possibly knowing would, I believe, prove too lucrative for malicious actors to turn down. Hence, we enter another privacy balancing act between intimacy and security. While the physical and digital spheres differ in their physics, they share this dilemma.
