Montreal AI Ethics Institute

Democratizing AI ethics literacy


The Social Metaverse: Battle for Privacy

May 18, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Ben Falchuk, Shoshana Loeb, and Ralph Neff]


Overview: Our lives now extend across both the physical and digital worlds. We have traded away some of our privacy for security before, but the metaverse asks for an even greater sacrifice.


Introduction

The metaverse poses an exciting challenge for the privacy space. The metaverse itself consists of a monitored virtual reality (VR) space populated by interactions between avatars (virtual characters). We may be aware that observation is taking place, but we are not always aware of what is being monitored or to what degree. While we give up some of our privacy for security measures, the development of VR applications means our movements can be known in greater detail than ever before. How, then, does this affect our privacy?

Key Insights

To begin with, privacy can come in three different modes:

  • Privacy of personal information (such as medical history).
  • Privacy of behaviour (such as purchasing choices).
  • Privacy of communications (anything related to sending/receiving messages, calls, etc.).

Given the extensive monitoring within the metaverse, all three of these modes can be intruded upon. How, then, does the physics of the metaverse permit such intrusion?
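As a rough illustration of how these three modes might be used in practice, the sketch below models them as a small taxonomy and tags observed metaverse events with the modes they could intrude upon. The class, function, and keyword rules are my own illustrative assumptions, not from the paper.

```python
from enum import Enum, auto

class PrivacyMode(Enum):
    """The three modes of privacy described in the summary."""
    PERSONAL_INFORMATION = auto()  # e.g. medical history
    BEHAVIOUR = auto()             # e.g. purchasing choices
    COMMUNICATIONS = auto()        # e.g. messages and calls

def modes_intruded(event: str) -> set:
    """Very rough keyword tagger (hypothetical): which privacy modes
    an observed metaverse event could intrude upon."""
    rules = {
        PrivacyMode.PERSONAL_INFORMATION: {"profile", "health", "identity"},
        PrivacyMode.BEHAVIOUR: {"purchase", "movement", "visit"},
        PrivacyMode.COMMUNICATIONS: {"message", "call", "chat"},
    }
    words = set(event.lower().split())
    return {mode for mode, keys in rules.items() if words & keys}

print(modes_intruded("avatar purchase and chat logged"))
# → {PrivacyMode.BEHAVIOUR, PrivacyMode.COMMUNICATIONS}
```

A single logged event can intrude on several modes at once, which is why extensive monitoring puts all three at risk simultaneously.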

The metaverse

The metaverse engine provides the compute that dictates the physics and appearance of the space. Consequently, actions are restricted to what the engine can offer, while everything the engine affords is subject to analysis. This includes all three modes of privacy.

In terms of actors, avatars can affect the world by creating and destroying objects, but they must always play by the rules of the metaverse. In doing so, data analytics will provide the metaverse developers with valuable information about how their application is used, as well as insights into the users themselves. Yet data analysis is not restricted purely to developers. In-game harassment through observation is a real threat.

Harassment in the metaverse

Interaction within the digital space carries a tricky complication: everything can be recorded. Constant interaction within the metaverse therefore increases the provider’s ability to learn your preferences and tendencies, allowing them to craft nudges that steer you toward a particular outcome. Not only that, but other avatars could also arrive at such knowledge, allowing them to impersonate other agents. Consequently, practical solutions to this possibility are required.

Mechanisms to mitigate threats

The authors list potential mechanisms to help avoid threats within the metaverse. They elaborate on a “privacy plan” (p. 55): a package of separate actions that avatars can undertake to mitigate any privacy harms they may encounter. The plan centres on creating confusion amongst other agents, obscuring the personal details that could be discovered about a particular avatar. These actions are as follows:

  • A “cloud of clones” (p. 55) – deploying multiple clones of your avatar with similar habits to confuse other surrounding agents. This could include assigning movement to confound any observers further.
  • Private Copy – establishing a personal space/copy of a part of the virtual world free from observation. This can later be deleted or added to the main fabric of the metaverse.
  • Mannequin – a singular avatar copy is manifested while the actual avatar is transported elsewhere.
  • Lockout – a part of the VR world is blocked off to other users for private use.
  • Disguise – the avatar can operate in the main fabric of the metaverse, but with a different external appearance.
  • Teleport – the user can opt to be teleported to a completely different location.
  • Invisibility – the avatar can operate without being seen/monitored by others.

These actions are not mutually exclusive; they can be combined. For example, a user could deploy a mannequin while also being teleported to another location.
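The combinable nature of a privacy plan can be sketched as a sequence of actions applied to an avatar. This is a minimal illustration only: the paper describes the concepts, but the types, function names, and fields below are my own assumptions, not from the paper or any real metaverse engine.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Avatar:
    """Hypothetical avatar state relevant to a privacy plan."""
    name: str
    visible: bool = True
    location: str = "main_fabric"
    clones: List[str] = field(default_factory=list)

def cloud_of_clones(avatar: Avatar, n: int = 3) -> Avatar:
    # Deploy n look-alike clones with similar habits to confuse observers.
    avatar.clones = [f"{avatar.name}_clone_{i}" for i in range(n)]
    return avatar

def mannequin_and_teleport(avatar: Avatar, destination: str) -> Avatar:
    # Leave a single static copy behind while the real avatar moves away.
    avatar.clones.append(f"{avatar.name}_mannequin")
    avatar.location = destination
    return avatar

def invisibility(avatar: Avatar) -> Avatar:
    # Operate without being seen/monitored by others.
    avatar.visible = False
    return avatar

def apply_plan(avatar: Avatar, actions: List[Callable[[Avatar], Avatar]]) -> Avatar:
    # A privacy plan is a package of actions applied in sequence;
    # the actions are not mutually exclusive and can be combined.
    for action in actions:
        avatar = action(avatar)
    return avatar

user = apply_plan(Avatar("alice"), [
    lambda a: cloud_of_clones(a, n=5),
    lambda a: mannequin_and_teleport(a, "private_region"),
    invisibility,
])
print(user.location, user.visible, len(user.clones))
# → private_region False 6
```

Treating each mechanism as a composable action mirrors the paper’s framing of the plan as a “package”: a user picks whichever combination suits the threat at hand.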

However, while these methods offer an interesting and valuable approach to mitigating privacy threats, there are ways around them. For example, if other avatars have a means of locking on to a particular avatar, they will still be able to track it. Which actions the metaverse permits (such as latching onto a specific agent) will therefore prove crucial to the success of these mechanisms.

Between the lines

The metaverse will bring unique opportunities and disadvantages. The ways to protect privacy will prove innovative, but they depend on the metaverse itself. Furthermore, I believe it will be difficult to stop the privacy-preserving mechanisms from turning into opportunities for underground operations. The chance to rid yourself of any monitoring and conduct your business without anyone possibly knowing would, I believe, prove too lucrative for malicious actors to turn down. Hence, we enter another balancing act between privacy and security. While the physical and digital spheres differ in terms of physics, they still share this dilemma.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.




© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.