Montreal AI Ethics Institute


Post-Mortem Privacy 2.0: Theory, Law and Technology

March 14, 2021

🔬 Research summary by Alexandrine Royer (@AlexandrineRoy2), our Educational Program Manager.

[Original paper by Edina Harbinja]


Overview: Debates surrounding internet privacy have focused mainly on the living, but what happens to our digital lives after we have passed? In this paper, Edina Harbinja offers a theoretical and doctrinal discussion of post-mortem privacy and makes a case for its legal recognition.


In January, The Independent revealed that Microsoft had been granted a patent that would enable the company to develop a chatbot using deceased individuals’ personal information. Algorithms would be trained to extract images, voice data, social media posts, electronic messages and other personal information from deceased users’ online profiles to create 2-D or 3-D representations; these digital ghosts would allow for continuous communication with living loved ones. Beyond the gut reaction of creepiness such chatbots provoke (Microsoft’s own Tim O’Brien admitted the bot was disturbing), the company’s patent pointed to the gaps in current legislation governing digital reincarnation.

Conversations on data privacy have tended to focus on the living, with fewer considerations of how to protect digital traces we leave behind after we have passed. Who has the right over our once private and personal information? Do the dead retain a right over their digital property, or can it be bestowed like physical possessions? Such legal ambiguities are at the center of Edina Harbinja’s legal analysis of post-mortem privacy, defined as “the right of a person to preserve and control what becomes of his or her reputation, dignity, integrity, secrets or memory after death.” 

Post-Mortem Privacy and Autonomy

The notion of post-mortem privacy is a relatively new concept, and legal scholarship has been slow to turn to the issue of digital assets and death in data protection. National laws are largely inconsistent on the use of data after death, leaving companies like Google and Facebook to introduce their own policies for users to determine who can access accounts in the event of an untimely passing. The complexity of post-mortem privacy, and its consequences for user property, is compounded by the range of stakeholders involved, including other internet users, service providers, friends and family.

Through a brief theoretical discussion of the concept of autonomy in Western philosophy, Harbinja demonstrates how autonomy is deeply intertwined with notions of privacy, dignity and personhood. Citing legal scholar Bernal, who affirmed that ‘privacy is a crucial protector of autonomy,’ she aligns with his conception of internet privacy rights as encompassing informational privacy. Seen through the legal and ethical rubric of autonomy, an individual’s right to control their informational privacy should transcend their death.

Put simply, for Harbinja, “an individual should be able to exercise his autonomy online and decide what happens to their assets and privacy on death.” The “real-world” assets and wealth accumulated throughout an individual’s lifetime can be seen as analogous to their online assets. Freedom of testation, a person’s right to decide how their estate should be distributed upon death, can thus be extended to the online environment. While freedom of testation appears to be a straightforward legal solution, it may run counter to country-specific definitions of legal personality, with Harbinja noting there is “no clear-cut answer to when the legal personality dies”. There are, however, legal examples of a person’s moral rights extending beyond their death, such as in copyright law.

Providing a Legal and Coded Framework

Although post-mortem privacy aligns with the rights of autonomy recognized in most North American and European judicial systems, there remain a few obstacles to its legal recognition. The principal argument against it is the absence of actual harm to the user, meaning “the deceased cannot be harmed or hurt.” For Harbinja, such a line of reasoning is logically inconsistent with the legally enshrined principle of freedom of testation. Denying an individual control over their online data on the grounds that no harm is caused would be akin to denying the right of testament, since by the same logic “the deceased should not be interested in deciding what happens to their property on death as they would not be present to be harmed by the allocation.”

There are also conflicting levels of legislation that protect some aspects of post-mortem privacy, ranging from laws of confidence and breach of confidence to succession law, but not the phenomenon as such. In the US, while federal law does not guarantee post-mortem privacy, certain states protect ‘publicity rights’ (rights to name, image, and likeness) for up to seventy years after a person’s passing. A similar situation is found in Europe. The EU’s data protection measures, most famously the GDPR, apply solely to living persons, but twelve member states have introduced national legislation that protects the deceased’s personal data. As Harbinja highlights, one notable advance in post-mortem privacy was the formation of the Committee on Fiduciary Access to Digital Assets by the Uniform Law Commission in the United States, which proposed amendments to previous acts to allow fiduciaries to manage and access digital assets.

While certain North American and European lawmakers have yet to legislate the transmission of digital assets, many companies have begun to implement coded solutions to protect post-mortem privacy. Google launched its ‘Inactive Account Manager’ back in 2013, which enables users to share “parts of their account data or to notify someone if they have been inactive for a certain period of time”. For Harbinja, the main issues with the IAM are the verification of trusted contacts, which happens through phone numbers, and the transfer of online content to beneficiaries, who might be individuals known solely through the digital community. Individuals would need to name their beneficiaries for digital assets explicitly in a digital or traditional will. Facebook implemented a similar measure with its ‘Legacy Contact’ option, allowing US users to designate a person to act as their Facebook estate executor after their passing. As Harbinja notes, however, this solution falls short when it comes to clarifying the rights of a designated legacy contact over the rights of heirs and kin.

Law and Life after Technology

The issue of post-mortem privacy invites us to reflect on how digital lives persist after death, and on which aspects of our digital universe we wish to bestow on loved ones. With deceased users’ accounts still virtually present, who gains control and readership over the masses of digital traces we leave behind? As Harbinja states, “post-mortem privacy rights need to be balanced with other considerations, including the same privacy interests of others and the social and personal interests in free speech and security”.

Beyond the legal considerations, post-mortem privacy merits a broader ethical conversation, one that is not entirely dictated by Euro-American norms and values. As our online personas become increasingly interconnected with individuals scattered across the globe, we must ensure that the legal recognition of post-mortem privacy rights does not come at the expense of individuals residing in other societies. Indeed, how do we begin to determine ownership of digital assets, especially in the murky areas of shared messages, comments, retweets, and so forth? Further scholarly research is needed to extend reflection on post-mortem privacy beyond the scope and principle of autonomy.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.