Montreal AI Ethics Institute

Post-Mortem Privacy 2.0: Theory, Law and Technology

March 14, 2021

šŸ”¬ Research summary by Alexandrine Royer (@AlexandrineRoy2), our Educational Program Manager.

[Original paper by Edina Harbinja]


Overview: Debates surrounding internet privacy have focused mainly on the living, but what happens to our digital lives after we have passed? In this paper, Edina Harbinja offers a theoretical and doctrinal discussion of post-mortem privacy and makes a case for its legal recognition.


In January, The Independent revealed that Microsoft had been granted a patent that would enable the company to develop a chatbot using deceased individuals' personal information. Algorithms would be trained to extract images, voice data, social media posts, electronic messages and other personal information from deceased users' online profiles to create 2-D or 3-D representations; these digital ghosts would allow for continuous communication with living loved ones. Beyond the gut reaction of creepiness such chatbots provoke, with Microsoft's own Tim O'Brien admitting the bot is disturbing, the company's patent pointed to the gaps in current legislation governing digital reincarnation.

Conversations on data privacy have tended to focus on the living, with fewer considerations of how to protect digital traces we leave behind after we have passed. Who has the right over our once private and personal information? Do the dead retain a right over their digital property, or can it be bestowed like physical possessions? Such legal ambiguities are at the center of Edina Harbinja’s legal analysis of post-mortem privacy, defined as “the right of a person to preserve and control what becomes of his or her reputation, dignity, integrity, secrets or memory after death.” 

Post-Mortem Privacy and Autonomy

The notion of post-mortem privacy is a relatively new concept, and legal scholarly attention has been slow to turn to the issue of digital assets and death in data protection. National laws are largely inconsistent on the use of data after death, leaving companies like Google and Facebook to introduce their own policies letting users determine who can access their accounts in the event of an untimely passing. The complexity of post-mortem privacy, and its consequences for user property, is compounded by the range of stakeholders involved, including other internet users, service providers, friends and family.

Through a brief theoretical discussion of the concept of autonomy in Western philosophy, Harbinja demonstrates how autonomy is deeply intertwined with notions of privacy, dignity and personhood. Citing legal scholar Bernal, who affirmed that ‘privacy is a crucial protector of autonomy,’ she aligns with his conception of internet privacy rights as entailing the concept of informational privacy. Seen through the legal and ethical rubric of autonomy, an individual’s right to control their informational privacy should transcend their death. 

For Harbinja, "an individual should be able to exercise his autonomy online and decide what happens to their assets and privacy on death." The "real-world" assets and wealth accumulated throughout an individual's lifetime can be seen as analogous to their online assets. Freedom of testation, a person's right to decide how their estate should be distributed upon death, can thus be extended to the online environment. While freedom of testation appears to be a straightforward legal solution, it may run counter to country-specific definitions of legal personality, with Harbinja noting there is "no clear-cut answer to when the legal personality dies". There are, however, legal examples of a person's moral rights extending beyond their death, such as in copyright law.

Providing a Legal and Coded Framework

Despite post-mortem privacy being aligned with the rights of autonomy present in most North American and European judicial systems, there remain a few obstacles to its legal recognition. The principal argument against legal recognition is a lack of actual harm to the user, meaning "the deceased cannot be harmed or hurt." For Harbinja, such a line of reasoning is logically inconsistent with the legally enshrined principle of freedom of testation. Denying individuals control over their online data on the grounds that no harm is caused would be akin to denying the right of testament, as "the deceased should not be interested in deciding what happens to their property on death as they would not be present to be harmed by the allocation."

There are also conflicting levels of legislation that protect some aspects of post-mortem privacy, ranging from laws of confidence and breach of confidence to succession law, but not the phenomenon as such. In the US, while federal law does not guarantee post-mortem privacy, certain states allow for the protection of certain 'publicity rights' (rights to name, image and likeness) up to seventy years after a person's passing. A similar situation is encountered in Europe. The EU's data protection measures, most famously the GDPR, apply solely to living persons, but 12 member states have introduced national legislation that protects the deceased's personal data. As highlighted by Harbinja, one notable advance in post-mortem privacy was the formation of the Committee on Fiduciary Access to Digital Assets by the Uniform Law Commission in the United States, which proposed amendments to previous acts to allow fiduciaries to manage and access digital assets.

While certain North American and European lawmakers have yet to legislate the transmission of digital assets, many companies have begun to implement coded solutions to protect post-mortem privacy. Google launched its 'inactive account manager' (IAM) back in 2013, which enables users to share "parts of their account data or to notify someone if they have been inactive for a certain period of time". For Harbinja, the main issues with IAM are the verification of trusted contacts, which happens through phone numbers, and the transfer of online content to beneficiaries, who might be individuals known solely through the digital community. Individuals would need to explicitly designate their beneficiaries over digital assets in a digital or traditional will. Facebook implemented a similar measure with its 'Legacy Contact' option, with the platform allowing US users to designate a person to act as their Facebook estate executor after their passing. As Harbinja notes, however, this solution falls short when it comes to clarifying the rights of a designated legacy contact over the rights of heirs and kin.

Law and Life after Technology

The issue of post-mortem privacy allows us to reflect on how digital lives persist after death, and what aspects of our digital universe we wish to bestow on loved ones. With deceased users' accounts still virtually present, who gains control and readership over the masses of digital traces we leave behind? As stated by Harbinja, "post-mortem privacy rights need to be balanced with other considerations, including the same privacy interests of others and the social and personal interests in free speech and security".

Beyond the legal considerations, post-mortem privacy merits a broader ethical conversation, one that is not entirely dictated by Euro-American norms and values. As our online personas become increasingly interconnected with individuals scattered across the globe, we must ensure that the legal recognition of post-mortem privacy rights does not come at the detriment of individuals residing in other societies. Indeed, how do we begin to determine ownership of digital assets, especially in the murky areas of shared messages, comments, retweets, and so forth? Further scholarly research is needed to allow for reflections on post-mortem privacy beyond the scope and principle of autonomy.

