Your data outlives you, scattered across servers. But who controls your digital ghost once agency ends?
A modern human dies only once. Their data, however, persists in fragments: across servers, platforms and algorithms, in cloud-drive photos, location histories, biometric traces, purchase records, unfinished drafts and behavioural profiles. Researchers increasingly call this a person's digital legacy.
Imagine speaking to a bot that writes exactly like someone you loved, matching their pauses, phrases, tone and private jokes, even though they are no longer alive. Large language models show how far this can go. Fine-tuned on an individual's data, an LLM can turn digital resurrection into a market, and grief into profit.
This is no dystopian fiction. In 2017, Microsoft patented a chatbot designed to simulate specific people using emails, messages and social media history. In December 2025, Meta received a patent for systems capable of generating posts in the voice of deceased users. Start-ups such as HereAfter AI and You, Only Virtual already market interactive avatars built from recorded memories and digital traces. What began as memorialisation is becoming a service model.
Biological death ends agency, not data processing. The deeper question is what changes when the subject of that data can no longer consent.
Who owns our data, in life and after it?
The answer begins with a misconception: personal data is rarely “owned” in the traditional sense. Users generate data but do not fully control its storage, use, replication or monetisation. Access can be revoked if accounts are suspended or content removed. Digital assets sit uneasily within property law; they are largely licensed, not possessed. In practice, users have permission, not ownership.
After death, this permission-based model becomes even more contingent. Control shifts to platform policies, nominee provisions and incomplete national laws.
India’s Digital Personal Data Protection Act allows individuals to nominate a representative to exercise certain data rights after death, including access and erasure, but it does not clarify ownership or resolve conflicts between privacy and inheritance. The European Union’s GDPR enables erasure requests during life, yet its post-mortem application varies by member state. In the United States, platform policies dominate, though state enactments of the Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA) provide partial access rights. The result is inconsistency rather than clarity.
What are the ethics of using this data?
If data persists as training material or a commercial asset, the dead continue to shape systems they cannot influence. Their messages and profiles remain structured datasets long after consent is impossible. Participation becomes posthumous and involuntary.
The dead may have no use for privacy, but the living have a right not to be haunted by commercialised simulations. Should consent extend beyond death? Should individuals be able to prohibit simulation? Should true deletion be recognised as a post-mortem privacy right? The law is still catching up. Ethics has barely begun.