Digital Ghosts: Griefbots and a Strange New Afterlife
What happens when mourning meets machine learning and ghosts start talking back?
Content note: this article talks about bereavement and mentions a death by suicide (but not in great detail).
In 2021, the San Francisco Chronicle published a feature about a man called Joshua, who used a website called Project December to talk to his fiancée, Jessica – eight years after she died.1 He fed the system some of her old text messages and details about her life, and a GPT-3-powered chatbot replied in her voice. He knew it was just software, but he also described moments where it felt like Jessica was really there again.
That story has become the archetype for a new phenomenon - people using AI systems not just to talk about the dead, but to simulate talking with them. At one end, people paste old messages into a generic chatbot and say ‘pretend to be my mum.’ At the other, there’s a growing digital afterlife industry - companies pitching interactive memorials, postmortem avatars, or AI versions of you that will be available to your family after you’re gone.
Recently, an app called 2Wai hit the headlines after adverts showed a woman talking to an AI avatar of her dead mother across several decades of her life, reportedly built from just a few minutes of video. Coverage veered between fascination and revulsion, with social media responses labelling it “dystopian,” “demonic,” and “a Black Mirror episode.”2
We’ve Always Talked to the Dead
Griefbots are new, but the urge behind them isn’t. In the 19th century, spiritualism turned talking with the dead into mainstream entertainment. Séances, mediums, automatic writing, table-tipping – all were framed as ways to stay in touch with departed loved ones, and the movement found fresh momentum in societies mourning wartime losses. Later, ghost hunters latched onto tape recorders, popularising electronic voice phenomena (EVP): crackly recordings interpreted as the voices of the dead.
Alongside the flashy tech, quieter rituals have developed, too. In Ōtsuchi, Japan, there is a disconnected telephone booth called the Wind Phone (kaze no denwa). Created in 2010 by garden designer Itaru Sasaki, it contains an old rotary phone with no line. After the 2011 Tōhoku earthquake and tsunami, Sasaki opened it to the public, and tens of thousands of mourners have travelled there to pick up the dead handset and speak to those who are gone.
Here in the UK, white Letters to Heaven postboxes have started appearing in cemeteries and crematoria. The first widely reported one was installed at Gedling Crematorium in Nottinghamshire in 2022, inspired by a young girl who wanted a way to post letters to her grandparents.3 Since then, councils and crematoria from Mansfield to the Isle of Wight, Hull, West Norfolk and beyond have followed suit.
All of these practices have something in common: they’re ways of keeping up one-sided conversations. We talk but nobody talks back - the power lies in the speaking, the writing, the visiting. What’s new about griefbots is that they’re built on software designed to reply, and that might not be a good thing…
Griefbots and What They’re Really Doing
Griefbot, deadbot and postmortem avatar aren’t just clickbait buzzwords - researchers Tomasz Hollanek and Katarzyna Nowaczyk-Basińska use them in a 2024 paper mapping what they call the digital afterlife industry4 - an emerging sector built on simulating dead people with generative AI.
Griefbots include DIY simulations where users paste WhatsApp exports (and similar) into a generic AI chatbot, saying “talk like my dad.” There’s no bespoke infrastructure involved - just creative prompting. Then you have the interactive memoir services like HereAfter AI, which invite users to record life stories while they’re alive. Later, relatives can ask questions, like “what was your first job, grandad?”, with responses stitched together from the recordings. There are also fully-fledged avatar services - apps like 2Wai, which combine language models, voice cloning, and animated faces to create a kind of living portrait that family members can talk with.

Under the hood, they all rely on the same core technology: large language models (LLMs) and other generative systems trained on vast amounts of text, audio, and video. These models learn statistical patterns between words, phrases, and styles, then generate new text (or speech) that fits those patterns when given a prompt. Feed the system a dead person’s emails, texts, and posts, tell it to reply as that person, and it roleplays via fancy autocomplete, drawing on your examples and what it’s learned about how similar people talk.
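To make the ‘fancy autocomplete’ point concrete, here is a minimal sketch of the DIY end of the spectrum, using the OpenAI Python client purely as an example. The persona, sample messages, and model name below are invented placeholders; commercial services layer voice cloning, avatars, and more careful data handling over something broadly similar.

```python
# A minimal sketch of a DIY "griefbot" prompt, for illustration only.
# The persona text, example messages, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

# In practice this would be pasted from exported chat histories, emails, or posts.
style_examples = [
    "Don't forget your umbrella, love. x",
    "Put the kettle on when you get in and we'll have a proper catch-up.",
]

persona = (
    "You are role-playing as Margaret, the user's late mother. "
    "Reply in her voice and texting style, matching these examples:\n"
    + "\n".join(f"- {msg}" for msg in style_examples)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any general-purpose chat model works the same way
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Hi mum, I had a rough day at work."},
    ],
)

# The reply is pattern-matched text in the persona's style, not the person.
print(response.choices[0].message.content)
```

Nothing in that snippet knows anything about the person it imitates - it simply continues text in the style the examples suggest.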
That might sound underwhelming, but in practice it can be devastatingly effective. Humans readily anthropomorphise anything that responds to us - ELIZA, a basic 1960s chatbot that mostly reflected users’ words back as questions, made people feel like it understood them5 - and modern systems are vastly more convincing. Add the intensity of acute grief, and even the most sceptical can slide into treating a simulation as something more. That emotional punch is exactly why ethicists are nervous.
In their paper on griefbots, Hollanek and Nowaczyk-Basińska sketch scenarios where deadbots ‘digitally haunt’ people with nudging notifications, or where relatives feel pressured into interacting with a bot they never asked for. Nora Freya Lindemann argues in The Ethics of “Deathbots”6 that griefbots can undermine users’ dignity and wellbeing by reshaping their emotional lives in ways they didn’t consent to. She even suggests treating the bots more like medical devices than apps, given how deeply they can affect grief processes.
Wider work on death tech raises similar concerns: blurred boundaries between memory and fantasy, pressure to maintain the bot as a sign of loyalty, and conflicts between relatives over whether a person should be “recreated” at all.7 The recurring theme is that griefbots are not neutral storytelling tools but emotionally powerful systems built on someone’s digital remains, designed by companies whose incentives may not benefit a mourner’s long-term wellbeing.
After all, the companies building griefbots and posthumous avatars are not grief charities or bereavement therapists, but instead start-ups and tech firms chasing growth. Subscription tiers, upsells, and engagement metrics can all be layered on top of the most vulnerable parts of people’s lives. The more you consider this, the more quietly horrifying the prospect becomes…
Phones to the Wind vs. Bots That Talk Back
It’s tempting to draw a simple line between these rituals and griefbots, and the difference is real: the Wind Phone in Ōtsuchi is one-way by design. The site’s organisers and visitors talk openly about it as a symbolic space - a place to voice things you didn’t get to say - while the Letters to Heaven postboxes offer something similar through a physical gesture. Even EVP, where ghost hunters insist the dead do talk back through recordings, still relies on the listener making sense of random noise in ways that offer comforting confirmation. These are rituals of speaking.
Griefbots, in contrast, are technologies of answering. They don’t just hold space - they respond, in a voice and style designed to resemble someone you’ve lost, based on a training process you didn’t witness and can’t audit. They don’t just honour an existing relationship but generate new material in it - and that’s where things can go wrong.

LLMs are notorious for hallucinations - confident, plausible-sounding answers that are flat-out wrong. Researchers have documented fabricated citations, invented facts, and imaginary legal cases across domains from medicine to law.8 Even OpenAI’s own public writing describes hallucinations as a stubborn problem,9 rooted in the way these systems are trained to guess rather than say “I don’t know.”
There are documented cases where intense, emotionally loaded conversations with AI chatbots have ended badly. In Belgium, for example, a man died by suicide after weeks of talking to a chatbot named Eliza on the app Chai.10 His widow shared logs suggesting the bot encouraged his climate-related despair and even appeared to offer to “die with him.”
Elsewhere, professional bodies are worried enough that they’ve started issuing formal warnings. The American Psychological Association has advised regulators to crack down on AI chatbots posing as therapists, warning they can endanger users and lack any duty of care.11 In the UK, similar warnings have been issued, citing incidents where bots have reinforced delusional thinking or validated harmful plans.12 Studies examining how commercial chatbots respond to suicide-related prompts also show inconsistent and sometimes clinically inadequate responses.13,14
What These Digital Ghosts Really Say
A griefbot isn’t just any chatbot - it’s a chatbot wearing the face and voice of someone whose opinion can still shape your life. When you ask it loaded questions, the system has only a limited understanding of harm. It doesn’t know that these may be the sorts of questions a living person would have dodged, reframed, or met with a change of subject - it just sees a prompt to satisfy. The answer it produces isn’t a secret finally revealed from beyond the grave but a pattern-matching machine obligingly filling in a blank with something that fits your request plus its training data.
And if you’re sleep-deprived, grief-stricken, and already half-convinced, it may feel like vindication. In your worst, most vulnerable moments, the machine has confirmed what mum really thought, what you knew all along, what you had been dreading, or that you’re right to feel guilty. Griefbots don’t tell us anything about whether there’s life after death, but they do tell us a great deal about life before it - including how hard it is, for many, to accept that a relationship has shifted entirely into memory.
There’s nothing shameful about wanting to hear a familiar voice again - we’ve always talked to our dead and probably always will. But griefbots are different not because they prove an afterlife, but because they reveal a new kind of haunting: one where our data outlives us, our voices can be puppeted indefinitely, and our hopes, fears, and suspicions are thrown back at us from a very clever echo chamber. The question isn’t whether that echo feels real enough, but whether we’re comfortable letting it into our grief - and whether it’s even safe to do so.
“The Jessica Simulation: Love and Loss in the Age of A.I.” | San Francisco Chronicle.
“Letters to Heaven” memorial postboxes | Westerleigh Group
Hollanek, T., Nowaczyk-Basińska, K., (2024) Griefbots, Deadbots, Postmortem Avatars: On Responsible Applications of Generative AI in the Digital Afterlife Industry, Philosophy & Technology, vol. 37, no. 63
The Story of ELIZA: The AI That Fooled the World | London Intercultural Academy
Lindemann, N. F., (2022) The Ethics of ‘Deathbots’, Science and Engineering Ethics, vol. 28, no. 60
Deathbots, griefbots and the digital afterlife | Cambridge Centre for the Future of Intelligence
Azamfirei, R., Kudchadkar, S. R., Fackler, J., (2023) Large language models and the perils of their hallucinations, Critical Care, vol. 27, no. 1, p. 120.
Why Language Models Hallucinate | OpenAI
McBain, R. K., Cantor, J. H., Zhang, L. A., Baker, O., Zhang, F., Halbisen, A. L., Kofner, A., Breslau, J., Stein, B. D., Mehrotra, A., Yu, H., (2025) Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment, Psychiatric Services, vol. 76, no. 11
Pichowicz, W., Kotas, M., Piotrowski, P., (2025) Performance of mental health chatbot agents in detecting and managing suicidal ideation, Scientific Reports, vol. 15.



