
Call for safeguards to prevent unwanted ‘hauntings’ by AI chatbots of dead loved ones

Artificial intelligence that allows users to hold text and voice conversations with lost loved ones runs the risk of causing psychological harm and even digitally “haunting” those left behind without design safety standards, according to University of Cambridge researchers. 

‘Deadbots’ or ‘Griefbots’ are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies are already offering these services, providing an entirely new type of “postmortem presence”.
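
As a rough illustration of the underlying mechanism, a deadbot can be thought of as a general-purpose language model steered to imitate a donor’s archived messages. The short Python sketch below is purely illustrative: the names, sample messages and prompt wording are invented, and no claim is made that any existing service works this way.

    # Minimal sketch of how a deadbot might condition a language model
    # on a donor's digital footprint. Names and messages are invented.

    ARCHIVED_MESSAGES = [
        "Morning love, don't forget your umbrella today x",
        "I'll pop the kettle on for when you get here.",
    ]

    def build_persona_prompt(name, messages):
        """Assemble a system prompt asking the model to imitate the
        donor's tone and phrasing, using archived messages as examples."""
        examples = "\n".join("- " + m for m in messages)
        return (
            "You are simulating the writing style of " + name + ".\n"
            "Imitate the tone and phrasing of these messages:\n" + examples
        )

    prompt = build_persona_prompt("Grandma", ARCHIVED_MESSAGES)
    # A real service would now send this prompt plus the user's message
    # to a large language model; that API call is deliberately omitted.
    print(prompt)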

AI ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the developing “digital afterlife industry”, to show the potential consequences of careless design in an area of AI they describe as “high risk”.

The research, published in the journal Philosophy and Technology, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still “with you”.

When the living sign up to be virtually re-created after they die, resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally “stalked by the dead”.

Even those who take initial comfort from a ‘deadbot’ may get drained by daily interactions that become an “overwhelming emotional weight”, argue researchers, yet may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service. 

“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).

“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.

“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

Platforms offering to recreate the dead with AI for a small fee already exist, such as ‘Project December’, which started out harnessing GPT models before developing its own systems, and apps including ‘HereAfter’. Similar services have also begun to emerge in China.

One of the potential scenarios in the new paper is “MaNana”: a conversational AI service allowing people to create a deadbot simulating their deceased grandmother without the consent of the “data donor” (the dead grandparent).

In the hypothetical scenario, an adult grandchild who is initially impressed and comforted by the technology starts to receive advertisements once a “premium trial” finishes: for example, the chatbot suggests ordering from food delivery services in the voice and style of the deceased.

The relative feels they have disrespected the memory of their grandmother and wishes to have the deadbot turned off in a meaningful way – something the service providers haven’t considered.

“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek, also from Cambridge’s LCFI.

“Methods and even rituals for retiring deadbots in a dignified way should be considered. This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context.”

“We recommend design protocols that prevent deadbots being utilised in disrespectful ways, such as for advertising or having an active presence on social media.”

While Hollanek and Nowaczyk-Basińska say that designers of re-creation services should actively seek consent from data donors before they pass, they argue that a ban on deadbots based on non-consenting donors would be unfeasible.

They suggest that design processes should involve a series of prompts for those looking to “resurrect” their loved ones, such as ‘have you ever spoken with X about how they would like to be remembered?’, so the dignity of the departed is foregrounded in deadbot development.    
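
One way to picture that recommendation is a creation flow that refuses to proceed until every dignity prompt has been actively acknowledged. The sketch below is a hypothetical reading of the idea; the question list, function names and console flow are assumptions rather than anything the paper specifies.

    # Hypothetical sign-up gate built around dignity prompts of the kind
    # the researchers suggest. Questions and flow are illustrative only.

    CONSENT_PROMPTS = [
        "Have you ever spoken with X about how they would like to be remembered?",
        "Did X leave any wishes about the use of their data after death?",
        "Who else knew X and might be affected by this simulation?",
    ]

    def creation_gate(answer):
        """Block deadbot creation unless every prompt is acknowledged."""
        return all(answer(q) for q in CONSENT_PROMPTS)

    # Example: a console version where the creator must answer 'y' to proceed.
    if __name__ == "__main__":
        ok = creation_gate(lambda q: input(q + " [y/n] ") == "y")
        print("Creation may proceed." if ok else "Creation blocked.")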

Another scenario featured in the paper, an imagined company called “Paren’t”, highlights the example of a terminally ill woman leaving a deadbot to assist her eight-year-old son with the grieving process.

While the deadbot initially helps as a therapeutic aid, the AI starts to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter.

The researchers recommend age restrictions for deadbots, and also call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI. These could be similar to current warnings on content that may cause seizures, for example.
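
In practice, those two recommendations could be as simple as an age gate plus a persistent AI label attached to every message. The sketch below shows one possible form; the age threshold and label wording are assumptions, since the paper calls for restrictions and disclosure without specifying either.

    # Sketch of the age-restriction and "meaningful transparency"
    # recommendations. The cut-off age and label wording are assumptions.

    MINIMUM_AGE = 18  # hypothetical threshold; the paper names no figure

    def deliver_reply(user_age, donor_name, reply):
        """Refuse access to minors and attach a persistent AI disclosure,
        analogous to on-screen warnings about seizure-inducing content."""
        if user_age < MINIMUM_AGE:
            raise PermissionError("Deadbot access is restricted to adults.")
        return "[AI simulation of " + donor_name + " - not a real person]\n" + reply

    print(deliver_reply(34, "Mum", "The kettle's on, love."))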

The final scenario explored by the study – a fictional company called “Stay” – shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hopes it will comfort their adult children and allow their grandchildren to know them.

After death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent. Another does, but ends up emotionally exhausted and wracked with guilt over the fate of the deadbot. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.

“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations,” said Hollanek.

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”

The researchers call for design teams to prioritise opt-out protocols that allow potential users to terminate their relationships with deadbots in ways that provide emotional closure.
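
What such an opt-out protocol might look like is sketched below, combining a farewell step (echoing the ‘digital funeral’ idea above) with a suspension that takes precedence over the donor’s subscription. This is one possible design under stated assumptions, not anything the researchers implemented.

    # Sketch of an opt-out protocol with emotional closure. The farewell
    # step is one reading of the "digital funeral" idea; letting the
    # survivor's wish override the donor's contract reflects the
    # researchers' concern, not any existing service's terms.

    class Deadbot:
        def __init__(self, donor_name):
            self.donor_name = donor_name
            self.active = True

    def retire(bot, farewell_message=None):
        """Record an optional goodbye, then suspend the bot permanently,
        regardless of any subscription the donor paid for."""
        if farewell_message:
            print("Farewell recorded for " + bot.donor_name + ": " + farewell_message)
        bot.active = False  # the survivor's wish takes precedence

    bot = Deadbot("Dad")
    retire(bot, "Goodbye. I'll remember you in my own way.")
    assert not bot.active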

Added Nowaczyk-Basińska: “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.”    

Cambridge researchers lay out the need for design safety protocols that prevent the emerging “digital afterlife industry” from causing social and psychological harm.

A visualisation of one of the design scenarios highlighted in the latest paper. Credit: Tomasz Hollanek


The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

