'Griefbots' could use AI to haunt relatives from beyond the grave, ethicists warn

  • Nascent 'griefbots' or 'deadbots' allow users to chat with the deceased.

  • But if unchecked, they could cause some serious psychological harm, Cambridge researchers said.

  • Safeguards should keep kids away from the bots and allow users to turn them off, a study suggested.

AI could make your relatives haunt you from beyond the grave, researchers are warning.

The budding "digital afterlife industry" could cause serious psychological harm to those in mourning if left unchecked, according to a new study out of the University of Cambridge.

"Griefbots" or "deadbots" are starting to crop up, researchers said, and it sounds like something right out of an episode of "Black Mirror."

These bots use generative AI to allow people to have text and voice conversations with the deceased, using their past digital footprints to conjure a likeness.

Some examples include Project December and HereAfter AI, according to the study, which also imagined fictional companies to explore the possible consequences.

(The griefbot industry is also starting to take off in China.)

But AI ethicists at Cambridge's Leverhulme Centre for the Future of Intelligence warn that AI could one day push ads on grieving relatives or confuse children.

The researchers imagined scenarios in which a child is tricked by an AI recreation of their dead parent into meeting someone in real life, or a woman gets pitched a food delivery app by a bot of her dead grandmother.

The study said griefbots need to have an "off" button so users can disengage from the replica and won't feel haunted. But this could prove difficult depending on the kind of service contract the deceased person consented to, researchers noted.

Co-author Dr. Tomasz Hollanek suggested a griefbot might even need to be retired via a "digital funeral."

"This area of AI is an ethical minefield," added co-author Dr. Katarzyna Nowaczyk-Basińska. "The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded."

Researchers came up with a list of principles to ethically create griefbots.

In addition to allowing the bots to be shut down, researchers suggested they should carry visible disclaimers about risks, bar children from using them, and emphasize "mutual consent" for both the deceased and users.
