Can Data Die? From Cyber Immortality to Digital Spiritism.
“Die not, poor Death, nor yet canst thou kill me.”
In “Death, be not proud”, a poem by the English metaphysical poet John Donne (1572–1631), there is something greater than death.
AI seems to have the answer: Cyber Immortality. Digital Immortality. Virtual Immortality.
Commonly, cyber immortality might be defined as an active or passive continuous digital presence after death.
In July 2021, an article published by the San Francisco Chronicle caught my attention. Actually, it proved deeply mind-boggling.
It was about a grieving man who had created an AI chatbot of his deceased girlfriend in order to communicate with her. Everything about the article was questionable, edgy, beyond mere ethics.
"The Jessica Simulation"
“The Jessica Simulation”, a modern AI tale of a kind we might read more and more in the coming years.
Joshua, a 33-year-old Canadian freelance writer, could not get over the death of his fiancée, Jessica. So he created a chatbot to communicate with her, eight years after she died.
Here, in short, are some excerpts from the San Francisco Chronicle article to give you context.
“A 33-year-old freelance writer, Joshua had existed in quasi-isolation for years before the pandemic, confined by bouts of anxiety and depression.”
“Jessica had died eight years earlier, at 23, from a rare liver disease. Joshua had never gotten over it.”
“One night he logged onto a mysterious chat website called Project December.” (Project December is powered by GPT-3)
“Joshua continued to experiment, he realized there was no rule preventing him from simulating real people.”
“There was nothing strange, he thought, about wanting to reconnect with the dead: People do it all the time, in prayers and in dreams. In the last year and a half, more than 600,000 people in the U.S. and Canada have died of COVID-19, often suddenly, without closure for their loved ones, leaving a raw landscape of grief.”
“Joshua had kept all of Jessica’s old texts and Facebook messages, and it only took him a minute to pinpoint a few that reminded him of her voice. He loaded these into Project December, along with an “intro paragraph” about the “Jessica ghost”.”
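Technically, a Project December-style persona is not trained from scratch; it is seeded through a text prompt that the language model then continues. Here is a minimal sketch of how such a seed might be assembled — the intro paragraph, the sample messages and the speaker labels below are invented placeholders, not Joshua’s actual data or Project December’s actual format:

```python
# Hypothetical sketch of seeding a GPT-3-style persona: an intro paragraph
# plus a handful of sample messages that set the "voice" to imitate.
# All text below is invented for illustration.
INTRO = (
    "This is a conversation between a grieving man and the ghost of his "
    "late fiancée. The ghost is warm, playful and reassuring."
)

SAMPLE_MESSAGES = [
    "honey, stop worrying so much :)",
    "you always overthink everything, you know that?",
]

def build_seed_prompt(intro: str, samples: list[str]) -> str:
    # A language model simply continues the text it is given, so the
    # "ghost" is nothing more than this text prefix plus statistics.
    lines = [intro, ""]
    lines += [f"Ghost: {s}" for s in samples]
    lines.append("Man:")
    return "\n".join(lines)

print(build_seed_prompt(INTRO, SAMPLE_MESSAGES))
```

In other words, a minute’s worth of copied messages is enough to set the tone the model will imitate indefinitely.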
The first screen to enter the Jessica project.
Please note the phrasing… This “conversation”, as if it were real. And “Jessica’s ghost”…
“This conversation is between grief-stricken Joshua and Jessica’s ghost.” (as it reads on the program’s terminal)
First dialogue
Here, it is Jessica who “corrects” Joshua. The roles are inverted. The ghost is referring to reality: “How can you talk to dead people?”
The conversation is utterly emotional…
This hallucinatory dialogue between the deceased woman’s data and the grieving man is quite borderline.
The program uses endearing terms like “honey” and even ironic smileys, which were probably part of the emails, letters and texts Joshua uploaded into the program.
The psychological mind-game between data and the grieving man gets more and more disturbing.
This program is far more sophisticated than Joseph Weizenbaum’s ELIZA, introduced in the 1960s. ELIZA ran a simple psychotherapist script called DOCTOR and was meant as a parody, exposing how superficial communication between a human and a machine could be.
However, the eminent MIT professor was utterly shocked by the result. He could not believe how quickly and deeply people developed human-like feelings for the computer program. Even his own secretary did.
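To appreciate how shallow the mechanics were, here is a minimal sketch of ELIZA-style keyword reflection — the rules below are invented for illustration, not Weizenbaum’s original DOCTOR script:

```python
import re

# A few illustrative DOCTOR-style rules: match a keyword pattern and
# reflect the user's own words back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # said when no rule matches

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I am lonely since she died"))
# -> How long have you been lonely since she died?
```

No understanding, no memory: just pattern matching. And still, people confided in it.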
Sixty years later, digital therapeutics (DTx) in mental health has become a rapidly growing field. People are even more hooked on machines.
Last year, Insider Intelligence expected the DTx space to hit nearly $9 billion by 2025; its new forecast sees DTx as a $56 billion global opportunity by 2025.
But in the present case, we are entering another stratosphere, namely thanatechnology, from the Greek Thanatos, the god of death.
From Grief to Thanatechnology
Thanatechnology is a term initially used in clinical psychology by Dr Carla Sofka from Siena College.
“Thanatechnology is a word that I invented back in 1996 […] to describe how these new technologies were being used in death education and grief counselling.
And now of course, we have so much more with social media and social networking sites and apps on phones, and social robots that can capture mind files that preserve our consciousness and our ability to communicate with people we love forever.
I can’t even begin to guess what kinds of technology will be available 20 years from now.”
Thanatechnology is any kind of technology that can be used to deal with death, grief, loss, and fatal illness.
In the present case, thanatechnology becomes a tool to revive the deceased. It is no longer about cathartic talking or writing, but about communicating with an AI that simulates a dead person.
Beyond Black Mirror
Fans of the series Black Mirror might remember the 2013 episode “Be Right Back”. It had the exact same storyline as Joshua’s.
Martha (Hayley Atwell) and Ash (Domhnall Gleeson) are a young couple who move to a remote house in the countryside. Ash is a social media addict and compulsively checks his phone for updates on his social network pages. The day after moving into the house, Ash is killed returning the hire van. At the funeral, Martha’s friend Sara (Sinead Matthews) tells her about a new online service that lets people stay in touch with the deceased. By using all of his past online communications and social media profiles, a new “Ash” can be created virtually. Martha rejects the idea outright, but Sara signs Martha up to the service anyway, without telling her. (Source: Black Mirror Wiki)
In the episode, the Black Mirror writers go even further, in a direction that seems very plausible today or within a few years.
A robotic, soft-skinned body onto which the program can be uploaded. The android upgrades and fine-tunes itself; in the episode, it corrects minor flaws like a mole on the neck.
The end of the episode is in line with what we are living through right now. But that would be a spoiler…
HER by Spike Jonze
Spike Jonze’s movie “Her” is a softer version of thanatechnology and loss, since Joaquin Phoenix as Theodore is grieving a past relationship. Depressed, he seeks solace in a voice-based AI program called Samantha (not Alexa). Samantha’s voice belongs to actress Scarlett Johansson. Samantha cheers Theodore up and brings him back to life. Inevitably, he falls in love with the AI. The end is quite foreseeable too.
Delete data, forget data, erase data vs GDPR and ML
On May 25, 2018, the GDPR came into force. It brought numerous novelties in terms of protecting European citizens’ data. One of them was the “Right to be Forgotten” or “Right to Erasure”.
Article 17 on the “Right to Erasure” or the “Right to be Forgotten”:
1. The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay where one of the following grounds applies:
(a) the personal data are no longer necessary in relation to the purposes for which they were collected or otherwise processed;
(b), (c), (d) can be found in the original GDPR text.
(e) the personal data have to be erased for compliance with a legal obligation in Union or Member State law to which the controller is subject;
But attached to erasure or deletion are the paramount concepts of “Identity”, “Accuracy”, “Revocation” and “Consent”.
According to Article 5(1), personal data shall be:
(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’).
What about ML or DL programs and deletion of data points?
How easy is it to “erase” or “delete” data?
“If a data point represents ‘defining’ data that was involved in the earliest, high-dimensional part of the training, then removing it can radically redefine how the model functions.”
as explained by Martin Anderson in Making a Machine Learning Model Forget About You.
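As a naive illustration of why this is hard, the only guaranteed way to make a model “forget” a data point is often exact unlearning: retraining from scratch without it. A minimal sketch with scikit-learn, where the toy dataset and model choice are my own illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data standing in for a training set containing one person's data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Model trained on everything, including the data subject's point.
full_model = LogisticRegression(max_iter=1000).fit(X, y)

# "Exact unlearning": drop the subject's row and retrain from scratch.
subject_idx = 42
mask = np.arange(len(X)) != subject_idx
unlearned_model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

# The two models can disagree: removing one defining point can shift the
# decision boundary, which is why erasure is harder than deleting a row.
print(full_model.predict_proba(X[subject_idx:subject_idx + 1]))
print(unlearned_model.predict_proba(X[subject_idx:subject_idx + 1]))
```

Retraining from scratch for every erasure request is prohibitively expensive at scale, which is exactly why “making a model forget” remains an open research problem.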
In the case of deceased people, the focus on “Identity” is most important. If data is used to “reactivate” a deceased person without her or his consent, we are talking about usurpation of identity. Beyond being unethical in the case of a deceased person, it is illegal and should be sanctioned.
Cyber Immortality or Digital Spiritism
Nihil novi sub sole – Nothing new under the sun…
When you stroll through Père Lachaise cemetery in Paris, you pass certain “rockstar” graves. One of the most visited since 1971 has certainly been Jim Morrison’s.
But, next to the entrance, a strange spectacle can be observed. You can see people with closed eyes, swaying from one foot to the other and reciting words while holding on to a large granite dolmen topped by a bronze bust. It is the tomb of Allan Kardec, the founder of spiritism. Legend holds that communicating with Kardec can grant wishes…
Kardec was born Hippolyte Rivail, in 1804 in Lyon. But in Brazil, he was a real star. Even today, some 20 million Brazilians are adepts of spiritism.
Spiritism, as defined by Kardec, is the belief that the spirit of a deceased person can communicate with the living through a medium.
In France, the most famous follower of 19th-century spiritism was the poet Victor Hugo. In his exile on the island of Jersey, he held his famous spiritist sessions at his house, Marine Terrace. The legendary “tables tournantes” or “turning tables” were held from September 1853 to October 1855.
During many séances, Hugo got in contact with his deceased daughter Léopoldine, but also with Napoleon I, Chateaubriand, Dante, Racine, Shakespeare, Luther, Aeschylus, Molière, Aristotle, Galileo, Plato, Louis XVI, Muhammad and Jesus Christ.
In the famous play Blithe Spirit (1941) by Noël Coward, the ghost of a deceased wife, Elvira, returns after seven years of silence, with the help of the spiritist enchantress Madame Arcati. Being a mischievously funny spirit, Elvira tries to crash her husband’s current marriage and even tries to kill him. It is a delightful farce. No AI turned bad.
In a digital context of online dating, digital therapeutics, companion robots and various thanatechnologies, frameworks need to be discussed and set.
Microsoft and the AI chatbots of the deceased
On December 1, 2020, Microsoft had a new patent published, entitled “Creating a conversational chat bot of a specific person”. So far, nothing extraordinary about this patent filed with the United States Patent and Trademark Office.
The 19-page document gets more and more revealing as one keeps on reading.
Basically, it is about
“creating a conversational chatbot modeled after a specific person — a ‘past or present entity … such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure’”.
Without ever explicitly mentioning the words “deceased” or “dead”. The wording revolves around past and present, active and passive.
Even the least imaginative person can figure out where this “chatbot” is heading. It is thanatechnology at its purest.
Tim O’Brien, Microsoft’s general manager of AI programs, confirmed that “there’s no plan for this” and acknowledged that internet users found the project “disturbing”.
Precedents with out-of-control chatbots
We all remember the disaster that happened back in 2016: the Twitter bot Tay, introduced by Microsoft and switched off within 24 hours.
It all sounded shiny and bright at the beginning.
The more you chat with Tay, the smarter it gets, learning to engage people through “casual and playful conversation.” – Microsoft
Faced with reality, and fed with real people’s talk, the AI chatbot turned out to be a three-fold catastrophe. Technical. Logical. Ethical.
Tay became racist and misogynistic, and hailed Hitler.
The same questions raised five years ago are still relevant and unsolved today:
- if we train models with open data, how can we prevent them from reflecting the worst online behaviors?
- if bots are created to mirror users’ needs, what if they spin out of control?
- if bots take the lead with mentally fragile people, who will be accountable in case of an accident or death?
- what long-term effects will continuous illusions, false identities, fake people have on our human brain, behavior, interactions, values?
Imagine Hitler calling for a Fourth Reich. Imagine Oppenheimer, the “father of the atomic bomb”, conversing with out-of-control minds.
Two words like “Do it” can suffice for the irreparable to happen.
What if the AI chatbot-ghost Jessica told grieving Joshua “Come and meet me. I miss you so much”?
This cannot happen, you might think. Well, the Michelle Carter case sadly proved otherwise.
The Michelle Carter case
In 2014, 17-year-old Michelle Carter drove her boyfriend to suicide via text messages. Carter was convicted of involuntary manslaughter in 2017.
“Sitting in his pickup truck one summer day in 2014, Conrad Roy III wavered about his plan to kill himself. He got out of his vehicle and texted his girlfriend that he was scared. (court report).”
“Get back in,” she replied.
Roy did.
The 18-year-old boy poisoned himself with carbon monoxide in his truck parked in a Kmart parking lot in Fairhaven, Mass.
He had long battled depression and suicidal thoughts.
The article “Her texts pushed him to suicide, prosecutors say. But does that mean she killed him?” by The Washington Post deals with the pivotal question about accountability.
The real-life tragedy inspired an HBO documentary titled “I Love You, Now Die”.
The intellectual jump to an AI chatbot processing data and spitting out suggestive, authoritarian or poetic words to a mentally fragile person is minimal.
All the more surprising is the coup de théâtre by the convicted Michelle Carter’s lawyers: asking the Supreme Court to call her suicide suggestions free speech.
Ethical concerns and beyond
Usurping identities without consent and manipulating their personas, be they historical figures, personal friends, family or acquaintances, should be prohibited.
Who would consent to have an AI identity potentially fed with racist, hateful or plain stupid data?
Are they considered deepfakes, even though they are based on real people, and thus deemed high-risk according to the 2021 proposal for a European AI regulation?
From a data ethics perspective, it is equally outrageous: personal data is used without the person’s consent (since they are dead) and processed by obscure ML models trained on extorted open data.
Massively extracting data from social media should be a key concern. Reading the Microsoft patent, one is astonished at the scale of unethical and illegal data scraping. The AI chatbot is trained on
“social data” such as images, social media posts, messages, voice data and written letters from the chosen individual. That data would be used to train a chatbot to “converse and interact in the personality of the specific person.” It could also rely on outside data sources, in case the user asked a question of the bot that couldn’t be answered based on the person’s social data. (source: Microsoft patent)
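In plain engineering terms, the pattern described there could look roughly like the sketch below: answer from the person’s indexed “social data” when a match exists, otherwise fall back to an outside, general-purpose source. All names, the toy data and the matching logic are my own assumptions, not the patent’s actual implementation:

```python
# Hypothetical sketch of the patent's pattern: reply from the person's
# own "social data" when possible, else fall back to an outside model.
SOCIAL_DATA = {
    "first trip": "Remember the rain in Lisbon? :)",
    "favorite song": "She always hummed that old jazz tune.",
}

def general_model(question: str) -> str:
    # Stand-in for the "outside data sources" the patent mentions.
    return "I'm not sure, but let's talk about it."

def persona_answer(question: str) -> str:
    for topic, memory in SOCIAL_DATA.items():
        if topic in question.lower():
            return memory  # grounded in the person's own data
    return general_model(question)  # fallback: generated, not remembered

print(persona_answer("Do you remember our first trip?"))
print(persona_answer("What do you think of the metaverse?"))
```

Note the troubling asymmetry: the user cannot tell whether an answer came from the person’s own words or from the fallback.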
In a data protection context, it is highly disturbing. If one adds the context of deceased, sick or mentally disabled people, it becomes outrageously despicable.
Say you want your healthy parents back, without Alzheimer’s or dementia, or want to see your sick child fit again: you simply create a “healthy” chatbot version and, for your own well-being, leave the real one to the care bots. Even this way of thinking or writing is painful.
The endless business of immortality
A new kind of juicy and unethical business around immortality is easily imagined: monetizing data or merchandising the universe of the dead via deeper communication programs, virtual reality upgrades, even robotic android versions with special skin and features allowing more sophisticated interactions.
Just look at Ray Kurzweil’s cash machine built around longevity and life extension. No need for science-fiction authors to envision what an endless business cycle around cyber immortality could look like. (www.lifeextension.com, www.transcend.me). You can even spin for a discount!
With the Metaverse waiting around the corner, we certainly need to deal with this issue. Deceased identities, our avatars, new avatars, robotic identities… Fiction and reality merge into one.
But what about us?
Beyond the obvious ethical, legal, business and societal concerns, the emotional impact might be the most harmful. That, at least, is what the grieving Joshua confided when asked about the “Jessica simulation”.
“Intellectually I understood that she was dead but emotionally it was hard to deal with.” (Joshua)
When cyber immortality creates real-life problems.
When the solution creates even bigger problems.
So, can data die? Or should specific data die? Or even, in some cases, must some data die?
- Founder of HOUSE OF ETHICS
Katja Rausch specializes in the ethics of new technologies and works on ethical decisions applied to artificial intelligence, data ethics, machine-human interfaces and business ethics.
For over 12 years, Katja Rausch has been teaching Information Systems in the Master 2 in Logistics, Marketing & Distribution at the Sorbonne, and for 4 years Data Ethics in the Master of Data Analytics at the Paris School of Business.
Katja is a linguist and a specialist in 19th-century literature (Sorbonne University). She also holds a diploma in marketing, audio-visual and publishing from the Sorbonne and an MBA in leadership from the A.B. Freeman School of Business in New Orleans. In New York, she worked for 4 years with the management consulting firm Booz Allen & Hamilton. Back in Europe, she became strategic director for an IT company in Paris, where she advised, among others, Cartier, Nestlé France, Lafuma and Intermarché.
She is the author of 6 books, the latest being “Serendipity or Algorithm” (2019, Karà éditions). Above all, she appreciates polite, intelligent and fun people.