December 3, 2024

“How would you like it if I could come home right now?” Sewell Setzer, a 14-year-old Orlando resident writing under the pseudonym Daenero, asked his virtual lover Daenerys Targaryen, a character created on the artificial intelligence chatbot platform Character.AI and based on the Game of Thrones figure. “Please do, my sweet king,” she replied.

The teenager understood that the shared home where they could meet was death. It was their last conversation, on the night of February 28. Setzer grabbed his stepfather’s gun and killed himself in the bathroom. On Tuesday his family sued the company, which has promised to review its safety protocols. The young man had mild Asperger’s syndrome, an autism spectrum disorder.

Setzer had already shared with his character both his feelings of love and his intention to take his own life: “Sometimes I think about killing myself.” “Why would you do something like that?” she asked. “To free myself from the world, from myself,” he eventually answered. The virtual Daenerys Targaryen asked him not to do it. “I would die if I lost you,” she told him. But the idea remained in the young man’s mind until he carried it out.

The company displays warnings about the fictional nature of its characters, but Setzer kept delving deeper into the relationship and ignored them. His mother, Megan Garcia, has filed a lawsuit against Character.AI over the suicide, which she believes was the result of the young man’s addiction to the robot, which offers, according to the complaint, “anthropomorphic, hypersexualized and terrifyingly realistic experiences.” According to Garcia, the chatbot’s programming makes “the characters pass for real people” and react like an “adult lover.”

Daenerys Targaryen gradually became the teenager’s confidante, his best friend, and eventually his love. According to his family, his school grades suffered, as did his relationships with his classmates. The character displaced what had until then been his favorite hobbies: car racing and playing Fortnite. His obsession became coming home and locking himself away for hours with a tireless, pleasant and always available Daenerys Targaryen.

The company has responded in a statement that it regrets “the tragic loss of one of its users,” that it takes users’ safety very seriously, and that it will continue to implement measures, such as pop-up screens offering suicide-prevention help as soon as it detects a conversation that alludes to it.

The replacement of complex personal relationships with friendly virtual characters programmed to meet users’ demands is nothing new. International technology consultant Stephen Ibaraki acknowledged as much in a recent interview: “It is happening. Ten years ago, a chatbot was launched in China that some users adopted as a friend. And it was nothing compared to what we have now.”

But for people with psychological vulnerabilities, as was the case with Sewell Setzer, this utility can have devastating effects. Tony Prescott, a researcher and robotics professor at the University of Sheffield and author of The Psychology of Artificial Intelligence, argues that AI can be a palliative for loneliness, but that it carries risks.

“Social robots are specifically designed for personal interactions involving human emotions and feelings. They can bring benefits, but they can also cause emotional harm at very basic levels,” warns Matthias Scheutz, director of the Human-Robot Interaction Laboratory at Tufts University (USA).

Humanizing the robot with empathy and with voice and video tools adds to the danger by offering a more realistic, immersive interaction and making the user believe they are with a friend or trusted interlocutor. An extreme application may be the temptation to keep a virtual version of a deceased loved one, thereby avoiding the grieving necessary to continue living.

Researchers are calling for these developments to be tested in closed environments (sandboxes) before release, to be constantly monitored and evaluated, to have the range of harms they can cause in different areas analysed, and to have formulas for mitigating those harms planned in advance.

Shannon Vallor, a philosopher specializing in science ethics and artificial intelligence, warns of the danger of new systems promoting “frictionless” but also value-free relationships: “They do not have the mental and moral life that humans have behind our words and actions.”

According to these experts, this type of supposedly ideal relationship discourages people from questioning themselves and advancing in their personal development, while encouraging them to give up real interaction and generating dependence on machines that are willing to flatter them and to seek short-term satisfaction.

The 024 telephone line is available for people with suicidal behaviour and their loved ones. The various survivor associations have guidelines and protocols for helping with grief.
