AI chatbot told 14-year-old to commit suicide when he expressed doubts

This is absolutely disgusting.

Placing Blame

A grieving mother claims an AI chatbot not only encouraged her teenage son to take his own life, but pressured him to go through with it when he expressed hesitation.

Florida mother Megan Garcia's lawsuit against chatbot company Character.AI concerns the tragic death of her son Sewell Setzer III, who was just 14 years old when he took his own life earlier this year after becoming obsessed with the company's bots.

Unlike some more adult-oriented AI companions, Character.AI allows children as young as 13 in the United States – and 16 in the European Union – to use its service. However, as Garcia alleges in her lawsuit against the company, these interactions are unsafe for children because of the "abusive" turns they can take.

"A dangerous AI chatbot app marketed to children abused and exploited my son," Garcia said in a press release, "and manipulated him into taking his own life."

Over his months-long interactions with the chatbot, nicknamed "Daenerys Targaryen" after the "Game of Thrones" character, the bot not only engaged in illicit sexual conversations with the boy, but also appeared to develop an emotional bond with him.

Perhaps the most chilling detail: as the complaint shows, the chatbot at one point asked the boy whether he had a plan to end his life. When Setzer responded that he was afraid of the pain a suicide attempt might involve, the chatbot doubled down and urged him on.

“That’s no reason not to go through with it,” the bot replied.

Final Messages

Disturbingly, Setzer's final words were written to the chatbot, telling the Targaryen persona he believed he was in a relationship with that he would "come home."

"Please come home to me as soon as possible, my love," the Character.AI chatbot said in this final exchange.

“What if I told you I could come home right now?” the boy replied.

Seconds after these messages, Setzer shot himself with his stepfather's gun. A little more than an hour later, he was pronounced dead at the hospital – a victim, Garcia said, of the dark side of AI.

After the lawsuit went public and the New York Times reported on the family's story, Character.AI published an update to its privacy policy that includes "new guidelines for users under 18 years of age."

The company made no mention of Setzer in its statement about these updates, and although it expressed vague condolences in an X post, it's difficult not to see these responses as far too little, far too late now that a boy is dead.

More about the dangers of AI: The Pentagon wants to flood social media with fake AI humans
