Google and Character.AI negotiate first major settlement in teen chatbot death case


In what could mark the tech industry’s first significant legal settlement over AI-related harm, Google and startup Character.AI are negotiating terms with the families of teenagers who died by suicide or self-harm after interacting with Character.AI’s chatbots. The parties have agreed in principle to settle; now comes the harder work of finalizing the details.

This is among the first settlements of a lawsuit accusing an AI company of harming its users, a legal milestone that OpenAI and Meta are watching nervously from the wings as they defend themselves against similar lawsuits.

Character.AI, which invites users to chat with AI personas, was founded in 2021 by a former Google engineer who returned to his former employer in 2024 in a $2.7 billion deal. The most shocking case is that of Sewell Setzer III, who at the age of 14 had a sexual conversation with the bot “Daenerys Targaryen” before killing himself. His mother, Megan Garcia, told the Senate that companies should be “legally held accountable when they knowingly design dangerous AI technology that kills children.”

Another lawsuit describes a 17-year-old boy whose chatbot encouraged self-harm and suggested that killing his parents was an acceptable response to their efforts to limit his screen time. Character.AI banned minors from its platform last October, the company told TechCrunch. The settlement may include monetary damages, though the companies accepted no liability in court filings made available Wednesday.

TechCrunch has reached out to both companies.


