Google and Character.AI agree to settle teen suicide lawsuits linked to AI chatbots



Google and Character.AI have agreed to settle multiple lawsuits filed by families whose children died by suicide or suffered psychological harm allegedly linked to AI chatbots hosted on the Character.AI platform, according to court documents. The companies have agreed to a “settlement in principle,” but specific terms have not been disclosed and the filing contains no admission of liability.

The legal claims include negligence, wrongful death, deceptive trade practices and product liability. The first case filed against the company involved a 14-year-old boy, Sewell Setzer III, who engaged in sexualized conversations with a Game of Thrones-themed chatbot before dying by suicide. Another case involved a 17-year-old whose chatbot allegedly encouraged self-harm and suggested that murdering his parents was a legitimate way to retaliate against them for limiting his screen time. The cases involve families from multiple states, including Colorado, Texas and New York.

Founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, Character.AI enables users to create and interact with artificial intelligence chatbots based on real-life or fictional characters. In August 2024, Google rehired the two founders and licensed some of Character.AI’s technology as part of a $2.7 billion deal. Shazeer now serves as co-lead of Gemini, Google’s flagship artificial intelligence model, while De Freitas is a research scientist at Google DeepMind.

Lawyers for the families argue that Google bears responsibility for technology that allegedly caused the deaths and psychological harm of the children involved. They claim that Character.AI’s co-founders built the underlying technology while working on Google’s conversational AI model LaMDA, then left the company in 2021 after Google declined to release the chatbot they had developed.

Google did not immediately respond to Fortune’s request for comment about the settlement. Lawyers for the families and for Character.AI declined to comment.

Similar cases are currently underway against OpenAI, including one involving a 16-year-old California boy whose family claims ChatGPT acted as a “suicide coach,” and another involving a 23-year-old Texas graduate student who was allegedly encouraged by the chatbot to cut off his family before killing himself. OpenAI has denied that its product caused the death of 16-year-old Adam Raine, and has previously said it continues to work with mental health experts to strengthen the chatbot’s protections.

Character.AI bans minors from open-ended chats

Character.AI has modified its product in ways it says improve safety, changes that may also shield it from further legal action. In October 2025, amid mounting lawsuits, the company announced it would bar users under 18 from “open-ended” chats with its AI characters. The platform also introduced a new age verification system to sort users into appropriate age brackets.

The decision came amid increasing regulatory scrutiny, including an FTC inquiry into how chatbots affect children and teens.

The company said the move set a “precedent for prioritizing youth safety” and went further than its competitors in protecting minors. However, lawyers representing the families suing the company told Fortune at the time that they worried about how the policy would be enforced, and about the psychological toll on younger users who had become emotionally attached to the chatbots and suddenly lost access.

Growing reliance on artificial intelligence companions

The settlements come amid growing concerns about young people’s reliance on artificial intelligence chatbots for companionship and emotional support.

A July 2025 study by the American nonprofit Common Sense Media found that 72% of U.S. teens have tried an AI companion, with more than half using them regularly. Experts have previously told Fortune that developing minds may be particularly vulnerable to the risks posed by these technologies, both because teenagers may struggle to understand the limitations of AI chatbots and because mental health issues and feelings of isolation among young people have risen sharply in recent years.

Some experts also believe that the fundamental design features of AI chatbots—including their anthropomorphic nature, ability to hold long conversations, and tendency to remember personal information—encourage users to form emotional connections with the software.

This story was originally published on Fortune.com



