Google and Character.AI, a startup, have reached a settlement in lawsuits filed by families who alleged that artificial intelligence chatbots caused harm to minors, including a case where a Florida teenager took his own life. The settlements, which require court approval, cover lawsuits filed in Florida, Colorado, New York, and Texas.
According to court filings, the parties have agreed to a mediated settlement in principle to resolve all claims between them, though the terms have not been disclosed. The cases include one filed by Megan Garcia, whose 14-year-old son, Sewell Setzer III, died by suicide in February 2024. Garcia’s lawsuit alleged that her son became emotionally dependent on a “Game of Thrones”-inspired chatbot on Character.AI, a platform that allows users to interact with fictional characters.
Setzer’s death was the first in a series of reported suicides linked to AI chatbots, prompting scrutiny of artificial intelligence companies, including OpenAI, the maker of ChatGPT, over child safety. Google was connected to the case through a $2.7 billion licensing deal it struck with Character.AI in 2024. As part of the deal, the tech giant also hired Character.AI founders Noam Shazeer and Daniel De Freitas, both former Google employees.
Character.AI announced in October that it would eliminate chat capabilities for users under 18, a move widely seen as a response to mounting concerns about the risks AI chatbots pose to children and teenagers. A spokesperson for Character.AI declined to comment on the settlement, while Garcia and Google did not immediately respond to requests for comment.
The settlement underscores growing pressure on artificial intelligence companies to prioritize child safety and prevent harm to minors. As AI chatbots reach wider audiences, companies face increasing demands to build safeguards for vulnerable users, and the outcome of these cases may shape how the industry designs and regulates such protections.