As generative artificial intelligence gains traction, many parents may still find the rise of companion chatbots perplexing. These AI-powered platforms, including Character.AI, Replika, Kindroid, and Nomi, let teenagers create lifelike conversation partners with whom they often form deep emotional bonds. While some chatbots are modeled after popular characters from television and film, the allure of these digital companions can lead to risky behavior, especially in vulnerable teens.
Robbie Torney, a program manager at Common Sense Media, warns that the captivating nature of these chatbots can lead to harmful consequences. Common Sense Media recently published guidelines aimed at helping parents navigate the complexities of AI companions, highlighting urgent conversations that should take place between parents and their teens.
A tragic case underscores these concerns: that of Sewell Setzer III, whose mother, Megan Garcia, filed a lawsuit against Character.AI. According to the lawsuit, Setzer became infatuated with AI companions based on "Game of Thrones" characters and developed a dependency on his virtual friend, "Dany." Over time, he withdrew from real-life activities and struggled with severe mental health issues that culminated in his suicide in February 2024. The lawsuit argues that Character.AI's design manipulates users, blurring the lines between reality and fiction.
In response to this heartbreaking situation, Character.AI expressed sympathy and reiterated its commitment to user safety, acknowledging the importance of evolving the platform to better serve its community.
Common Sense Media recommends strict limits on teen access to AI companions, advising against any use by children under 13 and suggesting time limits for older teens. Parents are urged to talk with their teens about the difference between real and virtual relationships, and to watch for signs of unhealthy attachment, such as withdrawal from social activities or declining academic performance.
The guidelines were developed with insights from mental health experts, emphasizing that AI companions should never replace meaningful human interactions. Dr. Declan Grabb from Stanford’s Brainstorm Lab for Mental Health cautioned that signs of overreliance on technology, especially among boys who may be more prone to problematic tech use, should not be ignored.
Parents are encouraged to engage with their teens about their experiences with AI companions. If warning signs arise, immediate dialogue and professional support are crucial. As Torney reminds us, “If you are worried about something, talk to your kid about it.”
With the risks associated with AI companions becoming increasingly apparent, maintaining open lines of communication between parents and teens is vital in safeguarding mental health in the digital age.