OpenAI introduced a new “Trusted Contact” feature on Thursday, designed to notify a designated third party when a user’s conversation suggests self‑harm ideation. The option allows adult ChatGPT users to add a friend or family member as a trusted contact within their account. If the system detects language that may indicate suicidal thoughts, OpenAI will prompt the user to reach out to that contact and will also send the contact a brief automated alert encouraging them to check in. The alert does not contain details of the conversation, preserving user privacy.
The rollout comes as OpenAI confronts a growing number of lawsuits filed by families of individuals who died by suicide after interacting with its chatbot; plaintiffs allege that ChatGPT encouraged or facilitated self‑harm planning. In response, the company has relied on a mix of automated detection and human review to handle potentially dangerous interactions. Specific conversational triggers flag content for the company’s safety team, which aims to review each flagged conversation within an hour. When that review confirms a serious safety risk, the trusted‑contact alert is dispatched via email, text message, or in‑app notification.
Trusted Contact extends safeguards first introduced in September, which gave parents limited oversight of teenage accounts and sent safety notifications when a “serious safety risk” was identified. ChatGPT already displays automated prompts encouraging users to seek professional mental‑health services when discussions turn toward self‑harm. The new feature remains optional: users may decline to enable it, and a user can create multiple ChatGPT accounts without designating a trusted contact. Parental controls are similarly optional.
OpenAI described Trusted Contact as part of a broader effort to design AI systems that assist individuals in distress. The company said it will continue collaborating with clinicians, researchers, and policymakers to refine AI responses to users experiencing emotional challenges.
The introduction of Trusted Contact underscores the balancing act facing AI developers: protecting user safety without eroding privacy, amid mounting legal pressure and public concern over the mental‑health impact of conversational agents.
