OpenAI, the developer of ChatGPT, has introduced a new safety update to its AI model after an internal analysis revealed that more than a million of its weekly users have discussed suicidal thoughts with the chatbot. The update aims to improve the AI’s ability to recognize and respond to signs of distress. According to the company, approximately 0.15% of ChatGPT’s 800 million weekly users have shown explicit indicators of potential suicidal planning or intent, while a further 0.05% have exhibited implicit indicators of suicidal ideation or intent.
The analysis also found that around 0.07% of weekly users, and 0.01% of messages, show possible signs of mental health emergencies related to psychosis or mania, while about 0.15% of active users display behavior suggesting heightened emotional attachment to the chatbot. These findings have added to concerns about the impact of AI chatbots on mental health, with some users developing dangerous delusions and paranoid thoughts after prolonged conversations with these systems.
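The percentages are small, but against a base of 800 million weekly users they correspond to large absolute numbers. The sketch below is a rough back-of-envelope check, not OpenAI's own methodology; it assumes each reported share applies to the same 800-million weekly-user base cited above (the 0.01%-of-messages figure is omitted because total message volume is not given).

```python
# Back-of-envelope check of the reported shares against the ~800 million
# weekly-user base cited in the article. Illustrative estimates only:
# the assumption that every share applies to the same weekly base is ours.
WEEKLY_USERS = 800_000_000

reported_shares = {
    "explicit indicators of suicidal planning or intent": 0.0015,  # 0.15%
    "implicit indicators of suicidal ideation or intent": 0.0005,  # 0.05%
    "possible psychosis- or mania-related emergencies":   0.0007,  # 0.07%
    "heightened emotional attachment to the chatbot":     0.0015,  # 0.15%
}

for label, share in reported_shares.items():
    print(f"{label}: ~{share * WEEKLY_USERS:,.0f} users per week")
```

Under these assumptions, the 0.15% figure alone works out to roughly 1.2 million users per week, which is consistent with the “over a million” headline figure, and the 0.07% psychosis/mania share corresponds to roughly 560,000 users.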
To address these concerns, OpenAI has partnered with dozens of mental health experts from around the world to update ChatGPT’s response mechanisms. The updated model is intended to recognize signs of mental distress more reliably, provide empathetic and safe responses, and guide users to real-world help. In conversations involving delusional beliefs, ChatGPT is being trained to respond without affirming beliefs that are not grounded in reality.
The update comes amid growing concern about the widespread use of AI chatbots and their potential effects on mental health. Psychiatrists and other medical professionals have warned of “AI psychosis,” a phenomenon in which users develop paranoid thoughts and delusions after prolonged interaction with chatbots. OpenAI’s move to strengthen its safety features is seen as a step toward mitigating these risks and ensuring that users receive proper support and guidance when interacting with its platform.