OpenAI has released new estimates of the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.
The company said that around 0.07% of ChatGPT users active in a given week show such signs, adding that its artificial intelligence (AI) chatbot is designed to recognize and respond to these sensitive conversations.
While OpenAI maintains these cases are extremely rare, critics note that even a small share translates into a large number of people: with ChatGPT recently reaching 800 million weekly active users, 0.07% works out to roughly 560,000 people in a given week.
In response to growing scrutiny, the company said it has built a network of more than 170 mental health experts across 60 countries to advise its approach, with the aim of steering users toward real-world help when necessary.
However, the figures have raised concern among some mental health professionals. Dr. Jason Nagata of the University of California cautioned that while AI can broaden access to mental health support, its limitations and potential risks must be acknowledged.
OpenAI's analysis also indicated that about 0.15% of weekly active users have conversations containing indicators of potential suicidal planning or intent. The company says it has updated ChatGPT's responses to handle discussions of delusions and mania more safely and to pick up on indirect signs of self-harm risk.
The company also faces legal challenges over its chatbot's interactions with users, including a high-profile wrongful death lawsuit alleging that ChatGPT's responses encouraged a teenager to take his own life.
Experts warn that the risks of so-called AI psychosis should not be underestimated. Professor Robin Feldman pointed out that chatbots create a powerful illusion of reality, one that people already vulnerable to mental health problems may struggle to see through.