OpenAI has released data that has sparked widespread debate in the world of technology and psychiatry. According to the company's estimates, a small share of ChatGPT users show possible signs of serious mental health conditions such as mania, psychosis, or suicidal thoughts.
According to OpenAI, about 0.07% of weekly active users show possible signs of such conditions, while roughly 0.15% of users have conversations containing explicit indicators of suicidal planning or intent. The company calls such cases “extremely rare,” but experts emphasize that even a small percentage of 800 million users adds up to hundreds of thousands of people worldwide: 0.07% of 800 million is roughly 560,000.
After the concerns emerged, the company assembled an international support network of more than 170 mental health professionals from 60 countries. They advise the developers, helping to build mechanisms that recognize warning signs in users' ChatGPT conversations and encourage those users to seek real-world help.
Recent versions of ChatGPT have also been updated: the system can respond empathetically to mentions of self-harm, delusions, or manic states, and in some cases redirect the conversation to “safer” versions of the model.
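To make the general idea concrete, here is a minimal, purely illustrative sketch in Python of how a detect-and-route safety layer could work. Everything in it (the phrase lists, the crisis message, the model names) is a hypothetical assumption for illustration only; it is far simpler than whatever OpenAI actually runs, and it is not the company's implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical phrase lists; a real system would use trained classifiers,
# not keyword matching. These are illustrative only.
RISK_PHRASES = {
    "self-harm": ["hurt myself", "end my life", "kill myself"],
    "mania": ["haven't slept in days", "i am invincible"],
    "psychosis": ["voices are telling me", "they are controlling my thoughts"],
}

CRISIS_MESSAGE = (
    "I'm really sorry you're going through this. You deserve support from a real person - "
    "please consider reaching out to a local crisis line or a mental health professional."
)

@dataclass
class RoutingDecision:
    category: Optional[str]   # which risk category was detected, if any
    escalate: bool            # hand the conversation to a more conservative model
    prefix: Optional[str]     # empathetic text to prepend to the assistant's reply

def assess_message(text: str) -> RoutingDecision:
    """Naive signal detection: flag the message if it contains a known risk phrase."""
    lowered = text.lower()
    for category, phrases in RISK_PHRASES.items():
        if any(phrase in lowered for phrase in phrases):
            return RoutingDecision(category, True, CRISIS_MESSAGE)
    return RoutingDecision(None, False, None)

def choose_model(decision: RoutingDecision) -> str:
    # Placeholder model names; the article only says flagged chats can be
    # redirected to "safer" versions of the model.
    return "safer-model" if decision.escalate else "default-model"

if __name__ == "__main__":
    decision = assess_message("I haven't slept in days and I feel invincible")
    print(decision.category, choose_model(decision))
```

In this toy version, a flagged message both triggers an empathetic crisis prompt and switches the conversation to a more conservative model, mirroring the two behaviors the article describes.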
Dr. Jason Nagata of the University of California, San Francisco, notes that even 0.07% of users is a huge number of people: "AI can help in the field of mental health, but it is not a replacement for a real professional."
Professor Robin Feldman from the University of California adds that ChatGPT creates “an overly realistic illusion of communication,” which could be dangerous for vulnerable users.
The new data comes amid several high-profile incidents. In the US, the parents of 16-year-old Adam Raine sued OpenAI, claiming that ChatGPT may have driven the teenager to suicide; it is the first lawsuit of its kind against the company. Another incident occurred in Connecticut, where the suspect in a murder-suicide had posted his ChatGPT conversations, which investigators say appear to have fueled his delusions.
The company acknowledges that even a small number of users with potential mental health issues is a significant challenge. OpenAI is trying to find a balance between the benefits of AI as a support tool and the risks that arise when the technology starts to feel too “human.”

