OpenAI releases first data on mental health issues among ChatGPT users

OpenAI has released data that has sparked widespread debate in technology and psychiatry circles. According to the company's estimates, a proportion of ChatGPT users showed possible signs of mental disorders such as mania, psychosis, or suicidal thoughts.

According to OpenAI, about 0.07% of active users show possible signs of such disorders in a given week, while 0.15% of conversations contain clear indicators of suicidal intent. While the company calls such cases “extremely rare,” experts emphasize that even a small percentage of 800 million users amounts to hundreds of thousands of people worldwide.

After the concerns emerged, the company assembled an international support network of over 170 mental health professionals from 60 countries. They advise developers, helping to create algorithms that recognize danger signals in users' ChatGPT interactions and encourage those users to seek real-world help.

New versions of ChatGPT have also been updated: the system can respond empathetically to reports of self-harm, delusions, or manic states, and in some cases redirect users to “safer” versions of the model.

Dr. Jason Nagata of the University of California, San Francisco, notes that even 0.07% of users represents a huge number of people: "AI can help in the field of mental health, but it is not a replacement for a real professional."

Professor Robin Feldman from the University of California adds that ChatGPT creates “an overly realistic illusion of communication,” which could be dangerous for vulnerable users.

The new data comes amid several high-profile incidents. In the US, the parents of 16-year-old Adam Raine sued OpenAI, claiming that ChatGPT may have driven the teenager to suicide; it is the first lawsuit of its kind. In another incident, in Connecticut, a murder-suicide suspect had posted his ChatGPT conversations, which investigators say fueled his delusions.

The company acknowledges that even a small number of users with potential mental health issues is a significant challenge. OpenAI is trying to find a balance between the benefits of AI as a support tool and the risks that arise when the technology starts to feel too “human.”
