OpenAI first released data on mental health issues among ChatGPT users

OpenAI has released data that sparked widespread debate in technology and psychiatry circles. According to the company's estimates, a small proportion of ChatGPT users showed possible signs of mental disorders: mania, psychosis, or suicidal thoughts.

According to OpenAI, about 0.07% of weekly active users show possible symptoms of such disorders, while 0.15% of conversations contain clear signs of suicidal intent. While the company calls such cases "extremely rare," experts stress that even a small percentage of 800 million users represents hundreds of thousands of people around the world.

After the concerns emerged, the company assembled an international support network of more than 170 mental health professionals from 60 countries. They advise developers, helping to create algorithms that recognize dangerous signals in users' ChatGPT interactions and encourage those users to seek real help.

New versions of ChatGPT have also received updates: the system is able to respond empathetically to reports of self-harm, delusions, or manic states, and in some cases, redirect users to “safer” versions of the model.

Dr. Jason Nagata of the University of California, San Francisco, notes that even 0.07% of users is a huge number of people: "AI can help in the field of mental health, but it is not a replacement for a real professional."

Professor Robin Feldman from the University of California adds that ChatGPT creates “an overly realistic illusion of communication,” which could be dangerous for vulnerable users.

The new data comes amid several high-profile incidents. In the US, the parents of 16-year-old Adam Raine sued OpenAI, claiming that ChatGPT may have driven the teenager to suicide. This is the first such lawsuit. Another incident occurred in Connecticut, where a murder-suicide suspect posted his ChatGPT conversations, which investigators say fueled his delusions.

The company acknowledges that even a small number of users with potential mental health issues is a significant challenge. OpenAI is trying to strike a balance between the benefits of AI as a support tool and the risks that arise when the technology starts to feel too "human."
