OpenAI releases first data on mental health issues among ChatGPT users

OpenAI has released data that has sparked widespread debate in technology and psychiatry circles. According to the company's estimates, a small proportion of ChatGPT users show possible signs of mental health crises, including mania, psychosis, or suicidal thoughts.

According to OpenAI, about 0.07% of weekly active users show possible signs of such conditions, while 0.15% of conversations contain clear indicators of suicidal intent. While the company calls such cases “extremely rare,” experts emphasize that even a small percentage of 800 million users amounts to hundreds of thousands of people worldwide.

In response to these concerns, the company assembled an international support network of more than 170 mental health professionals from 60 countries. They advise OpenAI's developers, helping to build safeguards that recognize warning signs in users' ChatGPT conversations and encourage those users to seek real-world help.

Newer versions of ChatGPT have also been updated: the system can respond empathetically to reports of self-harm, delusions, or manic states, and in some cases redirects the conversation to “safer” versions of the model.

Dr. Jason Nagata of the University of California, San Francisco, notes that even 0.07% of users amounts to an enormous number of people: "AI can help in the field of mental health, but it is not a replacement for a real professional."

Professor Robin Feldman from the University of California adds that ChatGPT creates “an overly realistic illusion of communication,” which could be dangerous for vulnerable users.

The new data comes amid several high-profile incidents. In the US, the parents of 16-year-old Adam Raine sued OpenAI, alleging that ChatGPT may have driven the teenager to suicide; it is the first lawsuit of its kind. In another case, in Connecticut, a murder-suicide suspect had posted his ChatGPT conversations, which investigators say fueled his delusions.

The company acknowledges that even a small number of users with potential mental health issues is a significant challenge. OpenAI is trying to find a balance between the benefits of AI as a support tool and the risks that arise when the technology starts to feel too “human.”
