
According to information released by OpenAI, concerning cases have been observed among some ChatGPT users, Zamin.uz reports.
Specifically, approximately 0.07 percent of users each week show possible signs of mania, psychosis, or suicidal thoughts. According to the BBC, a further 0.15 percent of conversations contain signals indicating intent to self-harm.
OpenAI considers these cases rare. However, with the total number of users now exceeding 800 million, even such small percentages could correspond to hundreds of thousands of people.
For this reason, the company has engaged more than 170 mental health professionals and developed special response algorithms aimed at directing users to professional help. Professor Jason Nagata of the University of California notes that artificial intelligence can sometimes help users, but it cannot fully replace mental health specialists.
Recently, the family of a teenager in California filed a lawsuit against OpenAI, claiming that ChatGPT indirectly reinforced the teenager's dangerous thoughts.
OpenAI expressed regret over the case and announced that it is thoroughly investigating the incident. The company plans to introduce more effective safeguards to prevent such situations.