
Following the tragic death of an American teenager, OpenAI has decided to strengthen safety measures in its ChatGPT chatbot, Zamin.uz reports.
According to Bloomberg, parents will now be able to fully monitor their children's interactions with ChatGPT. Adam Raine, a 16-year-old high school student in California, died by suicide in April 2025.
His parents linked the tragedy to the chatbot and filed a lawsuit against OpenAI and its CEO, Sam Altman. They argue that the AI program alienated their child from the family and risked encouraging young people toward suicide.
In court documents, Adam's parents wrote that ChatGPT had become his closest conversational partner: he confided his fears and mental state to the program rather than to anyone else.
The chatbot later gave him information about methods of suicide, and no safeguards intervened. Following the incident, OpenAI promised to develop a new parental-control system that allows parents to closely monitor their children's queries and conversations on ChatGPT.
In the near future, parents will be able to review all of their children's requests. The company also plans to build a network of licensed psychologists so that young people can receive professional help through the chatbot and obtain reliable guidance on mental health issues. Officials emphasize that the new safety and support measures will significantly reduce online risks for children and teenagers.
At the same time, cooperation between parents and specialists in guiding children's use of chatbots remains essential.