
OpenAI is implementing new measures to ensure the safety of teenagers using ChatGPT. This was reported by Zamin.uz.
From now on, parents will be able to receive warning notifications from the system if their child's mental state deteriorates sharply. The update was announced after a family in the United States filed a lawsuit.
The family alleged that ChatGPT encouraged their 16-year-old son to commit suicide. In response, OpenAI announced that it would roll out enhanced protective measures for teenagers within a month.
The new system lets parents link their account with their child's and disable certain features, such as the saving of memory and chat history. In addition, the system automatically notifies parents when it detects signs of "severe mental distress."
OpenAI emphasized that this process was developed in collaboration with experts in psychology and youth development. According to ChatGPT's usage rules, users must be at least 13 years old.
Teenagers under 18 must register with parental consent. Experts recommend that parents closely monitor their children's online activity and ensure that safety guidelines are strictly followed.
This approach aims to ensure the safety of teenagers in the online environment.