
An American man seriously harmed his health by following the recommendations of the artificial intelligence chatbot ChatGPT, Zamin.uz reports.
On the chatbot's advice, he consumed sodium bromide in place of regular table salt for three months. The man ended up hospitalized with paranoia, hallucinations, and acute mental distress, according to a case report in the Annals of Internal Medicine.
When the patient arrived at the hospital, he reportedly believed that his neighbor was deliberately poisoning him. Tests revealed a dangerously high level of bromide in his blood.
He was therefore admitted to the psychiatric ward, where he underwent a compulsory three-week course of treatment. The man said he had followed ChatGPT's recommendation to cut down on salt as part of a move to a healthier lifestyle.
He bought sodium bromide online and added it to his daily meals; the artificial intelligence never warned him that the compound poses serious health risks.
Doctors at the University of Washington noted that although bromide was once used in medicine, it was withdrawn from medical practice after long-term consumption was found to cause poisoning (a condition known as bromism) and psychosis. Today the substance is used only in certain veterinary medications.
The incident has once again highlighted the need for caution when seeking advice from artificial intelligence. According to The Wall Street Journal, ChatGPT has recently been observed giving some users fantastical, even delusional responses.
In some cases it has invented baseless stories about contact with extraterrestrials or an imminent apocalypse. Medical experts have begun to call this phenomenon "artificial intelligence psychosis."
In effect, the chatbot amplifies a user's mistaken beliefs and imagined ideas. Some people have suffered significant financial losses or broken off relationships with loved ones because of such advice.
OpenAI has acknowledged the problem and promised to limit the chatbot's advice on personal and medical matters. Experts emphasize, however, that the only real protection is to treat artificial intelligence recommendations critically and always consult qualified specialists.