
According to new research, the share of incorrect answers given by AI chatbots has risen significantly, Zamin.uz reports.
NewsGuard analysts tested the chatbots by prompting them with ten false claims related to politics, business, and healthcare. A year ago, chatbots gave incorrect answers in 18 percent of cases on average; that figure has now reached 35 percent.
The worst performer was Pi, the service from the startup Inflection, which gave incorrect information in 57 percent of cases. The rapidly growing Perplexity rose from 0 percent last year to 47 percent today.
OpenAI's ChatGPT gave incorrect answers in 40 percent of cases, Anthropic's Claude in 10 percent, and Google's Gemini in 17 percent.
Experts attribute the trend to chatbots no longer declining to answer: they attempt a response even when the information has not been sufficiently verified. In previous years, chatbots refused to answer in roughly one out of three cases. Researchers say changes in AI training methods are behind this shift.
Models now draw information not only from their training data but also from the internet in real time. However, the presence of links and sources does not guarantee the quality or reliability of the information.
Research by the company Giskard noted another interesting finding: when a chatbot is asked for a short answer, the likelihood of incorrect information rises sharply. The models prefer brevity over accuracy.
In sum, recent analyses show that AI tools still struggle with reliability and fact-checking. The key takeaway for users remains the same: critically evaluate any answer and verify it against trustworthy sources.