Grok AI criticized for insulting content about footballers and football tragedies

The Grok artificial intelligence tool on Elon Musk's social network "X" has come under serious criticism, Zamin.uz reports.
The tool drew the attention of the UK government after it allowed users to generate insulting and inappropriate content about well-known tragedies in football, including the Munich air disaster and the Hillsborough and Heysel disasters. False and insulting posts about Liverpool forward Diogo Jota also provoked public outrage.
According to goal.com, prestigious football clubs such as Manchester United and Liverpool have filed official complaints over the matter.
Representatives of the UK government and the Department for Science, Innovation and Technology described the posts distributed by Grok as "disgusting and irresponsible," emphasizing that such content is wholly contrary to the values and moral standards of the UK.
Under the Online Safety Act, online services must prevent the spread of hateful and insulting material. Grok's operators defended the system, stating that the responses were generated in reply to users' specific queries and that no additional moderation layer exists in the system.
However, the regulator Ofcom warned that companies failing to comply with the rules could face serious legal consequences. The "X" platform is currently conducting an internal investigation into the matter.
Experts are urging Elon Musk to take greater responsibility for curbing harmful activity on his platform.