Meta is seriously concerned about AI-generated content

Meta's independent oversight board has voiced serious concern about the sharp rise in biased and fake content created with artificial intelligence on social networks, Zamin.uz reports.
According to board members, the company is not currently taking sufficient or effective measures against such false information. The issue came to a head over an AI-generated video that spread on the Facebook platform in June of last year.
The video, uploaded by a fake news page based in the Philippines, purported to show the Iranian armed forces inflicting serious damage on the Israeli city of Haifa. Although it gathered nearly one million views in a short time, Meta neither labeled it with a special tag nor removed it.
Currently, Meta relies mainly on users themselves disclosing that content is AI-generated, or on complaints from other people. The company initially judged that the video did not violate its rules because it posed no physical threat.
The oversight board sharply criticized this approach, saying the system for detecting fake videos is very weak, especially during military conflicts and crises. The board warned that fake videos about global conflicts undermine people's ability to distinguish truth from falsehood and breed general distrust of all information.
The board therefore called on Meta to label dangerous AI-generated content actively and promptly. Following the criticism, Meta announced that it would add a special tag to the video within seven days and follow the board's recommendations in similar cases going forward.
It is worth recalling that Meta established this semi-independent oversight board in 2020 to oversee content moderation on its platforms, including Facebook, Instagram, and WhatsApp. Going forward, a combination of technological solutions and human oversight will be needed to resolve this problem.