
A new system called the TrustNet Framework has been proposed with the main goal of enhancing trust in artificial intelligence, Zamin.uz reports, citing the journal Communications of Humanities and Social Sciences. Artificial intelligence has significantly eased human life and increased its efficiency.
However, its impact is not entirely positive: it can also pose security threats, spread misinformation, and cause accidents.
To address these problems, a group of scientists from the USA and France began developing a new concept. The TrustNet Framework consists of three stages.
The first stage identifies problems and scientifically analyzes trust in artificial intelligence. In the second stage, scientists and stakeholders jointly create new knowledge, studying key elements such as reliability, risk, users, application areas, and region. In the third stage, the results obtained are evaluated both practically and theoretically and are presented to society and the scientific community.
Experts in engineering, psychology, ethics, sociology, and law were involved in creating the new system. This approach helps increase trust in artificial intelligence and ensure its safety. As a result, users will be able to use artificial intelligence effectively and safely.