CHAIRMAN: DR. KHALID BIN THANI AL THANI
EDITOR-IN-CHIEF: DR. KHALID MUBARAK AL-SHAFI


‘Essential to verify safety standards in AI model responses before use’

Published: 11 Aug 2025 - 09:58 am | Last Updated: 11 Aug 2025 - 09:58 am

QNA

Doha: The world is undergoing an unprecedented digital boom in AI-powered technologies, which have started to make their way into a variety of life domains, as well as applied and human sciences. This began with medicine, engineering, industry, and innovation, and has extended to education, languages, literature, philosophy, culture, media, and so forth.

With AI's concurrent rapid evolution, information about the technology itself has grown, along with its capability to provide users with the information they require.

Some people rely on well-integrated technological foundations to obtain accurate and reliable information, while others may receive less accurate information that often involves hyperbole, raising undue concerns or false expectations.

Amid this interplay between reality and fabrication, it becomes essential to exercise restraint in order to accurately comprehend reality, evaluate the impact of this technology on societies and their future, and understand the way people engage with it across various facets of their lives.

Numerous scientific studies indicate that AI is now capable of performing multiple tasks with high efficiency, including analyzing big data at ultra-fast speeds and extracting precise results. It possesses advanced abilities to recognize images and sounds with accuracy that sometimes surpasses human performance, offering intelligent recommendations across various digital platforms.

Highlighting the correct standards for handling AI to ensure accurate information is obtained, Principal Software Engineer at the Qatar Computing Research Institute (QCRI), Dr. Hamdy Mubarak told Qatar News Agency (QNA) that the correct use of AI primarily requires verifying the accuracy of the information entered or inquired about.

He pointed out that it is highly important to verify the outputs rather than accept them as they are, stressing that this information should be compared with other reliable sources such as official portals and encyclopedias.

AI tools rely on analyzing data and learning from pre-trained datasets, yet they are not infallible, requiring careful human oversight to monitor and continually review their outputs, Dr. Hamdy highlighted.

He indicated that it is essential to verify the safety standards in AI model responses before their use, to avoid passing on any biases and to ensure that the information provided is up to date. These measures, he said, require assessing the models' performance using standardized test samples to identify their strengths and weaknesses.
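The evaluation Dr. Hamdy describes can be illustrated with a minimal sketch: a model is asked a fixed set of test questions with known answers, and its accuracy is scored. The `query_model` function below is a hypothetical stand-in for a real model API; the names and sample questions are illustrative assumptions, not part of any QCRI benchmark.

```python
def query_model(question: str) -> str:
    # Hypothetical stand-in for a real AI model API call.
    canned_answers = {
        "What is the capital of Qatar?": "Doha",
        "Who wrote Hamlet?": "Shakespeare",
    }
    return canned_answers.get(question, "unknown")


def evaluate(test_samples: list[tuple[str, str]]) -> float:
    """Return the fraction of standardized test samples answered correctly."""
    correct = sum(
        1
        for question, expected in test_samples
        if query_model(question).strip().lower() == expected.strip().lower()
    )
    return correct / len(test_samples)


if __name__ == "__main__":
    samples = [
        ("What is the capital of Qatar?", "Doha"),
        ("Who wrote Hamlet?", "Shakespeare"),
        ("What is the tallest mountain?", "Mount Everest"),
    ]
    # Per-sample comparison against known answers surfaces the model's
    # strengths and weaknesses, as the evaluation described above intends.
    print(f"Accuracy: {evaluate(samples):.2f}")
```

In practice the test set would be a curated benchmark covering the model's intended domain, and scoring would account for paraphrased answers rather than exact string matches.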

Dr. Hamdy further called for safety standards to include preventing access to information that may harm individuals, such as promoting self-harm or violating privacy, or that may detrimentally impact communities, including incitement to violence, hate speech, rumors, and bias or discrimination among people based on religion, nationality, race, or other factors.

He emphasized the importance of refraining from using personal or sensitive data in training or feeding AI applications.

AI performance, he said, varies by language: some models perform strongly in English yet show weaknesses in Arabic.

Some AI models are strong in mathematics and logical reasoning, yet may fall short in literary composition, image generation, or poetry. It is therefore essential to test AI models within their specific domain, verify the accuracy of their information and references, and ensure their use complies with laws while respecting individuals and communities, Dr. Hamdy underlined.

He noted that numerous AI-powered applications can generate fabricated information presented as reliable, leading users to accept it without verification.

He added that these models may produce biased results, making it crucial to identify and address such biases before relying on them.