
Experts are concerned that using AI to enhance images of real events in the Middle East war—making them clearer and more intense—may distort the facts and undermine the credibility of genuine news.
Foreign media report that, as the conflict in the Middle East rages on, online platforms are being flooded with rumors and false information driven largely by artificial intelligence (AI). More worrying than entirely fabricated images are images of real events that have been AI-enhanced to appear clearer or more violent, producing misinformation that diverges from the truth.
A clear example is an image of an American pilot ejecting from a fighter jet and kneeling before a Kuwaiti man. The image is highly detailed and widely circulated online, with several news outlets publishing it. However, upon closer inspection, the pilot's hand shows only four fingers. After foreign media examined the image, they found Google's "SynthID" watermark, indicating the image was generated by AI.
Nevertheless, the event itself did occur: video and satellite imagery confirm that Kuwait shot down three U.S. warplanes on 2 March. The original images were low-resolution and unclear, and when AI was used to sharpen them, it added details that never existed, distorting the original reality.
Evangelos Kanoulas, an AI professor at the University of Amsterdam, said that AI can adjust textures, faces, lighting, or backgrounds to make images appear more realistic than the originals. The technology could be misused to stage scenarios, such as making protests seem more violent or crowds appear larger than they truly are.
In another case, an image of a large fire near Erbil Airport in Iraq following an Iranian attack on 1 March was AI-enhanced. While the fire was real, AI exaggerated the size and vividness of the flames and smoke beyond the original image to make it look more frightening.
The most alarming issue is when AI "hallucinates"—adding new details that never existed. For instance, in the case of an immigration officer in Minneapolis, USA, fatally shooting Alex Pretty in January, the original video shows Pretty holding a "mobile phone," but the AI-enhanced image made the object appear to be a "firearm."
James O’Brien, a computer science professor at the University of California, Berkeley, warned, "Even minor edits can completely change people’s perception of events." The greatest danger is public loss of trust, where even authentic images might be suspected of being fake.