Manipulated or recycled photos and videos frequently circulate during wars and crises — sometimes to deceive, sometimes simply to generate clicks. During the ongoing war between Israel, the United States and Iran, the problem reached a new level: photo agencies were supplied with manipulated or fake images that then ended up in newsrooms across Europe. Some images appear to have been generated with artificial intelligence, while others were digitally altered by humans. Below is what happened, what was learned and how to spot AI fakes.
The SalamPix saga
In early March, Dutch media reported that ANP (Algemeen Nederlands Persbureau), the Netherlands’ largest news agency, had removed roughly 1,000 Iran‑related photos from its database after suspecting some had been manipulated with AI. Two days later, the Dutch broadcaster RTL said its news service, RTL Nieuws, had unknowingly used three of those images online and in its app. After ANP alerted RTL, the images were removed and RTL published an explanation identifying which pictures had been taken down and why. The German weekly Der Spiegel also acknowledged that it had used an AI‑manipulated image in its coverage before realizing it was fake.
In these cases, reputable news agencies had supplied the images. RTL received them via ANP, and Der Spiegel obtained them through dpa Picture Alliance, ddp and Imago Images — all of which had sourced the material from the French agency Abaca Press. The images were ultimately traced back to an Iranian agency, SalamPix, which provided photos to Abaca that were then distributed to multiple international agencies and into newsrooms. In response, many photo agencies blocked SalamPix or issued “kill notices” instructing clients to remove SalamPix images from publication.
How the photo agencies were fooled
In Germany, the “agency privilege” is a legal concept that generally allows media outlets to rely on the authenticity of verified text, image and video material delivered by news agencies. Even global broadcasters like Deutsche Welle routinely rely on external agencies to cover events worldwide. As AI‑generated and AI‑manipulated content becomes more sophisticated and prolific, distinguishing real images from fabricated ones is increasingly difficult. The challenge is both technological and logistical: during fast‑moving breaking news, journalists and agencies must sift through massive volumes of visuals at high speed. Since the start of 2026, DW has received an average of 140,000 images per day from agencies.
“Transparency is one of our highest priorities. Whenever we show AI‑generated content, it must be clearly and unmistakably identifiable as such,” says DW Editor‑in‑Chief Mathias Stamm. “And if we make a mistake — as in the case of using images from the agency SalamPix — we acknowledge it and remain transparent.”
Examples used and identified by DW
After reports about SalamPix emerged, DW reviewed its coverage, removed all SalamPix images from publication and added correction notes beneath the amended articles. One example is a seemingly realistic street scene allegedly showing the aftermath of a missile strike in Tehran: yellow cars, buildings and smoke. On closer inspection, several telltale AI glitches appear: markings on a wall and on a car that look like writing but, when zoomed in, turn out to be nonsensical pseudo‑text rather than Farsi or Arabic; walls and windows that bulge where they should be straight; and vehicles with odd, non‑existent shapes and unrealistic details.
Another image showed a man dressed in black holding a weapon, with a caption claiming that security forces had opened fire on protesters. Zooming in reveals the AI errors: the two shoes differ in size and shape, the person’s shadow and some limb positions do not match the body, and one hand has an anatomically impossible gap between the thumb and fingers. DW and other outlets also found older SalamPix images with deformed fingers, misaligned windows and distorted faces, typical AI‑generation errors from that period.
How to spot AI‑generated or manipulated images
As AI tools improve, fabricated visuals become harder to distinguish from real ones, putting both everyday users and professional journalists at risk. Media organizations, including DW, are investing in staff training to detect and debunk AI manipulation. DW’s Fact Check team also produces media literacy content to help audiences recognize misleading images and videos.
Edited by: Thomas Sparrow, Joscha Weber, Cristina Burack