Social media users have grown better at spotting obvious AI doctoring in celebrity photos or glitchy cityscapes. But amid the war between Israel and Iran, which the US joined, a new form of deception has emerged: fake satellite imagery.
“For satellite images, we can safely say that the majority of people have very limited familiarity,” said Symeon Papadopoulos, an AI researcher in media verification at CERTH. “That makes them particularly prone to being misused, because if you change a small detail in a satellite image, most likely nobody will notice.”
Manipulating satellite imagery is not new — Russia faked satellite photos in 2014 — but experts say such fakes have become far more common during the current conflict. AI tools now make it easy to take a real satellite view from Google Earth or Bing Maps and add or alter features suggesting destroyed infrastructure or strategic damage, promoting narratives favorable to one side. At the same time, limited public access to high‑resolution commercial satellite imagery during the war has widened an information gap, one often filled by fabricated images that exploit the public's general unfamiliarity with how satellite imagery looks and behaves.
“Many people link the complexity involved in capturing a real satellite image to a resilience against those images being faked, but there’s no such link,” said OSINT analyst Brady Africk. Satellite photos “are photos just like any other and can be vulnerable to similar manipulations.”
DW Fact Check examines several prominent examples.
Watermark reveals AI‑generated satellite images
Claim: An X post shared what looked like a satellite image of the Persian Gulf and alleged it showed burning oil fields in Qatar.
DW verdict: Fake
Although Qatar’s LNG facilities were targeted by Iranian missiles, the image in question did not show that aftermath. The image is identifiable as AI‑generated: Gemini’s watermark appears in the lower‑right corner. The picture mimics the texture and coloration of genuine satellite photos, but the fire and smoke are inconsistent with how such phenomena appear at orbital scale. A reverse‑image search suggests the fake was built from a real satellite base layer with AI‑generated fire plumes added. The AI‑detection tool ImageWhisperer flagged the image as likely AI‑generated with 73% confidence, though such tools can yield false positives and should be used cautiously.
Iranian state media shares AI‑generated “after” image of a drone strike
Claim: The Tehran Times posted two satellite images purporting to show an American radar in Qatar “before and after” it was destroyed in an Iranian drone strike.
DW verdict: Fake
The “before” image matches a genuine Google Earth capture — but not of Qatar. The site is a US naval base in Manama, Bahrain, with identical vehicle positions. The “after” image is visibly AI‑generated: building shapes change, architectural lines are inconsistent, and some elements appear artificially added. Iran did attack a US base in the region, and verified satellite imagery from commercial providers (Planet Labs, Airbus) published by reputable outlets shows authentic damage. But the Tehran Times’ “after” frame is not one of those verified captures. An analysis by ImageWhisperer noted repetitive, unrealistic debris patterns and damage that does not conform to the engineering of the radar systems depicted. The case illustrates how a real image can be paired with a fabricated “after” to create a compelling — yet false — visual narrative.
Fake account poses as Chinese geospatial intelligence firm
Claim: An account impersonating Shanghai‑based geospatial firm MizarVision posted images allegedly showing burning oil fields in Qatar.
DW verdict: Fake
The account was fraudulent. MizarVision publishes on Weibo and WeChat, and it publicly warned in February that any accounts on X or other overseas platforms using its name are impostors. The fake X account, created in January and falsely claiming a Portland location, used stolen logos and posted images bearing MizarVision watermarks before being removed. One image from the impostor showed a heavily filtered black‑and‑white “satellite” view of Qatar’s Ras Laffan refinery with repeated, nearly identical plumes of smoke — a sign the explosions were cloned rather than captured by genuine satellite sensors. A Google Earth search shows the underlying tank layouts are real, but the plumes were added artificially.
Caution urged with satellite imagery
As satellite imagery becomes a powerful tool in journalism and warfare, AI‑manipulated visuals pose a growing threat to public understanding. False or altered images can travel fast, shaping narratives long before experts debunk them. Genuine satellite data remains vital for documenting events, but distinguishing it from fabricated material requires vigilance from platforms, media organizations, and users. Developing digital literacy and a healthy skepticism toward dramatic “satellite” revelations is essential.
Edited by: Rachel Baig