A sharp rise in AI-generated videos related to the conflict involving the United States, Israel, and Iran is raising concern among researchers and analysts, who say misleading content is spreading rapidly across social media platforms. Experts warn that creators are increasingly using generative tools to produce realistic but false war footage that attracts massive engagement and advertising revenue.
According to findings reviewed by BBC Verify, fabricated videos and manipulated satellite imagery connected to the conflict have collectively received hundreds of millions of views online. Analysts say these clips often appear convincing enough to mislead audiences, particularly during fast-moving military developments when people are searching urgently for updates.
Digital media researcher Timothy Graham of Queensland University of Technology said the availability of new AI tools has dramatically lowered the barrier to producing realistic conflict imagery. Tasks that once required professional video production, he noted, can now be completed within minutes, allowing misinformation to spread at unprecedented speed.
Several fabricated clips have circulated widely online, including a viral video falsely claiming to show missile strikes in Tel Aviv and another depicting the Burj Khalifa engulfed in flames. Researchers say such content can undermine public trust and complicate efforts to verify authentic footage during major international crises.
The spread of manipulated satellite imagery has also become a growing concern. One widely shared image claimed to show heavy damage at a United States naval facility in Bahrain, but analysts later determined it had been created from publicly available imagery taken in a previous year and altered with AI tools. Experts say advances in image-generation software are making it increasingly difficult to distinguish authentic visuals from fabricated ones.
Social media companies are beginning to respond. The platform X recently announced that it would temporarily suspend monetization privileges for users who post AI-generated conflict footage without proper labelling. Researchers believe many accounts sharing such content are attempting to profit from engagement-based revenue programmes that reward high view counts and interactions.
Experts caution that the rapid expansion of generative AI tools is creating long-term challenges for the reliability of information online. They say the combination of automated content production and engagement-driven monetization systems has accelerated the spread of misleading material, making it harder for audiences to separate verified reporting from fabricated imagery during major global events.
