Fake news emerges as parallel battlefield in Iran war

Studies, fact-checkers and lawmakers point to multi-sided information war as synthetic content and competing claims reshape perception

Pratyaksh Srivastava
As the conflict involving Iran unfolds, a growing body of research and reporting suggests that information itself has become a central battleground, with competing narratives, restricted access and rapidly evolving technologies shaping how events are perceived.

A preprint study titled Cognitive Warfare: Definition, Framework, and Case Study, published on 6 March 2026 and authored by academics affiliated with the US Air Force and the US Air Force Academy, stated that modern conflicts increasingly involve “cognitive warfare”, referring to efforts to influence public thinking through strategic messaging.

The study said information about military activity should be understood as “another domain of conflict, much like air, land and sea”, where actors attempt to influence audiences beyond the immediate battlefield.

From 'rally ’round the flag' to contested narratives

The study noted that during earlier US military engagements in Vietnam, Iraq and Afghanistan, journalists often relied heavily on official sources and at times “rallied ’round the flag”, amplifying government narratives about military action.

In the current conflict, however, the information environment appears more fragmented, with tensions between governments and media shaping coverage.

It cited a 14 March exchange in which Brendan Carr, chairman of the Federal Communications Commission, responded to a post by Donald Trump criticising media reporting on US involvement in Iran, warning broadcasters over licence renewals tied to “public interest”.

In the original post, Trump said, “The People of our Country understand what is happening far better than the Fake News Media!”

The study said such developments reflect a “hostile relationship” between political leadership and sections of the media, which forms part of the broader information environment around the conflict.

Limited access and verification gaps

The study emphasised that restricted access to Iran for many journalists complicates reporting and verification.

It said that while information continues to emerge through citizen reporting and social media, it remains “hard to verify and interpret”.

As an example of real-world events entering the information ecosystem, the study referenced imagery of mass graves following a bombing at a girls’ school in Minab, noting how such visuals circulate globally but are subject to interpretation and framing.

The study advised readers to examine what information sources have access to and what may be missing from a given report, warning against extrapolating conclusions from limited data.

The study stated that media produced by diaspora communities can provide contextual insight but may also reflect political or strategic positions.

It advised readers to consider how personal experience and identity may shape the way events are presented, without necessarily requiring audiences to verify every claim independently.

It added that different actors—including politicians and journalists—may emphasise different aspects of the conflict, such as military progress, civilian impact or diplomacy, depending on their objectives.

AI-generated misinformation expands

Separately, BBC Verify reported that AI-generated videos, fabricated satellite imagery and manipulated visuals related to the conflict have accumulated hundreds of millions of views online.

Timothy Graham of Queensland University of Technology said “the scale is truly alarming”, adding that AI tools have significantly lowered the barrier to producing convincing conflict footage.

“What used to require professional video production can now be done in minutes with AI tools,” Graham said.

BBC Verify cited examples including a widely shared video falsely depicting missile strikes on Tel Aviv and another showing Dubai’s Burj Khalifa in flames, both of which were AI-generated.

The report also identified fabricated satellite imagery claiming damage to a US naval base in Bahrain, created using publicly available images and AI tools.

Mahsa Alimardani of the Oxford Internet Institute noted that such content can undermine trust in verified information and make it harder to document real events.

Henry Ajder, an AI expert, stated that the availability of tools capable of producing realistic manipulations is “unprecedented”, while Victoire Rio said the spread of such content has accelerated because “the pipeline onto social media can now be almost fully automated”.
Monetisation and platform dynamics

BBC Verify reported that some creators are using AI-generated conflict content to generate engagement and revenue through platform monetisation systems.

Graham said “once you're in, viral AI-generated content is basically a money printer”.

The platform X said it would suspend monetisation for accounts sharing AI-generated conflict content without labels, according to the report.

However, Graham also said “engagement-driven monetisation and accurate information are fundamentally in tension”.

In the United Kingdom, MPs criticised social media platforms for failing to curb misinformation during a parliamentary hearing.

According to The Guardian, Dame Chi Onwurah said MPs had observed “fake photos of burning US aircraft carriers” and “fake evidence” linked to missile attacks in Iran.

She told platform representatives that their efforts to address online harms were “not working”.

George Freeman said a deepfake video falsely showing him changing political affiliation was “seriously disruptive”, while Freddie van Mierlo said he had found examples of AI tools being used to generate harmful manipulated images.

Rethinking media literacy

While media literacy is often promoted as a solution to misinformation, the study said it can be time-consuming and impractical during fast-moving conflicts.

Instead, it suggested a simplified approach: assume that information is contested and ask key questions such as why a particular piece of information is being presented and what may be omitted.

The convergence of restricted access, strategic messaging, AI-generated content and platform incentives has created what researchers describe as a complex and contested information environment.

The Air Force-affiliated study said readers should treat information not as neutral but as something “someone wants a reader to see”.

As the conflict continues, the study concluded that audiences are not merely observers but participants in how information is circulated and interpreted, making the information domain a central component of modern warfare.