Are we being overrun by ‘AI slop’?
Ashok Swain on how synthetic AI-generated output is quietly taking over our social media feeds

The flood of imitative posts, automated replies and machine-remixed images now saturating social platforms has a new name — ‘AI slop’. It is not an accidental side-effect of new technology but an outcome of corporate design. Over the past several years, platforms such as Meta have quietly shifted focus from human-generated content to large-scale synthetic output, slurry-like content that often looks real enough but is not — hence ‘AI slop’.
This is not a small tweak — it is being presented as a new epoch of social networking. The company’s AI personas, Meta AI chatbots and new synthetic ‘Discovery feed’ are all meant to inject huge volumes of machine-generated material into Facebook and Instagram. A platform built around the promise of online community and the lure of sharing our lives with that community is now set on inundating our screens with machine-generated content.
The impetus is straightforward. Human posting is declining. Meta has acknowledged that original user-generated posts have been dropping for years. The intimate networks that once drove Facebook’s dominance — family photos and friend updates — no longer produce the engagement that advertisers demand.
Platforms such as TikTok have revealed that what really drives digital attention is not human connection but frictionless algorithmic entertainment. If people will not generate enough content to fuel the machine, then the machine will simply generate it for them.
This shift aligns perfectly with advertising logic. A larger volume of cheap and quickly produced AI content creates more opportunities for engagement. The goal is no longer to show what our friends are doing but to deliver whatever the algorithm predicts will keep us scrolling — AI-generated reels, text posts written by chatbots and synthetic images tuned for virality.
In this system, quantity overrides quality. The deluge of machine-generated filler is more profitable than a smaller number of thoughtful human contributions.
The result is a disorienting user experience. Social feeds are increasingly being filled with artificial travel photos, AI-generated influencers and inspirational quotes written by chatbots. Countless TikTok videos and Instagram reels now use AI narration, stitched stock footage and scripted clickbait phrasing.
On X, one can keep scrolling through timelines overrun by bots and what security researchers describe as ‘zombie content’ — machine-generated posts designed to harvest clicks or impersonate news. ABC News last year documented how X is struggling with an AI spam deluge, including bots generating endless fake stories, fabricated celebrity photos and AI-generated political memes.
Built on the promise of online intimacy and the lure of sharing curated glimpses of our lives (graduation ceremonies, family dinners, a new job, car, house or travel destination), ‘social’ media may soon not be so social anymore as manufactured content overwhelms our feeds. It will keep us hooked with machine-generated content built around machine-predicted behaviour.
The shift also creates an environment in which manipulation becomes far easier, and evidence shows how quickly AI-generated content can be weaponised.
AI-driven synthetic influence campaigns are growing rapidly. These rely on networks of AI-generated personas, sometimes called ‘soul bots’, capable of producing convincing political messaging at an enormous scale. These are not the clumsy bots of a decade ago. They are coordinated AI-driven actors able to build rapport, adjust language style, and mimic human emotional cues.
In Spain, researchers documented waves of automated pro-migrant and anti-migrant messaging during the 2023 election cycle, much of it produced by AI systems that amplified fringe narratives that would otherwise not have gone viral. Similar AI-aided misinformation fuelled anti-immigrant unrest in Spain in the summer of 2025.
In India, networks of AI-generated political avatars have been found producing divisive propaganda in multiple languages, timed to influence state and national elections.
Political parties, particularly the BJP, have also released AI-generated videos of leaders speaking languages they do not actually speak, videos that many viewers cannot distinguish from the real thing. AI-generated content is increasingly being deployed across social media to portray Prime Minister Narendra Modi in favourable, attention-grabbing ways — from hyperreal deepfake ads to automated narratives that inflate his public image.
During the 2024 Taiwan elections, thousands of AI-generated videos and manipulated images spread across TikTok, LINE and Facebook, many originating abroad and designed to confuse, polarise or demobilise voters.
Across Latin America, from Mexico to Colombia, AI-generated celebrity impersonations, automated news anchors and synthetic political endorsements are swaying public opinion. The line between organic political communication and algorithmically engineered persuasion has blurred, raising concerns about distortion of the democratic discourse and accountability.
Far from attempting to contain malicious synthetic activity, social media platforms like Meta are introducing their own synthetic accounts and content streams. With the platform itself filling the feed with AI-generated engagement boosters, malicious actors no longer need to hide in the noise; the noise hides them.
Synthetic content is also taking a toll on our psychological wellbeing. Studies already show that social media distorts self-image by exposing users to unrealistic depictions of life. AI multiplies this effect dramatically. The rising flood of AI-generated ideal families, fit bodies, travel fantasies and aspirational home interiors is creating a digital universe designed to look better than reality.
These polished hallucinations increase anxiety, deepen alienation and generate impossible expectations. For younger users, especially teenagers, the distinction between real and synthetic influencers is collapsing, with measurable consequences for mental health.
There is also a deeper cultural cost. Social platforms are now part of civic infrastructure. They shape public conversation, collective memory and political understanding. When these spaces are dominated by synthetic material, they stop reflecting lived experience. Algorithmic simulation drowns out human voices, especially voices that do not conform to algorithmic patterns of virality.
The quiet takeover of our feeds by AI content signals a turning point. It is changing not only what we consume but how we experience one another. It is normalising synthetic verisimilitude and overshadowing human expression.
Ashok Swain is a professor of peace and conflict research at Uppsala University, Sweden.
