More than 20% of videos shown to new YouTube users are ‘AI slop’, study finds

Lead: A new study by the online video company Kapwing, together with follow-up checks, suggests that more than one in five videos served to brand-new YouTube accounts are low-quality, AI-generated content designed to harvest views. Researchers at Kapwing analysed 15,000 of the world’s most popular channels and created a fresh account to sample recommendations, finding a heavy presence of so-called “AI slop” in early feeds. The material ranges from cartoonish children’s shorts to decontextualised disaster clips and appears highly monetised despite its low editorial value. The researchers and independent observers warn that the trend is fuelling a fast-growing industry built around templated, algorithm-friendly videos.

Key takeaways

  • Kapwing analysed 15,000 leading channels (the top 100 in every country) and identified 278 channels that the researchers classify as exclusively “AI slop.”
  • The 278 AI-focused channels have accumulated roughly 63 billion views and about 221 million subscribers, with an estimated annual revenue near $117 million (≈£90m).
  • When researchers made a new YouTube account, 104 of the first 500 recommended videos were labelled AI slop; roughly one-third of the 500 were in the broader “brainrot” category of low-quality attention-grabbing clips.
  • AI slop channels are globally distributed: Kapwing reported ~20 million followers in Spain, 18 million in Egypt, 14.5 million in the US and 13.5 million in Brazil among trending AI channels.
  • Individual channel examples: Bandar Apna Dost (India) — 2.4bn views and estimated annual revenue of up to $4.25m; Pouty Frenchie (Singapore) — 2bn views and an estimated ~$4m; Cuentos Facinantes (US) — 6.65m subscribers; The AI World (Pakistan) — 1.3bn views.
  • Guardian reporting earlier this year found nearly 10% of YouTube’s fastest-growing channels fall into the AI-slop category, despite platform efforts to limit inauthentic content.

Background

AI-generated video content has moved from experimental tools into a scalable production model in under two years. Advances in text-to-video, generative editing and synthetic audio let creators assemble short clips at low cost and fast pace, and platforms’ recommendation systems often reward rapid engagement. That combination has encouraged individuals and small teams across many countries to build channels that prioritise attention-capturing hooks over factual or narrative substance. Kapwing’s survey targeted the most-watched channels globally to measure how widespread those techniques have become among high-performing creators.

The economic context matters: in many middle-income countries, earnings from monetised social-video channels can exceed local median wages, creating a financial incentive to produce volume-driven content. The production ecosystem is semi-organised, with producers sharing tips on Telegram, Discord and other forums, and paid instruction marketed to would-be creators. Platforms have community guidelines and advertiser policies, but enforcement is uneven and monetisation pathways are not always transparent, which allows some low-quality channels to generate sizeable returns.

Main event

Kapwing’s cross-country scrape of top channels and a hands-on experiment with a new YouTube account form the core findings. The company reports 278 channels that appear entirely composed of templated, AI-assisted videos; collectively, those channels recorded roughly 63 billion views and 221 million subscribers. Revenues are modelled from public view counts and typical ad rates, producing a headline estimate of about $117 million per year for the group.
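The views-times-ad-rate approach described above can be sketched in a few lines. Note the caveats: the RPM (revenue per 1,000 views) figure below is an illustrative assumption, not Kapwing’s actual model parameter, and real payouts vary widely by geography, format (Shorts vs long-form) and monetisation status.

```python
# Hedged sketch of a views-times-RPM revenue model, as loosely described
# in Kapwing's methodology. The RPM value is an illustrative assumption,
# not a figure from the study; actual earnings differ with geography,
# ad formats and policy enforcement.

def estimate_annual_revenue(total_views: float, channel_age_years: float,
                            rpm_usd: float = 1.5) -> float:
    """Annualise lifetime views, then apply an assumed RPM (USD per 1,000 views)."""
    views_per_year = total_views / channel_age_years
    return views_per_year / 1000 * rpm_usd

# Example: a channel with 2.4bn lifetime views accumulated over ~2 years
print(f"${estimate_annual_revenue(2.4e9, 2):,.0f} per year")  # $1,800,000 per year
```

Such point estimates are highly sensitive to the assumed RPM and channel age, which is why Kapwing’s per-channel figures are reported as ranges and flagged as model-based.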

The fresh-account test showed how quickly recommendations expose new users to this content: 104 of the first 500 recommended clips were classified as AI slop, and roughly one-third of the first 500 fell into the broader low-quality category dubbed “brainrot”. The sampled videos include absurdist children’s shorts, repetitive clip compilations, and melodramatic or decontextualised footage of real events set to calming soundtracks — formats designed to loop attention rather than to inform or entertain.

Several high-performing channels identified by Kapwing illustrate the format’s range. Bandar Apna Dost (India) features surreal, action-oriented animations and has 2.4 billion views; Pouty Frenchie (Singapore) posts short, child-directed vignettes and has 2 billion views; Cuentos Facinantes (US) combines simple cartoon stories with 6.65 million subscribers; The AI World (Pakistan) publishes AI-generated shorts of floods and other disasters and has 1.3 billion views. Kapwing estimated multi-million-dollar potential earnings for top performers, though those are model-based figures.

Researchers and journalists say the content is typically produced to hit metrics the recommendation system favours — high click-through rates, strong early retention and repeat viewing — rather than to meet editorial standards. That optimisation often produces videos that are decontextualised, emotionally blunt, and easy for global audiences to consume regardless of language or culture.

Analysis & implications

The growth of AI-slop channels highlights a gap between platform policy and platform incentives. Recommendation algorithms reward engagement signals irrespective of production method, so cheaply produced, repeatable formats can scale quickly. That creates pressure for platforms to strengthen signals that favour substance over synthetic novelty, but doing so risks false positives that suppress legitimate creators experimenting with generative tools.

Economically, AI slop forms a low-cost supply that can displace more expensive, original content in discoverability. If ad revenue continues to flow to high-volume, low-quality channels, creators who rely on deeper storytelling or original reporting may struggle for visibility. Advertisers and brand-safety frameworks will face new questions: should monetisation be restricted by provenance (human-authored vs generative) or by measurable quality signals?

There are societal risks beyond creator economics. Decontextualised or manufactured footage of disasters and traumatising scenes — sometimes set to soothing audio — can mislead or desensitise viewers and complicate journalistic coverage of real events. For children’s content, repetitive, algorithm-optimised clips may crowd out age-appropriate programming and raise regulatory scrutiny from child-protection advocates. Internationally, channels originating in a handful of production hubs can reach vast audiences, amplifying cultural memes and tropes without local editorial oversight.

Comparison & data

Channel | Approx. views | Subscribers | Estimated annual revenue (Kapwing)
Bandar Apna Dost (India) | 2.4bn | n/a | Up to $4.25m
Pouty Frenchie (Singapore) | 2bn | n/a | ~$4m
Cuentos Facinantes (US) | n/a | 6.65m | n/a
The AI World (Pakistan) | 1.3bn | n/a | n/a
All AI-slop channels (survey) | 63bn | 221m | ~$117m

The table summarises Kapwing’s headline numbers and channel snapshots drawn from the company’s analysis and the Guardian’s reporting. The data illustrate how a relatively small set of prolific channels can account for a significant share of views and monetisation within the sampled population. However, platform-scale denominators (total YouTube views per year, share of AI-originated views across the whole site) remain undisclosed by Google and are therefore not available for a full proportional assessment.

Reactions & quotes

Platform responses emphasise policy and enforcement while acknowledging generative tools’ dual-use nature. YouTube stressed that generative systems are a content tool, and that policy compliance remains central to removal decisions. The company reiterated a commitment to surfacing high-quality content but did not provide public breakdowns of how AI-origin content is treated in ranking systems.

“Generative AI is a tool, and like any tool it can be used to make both high- and low-quality content. We remain focused on connecting our users with high-quality content, regardless of how it was made.”

YouTube spokesperson (official statement)

Independent researchers and journalists who track the trend point to large, informal communities that trade strategies for producing algorithm-friendly clips. They describe markets for templates, thumbnail tactics and soundtracks that maximise repeat viewing, and note that many creators in this space are operating where the financial upside outweighs local wages. Observers caution that ecosystems of advice and paid courses can distort incentives and attract scammers.

“There are these big swathes of people on Telegram, WhatsApp, Discord and message boards exchanging tips and ideas [and] selling courses about how to sort of make slop that will be engaging enough to earn money.”

Max Read (journalist)

Researchers who study digital rights see the formats’ style — absurd, hyperbolic and plot-light — as a feature that lowers barriers for casual viewers and increases virality. Such features can help explain why the most-viewed AI slop channels attract massive audiences despite limited storytelling. Those analysts warn that the combination of simplicity and repetitive hooks is what makes these channels so effective at scale.

“Its popularity most likely stems from its absurdity, its hyper-masculine tropes and the fact that it lacks a plot, which makes it accessible to new viewers.”

Rohini Lakshané (researcher on technology and digital rights)

Unconfirmed

  • The exact share of total YouTube views represented by AI-generated content remains unknown because Google does not publish aggregate figures for AI-originated views.
  • Revenue estimates for individual channels are modelled from public metrics and typical ad rates; actual earnings may differ due to policy enforcement, CPM variance and undisclosed partnership deals.
  • Attribution of some content to purely AI generation versus mixed human–AI workflows is uncertain without access to creators’ production records.

Bottom line

The Kapwing findings and complementary reporting expose a rapidly scaling class of low-cost, generative-video channels that secure substantial reach and monetisation. Platforms’ engagement-driven recommendation systems can reward templated content even when that content lacks editorial substance, creating incentives for volume and replication. Addressing the challenge will require clearer transparency on how generative content is treated in ranking and monetisation, sharper quality signals in recommendation systems, and possibly new advertiser safeguards.

For viewers and policymakers, the phenomenon raises questions about discoverability, media literacy and child protection, as algorithmic exposure may normalise decontextualised or sensationalised content. Monitoring by independent researchers, greater platform transparency, and targeted policy updates could help rebalance incentives so that originality and trustworthy information retain visibility on major video platforms.
