
The AI Slop of 2025: How Low-Quality AI Content Took Over the Internet

by Gilbert Pagayon
December 30, 2025

The internet has a new villain, and it’s everywhere you look. From Facebook feeds cluttered with bizarre images of Jesus made of shrimp to YouTube channels pumping out synthetic cat soap operas, 2025 became the year that “AI slop” went mainstream. This low-quality, algorithmically generated content has flooded every corner of the digital world, earning the dubious honor of being named Merriam-Webster’s Word of the Year and sparking urgent conversations about the future of online spaces.

But what exactly is AI slop? And why has it become such a pervasive problem?

What Is AI Slop?

AI slop refers to the deluge of low-quality, unwanted AI-generated content that clogs social media platforms, search engines, and shopping sites. It’s the product of generative AI tools like ChatGPT, DALL-E, and Midjourney being used to create content at scale with minimal human oversight or creative input.

The term has been circulating online since the early 2020s, but data shows it reached a new peak this year. According to online media company Meltwater, mentions of “AI slop” across the internet increased ninefold from 2024 to 2025. Negative sentiment hit a high of 54 percent in October.

Even more alarming, AI-generated articles now make up more than half of all English-language content on the web, according to search engine optimization firm Graphite. The phenomenon has become so significant that Australia’s national dictionary also selected “AI slop” as its Word of the Year for 2025.

The Viral Moments That Defined AI Slop

The rise of AI slop has been marked by several bizarre cultural moments that captured the internet’s attention. In 2024, Facebook was briefly flooded with AI-generated images of Jesus fused with crustaceans, a phenomenon that became known as “Shrimp Jesus.” This was quickly followed by other hallmarks of the genre: videos of elderly women claiming to celebrate their 122nd birthday and mini soap operas featuring the dramatic lives of cats.

By 2025, the flood had grown more uncanny and more explicitly copyright-violating. Spring brought the advent of “Ghiblification,” a trend where users rendered images in the style of Hayao Miyazaki’s Studio Ghibli. Everyone from Nayib Bukele to the White House participated, with some even creating Ghibli-style images of deportations.

The trend was enabled by OpenAI’s release of an image generator powered by GPT-4o. Sam Altman, OpenAI’s chief executive, jumped on the bandwagon by Ghiblifying his X profile. Ironically, Miyazaki himself has said about artificial intelligence: “I would never wish to incorporate this technology into my work at all. I strongly feel this is an insult to life itself.”

Other viral AI slop moments followed throughout the year. Videos of AI-generated obese people participating in the Olympics circulated widely. Pressure cookers exploding became a popular theme. And Ibrahim Traoré, the leader of the military junta in Burkina Faso, became the centerpiece of an AI slop cult featuring videos of Justin Bieber supposedly singing on the streets of Ouagadougou.

The Business of AI Slop

While many view AI slop as digital pollution, for others it represents a legitimate business opportunity. The Guardian spoke with Oleksandr, an AI YouTube creator based in Chernivtsi, Ukraine, who turned to AI content creation after retiring from professional volleyball.

“I was deep in debt,” Oleksandr explained. “My girlfriend had left me, my parents were living in occupied Mariupol.” He started joining Telegram channels and watching YouTube videos on how to make money from the platform.

His first efforts were music channels playing AI-generated music over images of “sexy AI girls.” He had seven channels covering different genres: retrowave, rock, jazz, and more. At first, he put significant effort into each video, but quickly realized that quality didn’t matter on YouTube. “It was a conveyor belt, with fairly low quality,” he said.

At the high point of his business, Oleksandr had a team of 15 people operating 930 channels, 270 of which he successfully monetized. They cleared up to $20,000 a month at one point, though YouTube often blocked or took down channels for unclear reasons.

His content evolved over time. One profitable niche was life stories: long anecdotes written by ChatGPT or Gemini, overlaid with visuals. “Grandparents listen to it before bed, or while walking in the park,” he noted.

Another lucrative niche was videos on “vulgar adult themes” such as erotic tractors, which were in great demand but bordering on what YouTube allows. These channels were riskier to produce but easier to monetize because they had less competition. “With erotic ones it’s easier, because they are blocked more often, so not many people want to bother and periodically recreate channels,” Oleksandr said.

However, Oleksandr estimates that only the top 5 percent of AI content creators ever monetize a video, and only 1 percent make a living from it. YouTube has become more aggressive with takedowns, forcing him to constantly recreate channels. His team now takes in closer to $3,000 monthly.

“To make money here, you need to spend as little as possible,” he said. “YouTube is basically just clickbait and sexualization, no matter how morally sad it is. Such is the world and the consumer.”

The Global Economy of AI Content

“AI slop creator” is now a profession, and these creators come from everywhere: the United States, India, Kenya, Ukraine. Arsenii Alenichev, who studies the production of images in global health, noticed a flood of “AI poverty porn” on major stock photo sites earlier this year. Many of the creators appeared to have eastern European usernames.

“I wouldn’t be surprised if these are just artists that are trying to generate extreme images of everything, hoping that someone would buy them,” Alenichev said.

In some ways, AI tools have enabled a strange globalization of content. The barriers to entry are remarkably low: no plot, no exposition, just surreal imagery optimized for engagement. This has created opportunities for people in economically disadvantaged regions to participate in the attention economy, even if the content they produce is widely considered worthless.

The User Experience Problem

For product designers and user experience professionals, the proliferation of AI slop represents a fundamental misunderstanding of what users actually want. Kate Moran, vice president of research and content at Nielsen Norman Group, says there’s been enormous pressure on designers to integrate AI features everywhere, even when it doesn’t make sense.

“In the design space, there’s a lot of pressure to show the shareholders, ‘Look, we put AI in our product,'” Moran told Euronews Next. “This is technology-led design, starting with the tool, and then trying to look for a problem that potentially that tool could solve.”

She gave the example of Meta introducing an AI search function on Instagram last year that replaced the traditional search bar. “They backpedaled really fast because I’m sure people were furious,” she said. “You believe a search bar does a certain thing, and then all of a sudden, when you start typing in there, you’re talking to an AI chatbot and you didn’t want that. That’s a bad experience.”

Meta has been particularly active in embracing AI tools and AI-generated content. In response to OpenAI’s Sora app, Meta introduced “Vibes” to European markets in November: a platform described as “a brand new feed where you can create and share short-form, AI-generated videos, remix content from others, and explore a world of imaginative possibilities.”

But according to internal data seen by Business Insider, Vibes hasn’t made much of a splash in Europe, bringing in just 23,000 daily active users in the first weeks after its launch. The largest audiences were found in France, Italy, and Spain, with 4,000 to 5,000 daily active users in each country.

This year also saw AI-focused consumer hardware receive scathing reviews. Products like the Humane AI Pin were criticized by users and executives alike, including Logitech’s CEO Hanneke Faber, who said in an interview with Bloomberg: “What’s out there is a solution looking for a problem that doesn’t exist.”

More Than Just Bad Content

While it’s easy to dismiss AI slop as merely low-quality content, philosopher Thai Vo-Nhu argues in the American Philosophical Association blog that we’re witnessing something more profound: a crisis of authenticity.

“To treat this merely as a quality control issue is a mistake,” Vo-Nhu writes. “We are witnessing a crisis not of quality but of authenticity.”

Vo-Nhu draws on Confucian philosophy to explain the deeper problem with AI slop. He compares AI-generated content to what Confucius called the “Village Worthy”: someone who appears virtuous but lacks any genuine moral core. The Village Worthy follows visible customs and says the right things at the right times, but is preoccupied only with public opinion, adjusting words and actions to please audiences.

“He is a thief because he is an ‘appearance-only’ hypocrite,” Vo-Nhu explains. “The Village Worthy has no secret self to hide. He is a chameleon, but not because he is hiding a face. In fact, he has no face to unmask, no internal moral core to betray.”

Large language models, Vo-Nhu argues, are the mechanization of this character type. They produce the forms of virtue (or creativity, or empathy, or intimacy) without any corresponding substance. They’re designed to exploit the human tendency to attribute intent to language, mass-generating pleasing, empty content.

The Real-World Consequences

The consequences of AI slop extend beyond mere annoyance. Vo-Nhu recounts reading about a writer who used Passare, a cloud platform used by funeral directors, to draft a notice for a parent. The AI generated a sentence about how the deceased found “joy in the gentle keys of her piano.”

Except the deceased didn’t own a piano or play one. The machine made it up because “grandmother” and “piano” are statistically adjacent in its training data.

“When we allow the algorithm to perform our rituals, we are filling the village with worthies who smile and nod and generate agreeable content on demand, while the substance of our lives, the un-optimizable fact of feeling and being, leaks away,” Vo-Nhu writes.

The same ethical line connects banal AI slop to more serious harms like deepfake pornography. Both represent what Vo-Nhu calls “appearance-only fabrication”: content that maintains the icon or likeness of something real while severing the indexical connection to actual human experience.

The Path Forward

Despite the overwhelming presence of AI slop, there are signs that the tide may be turning. Backlash has led some platforms to introduce features allowing users to limit AI-generated content. Pinterest and YouTube now offer options to filter out such material.

According to Moran, the most useful AI features are often the least flashy. She cites Amazon’s AI-generated summary of product reviews as an example of a practical tool that enhances user experience without fundamentally changing how people interact with the service.

“The things that this technology can do that are really useful, that I think actually are changing the products that we design and the way that we work, are not the sexy things,” Moran said. “Being able to give a quick qualitative summary of how people feel about that product is really valuable and it requires zero interaction. All people have to do is read it.”

Daniel Mügge, a researcher at the University of Amsterdam studying European governance of AI through the RegulAite project, believes tech companies have been engaged in a misguided race. “I think what is clear and should be genuinely worrying is that a number of these companies have been engaged in a kind of race among themselves,” he told Euronews Next.

Mügge argues that generative AI has received disproportionate attention considering its relatively limited economy-wide impact. He suggests the European Union would be better off investing in AI that tackles specific societal issues, such as robotics or manufacturing.

“We see that quite a bit of AI investment actually ends up in applications that make society a worse and not a better place,” Mügge said, citing AI tools in advertising as an example. “That’s the sort of investment that I think we actually don’t need, and if we don’t have it here in Europe, for example, it’s a good thing rather than a bad thing.”

A Different Future


Making room for smaller companies that make useful products which may not grab as much attention could be a good way for tech ecosystems in Europe to carve out a separate path, according to Mügge.

“I think there’s a lot of scope for relatively smaller, much more specialized companies to play a meaningful role in that, and then you don’t need to worry so much that there is no European competition to OpenAI,” Mügge said.

Both Mügge and Moran agree that the AI hype seems to be giving way to more intentional product design and strategy focused on actual impact rather than flashy features.

“Nobody knows what’s going to be next or where the technology is going to evolve from here,” Moran said. “Right now, these smaller, more narrowly-scoped features are a lot easier for people to use, and even if they’re not flashy or sexy, they can make a really big difference in people’s lives.”

The Question of Responsibility

A YouTube spokesperson responded to questions about AI slop by stating: “Generative AI is a tool, and like any tool it can be used to make both high- and low-quality content. We remain focused on connecting our users with high-quality content, regardless of how it was made. All content uploaded to YouTube must comply with our community guidelines, and if we find that content violates a policy, we remove it.”

But this response highlights a fundamental tension in how we think about AI-generated content. Is AI truly just a neutral tool, like a pen or a typewriter? Or does its specific architecture, designed to predict and produce forms that please audiences without any reference to truth or understanding, make it something qualitatively different?

As Vo-Nhu points out, a pen doesn’t autocomplete a forgery. A typewriter doesn’t hallucinate a believable lie to please its user’s ego. Large language models are designed to exploit human psychology in ways that traditional tools are not.

Looking Ahead to 2026


As 2025 draws to a close, the question remains: Is the internet ready to grow up? The proliferation of AI slop has forced a reckoning with how we design, consume, and value online content.

The optimistic view is that we’re moving past the initial hype cycle and toward more thoughtful integration of AI tools. Platforms are beginning to respond to user backlash. Designers are pushing back against pressure to add AI features everywhere. And there’s growing recognition that “boring” AI, practical tools that solve specific problems, may be more valuable than flashy generative features.

The pessimistic view is that the economic incentives driving AI slop remain firmly in place. As long as platforms reward engagement over quality, as long as there are people desperate enough to operate 930 YouTube channels simultaneously, and as long as tech companies feel pressure to demonstrate AI integration to shareholders, the flood of low-quality content will continue.

What’s clear is that AI slop is more than just an annoyance or a quality control issue. It represents a fundamental challenge to authenticity, creativity, and human connection in digital spaces. How we respond to this challenge will shape the internet, and perhaps our culture more broadly, for years to come.

The Village Worthy, as Confucius warned, is the thief of virtue. He’s more dangerous than the open villain because he confuses the community and lowers standards for everyone. In 2025, we’ve mechanized the Village Worthy and given him the keys to our digital commons.

The question for 2026 is whether we’ll finally take them back.


Sources

  • Euronews: 2025 was the year AI slop went mainstream. Is the internet ready to grow up now?
  • The Guardian: From shrimp Jesus to erotic tractors: how viral AI slop took over the internet
  • American Philosophical Association Blog: The Thief of Virtue: “AI slop” is more than just bad content
Tags: AI Ethics, AI Slop, Artificial Intelligence, Digital Authenticity, Generative AI