Spend some time scrolling on social media these days and you are likely to notice more and more videos made with artificial intelligence. Many are funky or fantastical. Others are downright bizarre. Some are intentionally misleading.
Rapid advancements in AI have led to a proliferation across the internet of what critics are calling "AI slop": short videos that are rapidly produced, often repetitive, and made using generative AI technology. Marked by low production values, nonsensical narratives, and generic AI-generated elements, these videos are flooding platforms like YouTube, TikTok, and Instagram, raising concerns about content quality, the discoverability of original creators, and the health of the broader online ecosystem. Platforms are grappling with how to handle them.
Take those on the YouTube channel FUNTASTIC YT, which hosts dozens of videos in which an animated kitten has a brief, often nonsensical, misadventure. In one, the kitten sits by a backyard swimming pool full of rainbow goo. "Dad, can I swim in this slime pool?" the kitty asks.
His buff feline dad then appears in the pool neck deep, unable to escape. "No son, I’m stuck. Please help me," he says.
And that’s the end of the vignette.
The video has all the hallmarks of being made using AI: It’s got colorful, simplistic animation and computer voiceovers. Still, though it barely has a hint of a plot, it’s funny, relatably goofy – and it’s been seen over 2 million times.
In others, the kitten rides on a blimp made of pancakes, or a car made of cola, or swims in a giant pool of gummy bears while his usually exasperated dad looks on.
To some critics, videos like these are an annoyance that clutters people's feeds and fills the online landscape with low-effort or meaningless content. The sheer volume of AI-generated material, critics argue, can also make it harder for users to tell credible sources from fabricated ones.
"I don’t think this video exists for any creative, any expressive, any informational or educational reason. It’s purely to be engaged with," said Adam Bumas, with the tech-focused newsletter Garbage Day.
"AI is really superpowering spam," said Jason Koebler, a co-founder of the tech news website 404Media who has been following the rise of AI slop. "The whole point is to hit the algorithm in some way – to basically win the algorithmic lottery, get people to like, comment, share, and hopefully, go very viral."
But Mark Lawrence I Garilao, who created those kitten clips and the channel, sees it differently. Garilao said making AI videos is creative and fun – and a way to use a new technology.
Garilao is a 21-year-old college student who NPR reached by phone in the Philippines, where he studies computer science. He said he produces one or two clips a day, all with a similar theme revolving around the kitten and his father. Each takes one or two hours to produce, using ChatGPT to render the characters, KlingAI to create video, and other software to edit.
"When I think of the story or what the dialogue would be, I would just – I would just sit there and think of a random one, which I find funny. That’s it," he said.
It’s mostly for entertainment, he said. But there’s also good money in it. YouTube owner Google pays channel owners through its AdSense program based on the number of people who watch the videos and see ads.
"The highest I made was in the month of May. I made $9,000 in just one month," Garilao said. For perspective, that adds up to more than a year’s salary in the kind of entry-level job he said he can expect when he graduates.
Other channels churn out videos at a much higher rate, hoping to cash in on views.
Koebler, of 404Media, said the high volume of mass-produced AI slop is crushing other creators – like artists or photographers who work without AI – by diverting attention away from them.
"I think that discoverability on the internet has already started to collapse," he said. "I think it becomes really hard to stand out when the primary arbiter of whether something is seen or not is an engagement algorithm."
In some cases, AI slop can be more than an annoyance. Some of it is straight-up misinformation, like fake clips of celebrities rescuing people from the Texas floods in July.
Other AI videos tap into trends. Garilao says his payday in May was supercharged because he added "Italian brainrot" meme characters to his cat videos. These are popular AI-generated characters, like Ballerina Cappuccina, a dancer with a coffee cup for a head, and Tralalero Tralala, a shark wearing Nike sneakers.
Social media platforms are recognizing the challenge of the onslaught of so much AI-generated content. But they aren’t necessarily banning it outright.
TikTok and Instagram are now labeling certain AI-generated content. Meta says it allows AI-generated content that meets community standards, and lets users personalize their Facebook feed and shape their experience on Instagram to avoid things they don’t want to see. TikTok says it has rules against AI deepfakes. Labeling can help users judge what to believe and share, though it may not be enough on its own to stem the flood of low-quality content.
And YouTube recently tweaked one of its policies: It already barred people from making money off of "repetitive" content, and it has now expanded that rule to the broader category of "inauthentic" content.
YouTube says this was just a minor update, and directed NPR to a video by the company’s in-house "creator liaison" Rene Ritchie for details. "This is to clarify that the policy includes content that’s mass produced or repetitive, which is content viewers often consider spam," Ritchie said in the video.
It’s unclear what this change will mean in practice, though, according to Casey Fiesler, a professor at the University of Colorado who studies tech policy and ethics.
"There’s nothing about this change that explicitly suggests that it’s targeting AI-generated content," she said.
At the same time, YouTube is also encouraging video creators to use AI through features on its app that do things like create fake backgrounds – a sign of the tension between platforms' desire to rein in AI content and their own investments in the technology.
Koebler, of 404Media, says he doesn’t think social media platforms are really taking a hard stand against AI content, in part because they’re all invested in it.
"I think that they think that maybe this stuff is annoying now, but in five years, they imagine a world where most content on the internet is generated by AI, but it’s content that people are going to want to see," he said.
And in the meantime, Garilao says plenty of people do want to see his AI videos. His channel has nearly 600,000 subscribers and his videos have collectively racked up nearly 500 million views.
Comments on his videos accusing him of producing AI slop used to bug him, he recalled: "At first I was like, ‘Oh, man, why do they hate my content?’"
Now, he said, he gives those comments a heart emoji, and thanks people for their engagement. The more, the better.
The rise of AI slop leaves platforms, creators, and viewers facing the same open question: whether the tools that let a college student entertain millions will, in the aggregate, drown out the original work around them.
Note: Google, which owns YouTube, and Meta are financial supporters of NPR.