
What Is AI Slop and What Do Parents Need to Know?

The Bark Team  |  December 16, 2025

If it feels like the internet has gotten noisier lately, you’re not imagining it. A growing share of what you see online is created entirely by AI. While some of it can be entertaining or educational, there’s also a rising wave of AI slop — low-quality, mass-produced posts designed to grab attention quickly. According to a recent Capgemini report, up to 71% of images shared on social media are AI-generated, a shift that’s happening faster than most families realize.

The good news? With a little education and help from Bark, you can stay on top of the changing tech landscape and safeguard your family. Below, we explain everything from what AI slop is and how it’s created, to what dangers it presents and how it’s affecting kids.

What Is AI Slop?

Coined in 2024, “AI slop” refers to the recent flood of low-quality, AI-generated content that’s everywhere. It’s often created quickly and at scale to drive clicks and engagement. AI slop can turn up anywhere, from your Google search to your teen’s For You page on TikTok. Because AI content isn’t always required to be labeled, it can be difficult to spot. But no matter where you find it, AI slop typically shares a few common traits:

  • Clickbait-style hooks: Sensational headlines or captions meant to spark shock or curiosity, not understanding.
  • Shallow or recycled information: Confident-sounding content that lacks depth, context, or original insight.
  • Small but telling mistakes: Inaccurate details, contradictions, or claims that don’t quite add up.
  • Repetitive language or visuals: The same phrases, formats, or images reused across multiple posts or accounts.
  • Uncanny images or videos: Visuals that look polished but off, especially with odd hands, faces, or movements.
  • No clear source: Little transparency about who made the content or where the information came from.

How Is AI Slop Created?

One reason AI slop has become so widespread is the growing availability of advanced text-to-video tools like Sora, which can generate realistic-looking videos from simple written prompts. Since its app launch in September 2025, Sora has surpassed 4 million downloads, with millions of videos already created. Image generators like Midjourney and Adobe Firefly can produce polished images just as quickly, with over 1 billion images created since 2023. When paired with social platforms that reward frequent posting and high engagement, these tools make it easy to churn out large volumes of attention-grabbing content, often with little basis in truth, context, or originality.

Beyond any single platform, AI slop is also fueled by a broader range of generative-AI tools, including text-to-image and AI writing programs. Together, they make it possible to create posts, videos, and articles in bulk, contributing to the flood of repetitive, low-quality content now filling search results and social feeds.

AI-Generated Content Dangers Parents Should Know About

AI-generated content can look polished and convincing, which is why tech leaders warn it poses unique challenges for kids and teens. As videos and text become more realistic and more common, young people may struggle to trust what they see online—and once that confusion sets in, a number of dangers can follow.

Deepfakes and manipulated media

When kids can’t reliably tell what’s real, they’re more vulnerable to deepfakes, or AI-generated content that shows people saying or doing things that didn’t happen. Researchers at the University of Nevada, Reno, note that these tools are becoming easier to use and harder to detect, increasing the chance teens will believe or share fabricated content.

Bullying and peer-targeted abuse

The blurring of reality has real social consequences. The National Education Association reports an uptick in cases where AI-generated images and videos are used to embarrass, harass, or target young people, including the creation of fake explicit images that can spread quickly and be difficult to remove.

Exposure to harmful or inappropriate content

The American Psychological Association warns that generative AI systems can produce content related to self-harm, eating disorders, or other sensitive topics, sometimes without proper context or safeguards. For teens who are already vulnerable, this exposure can be especially harmful.

False information shaping beliefs and decisions

When AI-generated misinformation looks polished and authoritative, it can influence how teens think about health, relationships, and the world around them. Studies show repeated exposure to false information—even when unintentional—can shape beliefs and decision-making over time.

How AI Slop Affects Kids and Teens

When kids are repeatedly exposed to content they can’t fully trust, it can quietly affect how they process information and emotions. Experts note that constant uncertainty about what’s authentic can lead to skepticism, anxiety, or emotional detachment, especially for younger users who are still forming their sense of reality and trust. Over time, this can make kids less confident in their own judgment or more likely to disengage altogether, assuming everything online might be exaggerated or fake.

The sheer volume of AI slop can also be overwhelming. Because AI makes it easy to produce endless videos, images, and posts, kids may face a nonstop stream of attention-grabbing content with little substance. 

Early studies warn that this kind of overload can contribute to shortened attention spans and difficulty focusing on more meaningful content. When kids interact directly with AI tools — by generating images, videos, or using features that insert their likeness into content — there are additional concerns around privacy and reliance on AI instead of developing creativity and critical thinking.

How to Talk to Your Kids About AI Slop

You don’t need to be a tech expert to talk to your kids about AI slop. Simple, ongoing conversations can help them understand what they’re seeing online and build healthy habits around AI-generated content.

  • Explain AI slop in simple terms: Let kids know that some pictures and videos online are made by computers, not real people, and they aren’t always accurate or trustworthy.
  • Build media-literacy habits: Encourage them to pause and ask questions like: Does this make sense? Does it look real? Who made this and why? You can also encourage them to use this checklist to spot deepfakes.
  • Set clear boundaries: Talk about when and how AI tools are okay to use, including what’s age-appropriate and whether they should use them with an adult.
  • Emphasize empathy and respect: Remind kids that even when content is AI-generated, real people can still be affected—and using someone’s image or voice without consent isn’t okay.
  • Encourage creating, not just consuming: If kids want to use AI, help them see it as a tool to support creativity and learning, not a shortcut that replaces effort or original thinking.

How Bark Can Help

As AI-generated content becomes harder to spot and more common in kids’ feeds, parents don’t have to navigate it alone. Bark helps families stay informed by monitoring for potential online risks, surfacing concerning content, and giving parents conversation starters that make tough topics easier to discuss. Explore Bark’s tools to find what works best for your family. 

Bark helps families manage and protect their children’s digital lives.
