The Bark Blog

How to Talk About Deepfakes with Your Child

Haley Zapal  |  February 08, 2024


Deepfakes — AI-generated images and videos that convincingly mimic real people — are a growing concern around the world. Some are getting so good that they can fool a casual viewer who’s just scrolling through social media. Even well-informed adults can struggle with this deceptive technology, so it’s important for families to start explaining to their children what deepfakes are.

Just as parents have learned to teach digital citizenship concepts to kids (like how to spot a trusted website or news source), teaching them how to spot potentially fake videos will become simply another part of growing up in the digital age. In this blog post, we’ll discuss the dangers that deepfakes pose and provide some conversation starters so you can begin talking about this important topic.

Recent News Brings Attention to Deepfakes

In January, a viral AI filter on TikTok allowed users to turn themselves into Taylor Swift, similar to how you can use a puppy dog face or angel filter. This digitally created face let users move their heads, talk, and make facial expressions as the popular pop singer.

Most TikTok users just had a laugh, but on 4chan, a challenge started to see who could use a Taylor Swift deepfake to make porn. These NSFW videos were shared widely, and this brought the issue to the attention of mainstream news outlets. Taylor Swift’s celebrity and the problem of “non-consensual intimate images” introduced the concept to a wider audience, but this violation has been happening to people all over the world for years. Law enforcement and legal action have been slow to catch up to the technology, though.

What Exactly Are Deepfakes?

So, what do people mean when they use the term “deepfakes”? This newish technology uses artificial intelligence to create fake, computer-generated images and videos of real people. It goes far beyond airbrushing, photoshopping, or traditional video editing. Because deepfakes are built from footage of real subjects, they can simulate genuine human movements and facial expressions. The technology is getting better every day, and it’s already at the point where it can be hard to tell the difference between an authentic video and a deepfake.

If you’ve never seen a deepfake (though you probably have unwittingly!), check out this popular TikTok deepfake account of Tom Cruise. It’s not perfect, but you’ll be surprised by how realistic it looks. The creator uses deepfake technology to recreate Tom’s face, and he does a vocal impression to imitate his voice and speaking cadence.

The Dangers of Deepfakes

Deepfakes aren’t just fun TikTok filters — they can have devastating real-life consequences for adults, children, and even society at large. 

Fraud

Just this month, a company suffered serious financial fraud after a deepfake heist enabled a scammer to walk away with millions of dollars. The scammer posed as the company’s chief financial officer and directed employees to transfer company funds to a bank account — and the workers followed the fake instructions. This is a large-scale version of fraud, of course, but it shows how trust and relationships can be used against people when deepfakes are involved.

An example of a smaller-scale — but still deeply concerning — version of deepfake fraud is fake voice scams. Scammers will use a small snippet of a victim's voice (usually easily found on social media) to create an AI-generated audio of them saying they've been "kidnapped" or that they're "stranded somewhere" and need money to get back home. Then, scammers call the victim's loved ones with this script and convince them to send money to save the victim.

Reputation destruction

The Taylor Swift fiasco we mentioned earlier is a high-profile example of the reputation destruction deepfake technology makes possible. A recent study found that 90–95% of deepfake videos are nonconsensual pornographic videos and, of those videos, 90% target women — many of them underage. So it’s not just celebrities: average teens are having their likenesses used for fake nudes — sometimes by their own classmates — and suffering the severe backlash that comes when these images or videos are shared online.

And the harm isn’t limited to fake explicit content, either — a deepfake could simply show a person saying something that goes against their values or their community, damaging how they’re perceived by others.

Opinion manipulation 

This is one of the scariest dangers deepfakes pose, though it may not affect kids as much because it often belongs to the world of politics and adults. Imagine a deepfake of the president or a prominent politician saying something false, misleading, or against their values. These types of videos, when widely shared, could affect how people vote, trust elected leaders, and more. 

Conversation Starters for Families

We’ve gathered a few ways to talk about deepfakes with your child so they can learn about them and understand some of the dangers they pose. For these questions, we recommend talking to kids 12 and above, as the issue can be a little complex. 

Ask, “Can you tell me some ways to spot if a video is fake?”

Kids today are generally more tech-savvy than we are, so odds are your child will be a whiz at figuring out when a video is fake — even if they can’t articulate exactly why. That’s where you come in, though. Here are just a few of the things you can point out to explain why a video may not be real:

  • Blurred or fuzzy borders of the person
  • Strange or misplaced shadows
  • Uneven or inconsistent skin tones
  • Abnormal blinking patterns
  • Robotic or non-human vocal patterns

This website has a super useful checklist and practice videos to help you better understand all of the telltale signs of deepfakes.

Ask, “What do you think about that Taylor Swift filter that went viral on TikTok?”

This is an age-appropriate question for discussing the issue (not the porn, but the face-only filter from TikTok). No matter what your child answers, follow up with questions like “Do you think there are any potential issues with someone pretending to be someone else online?” and “What if someone believed it was actually Taylor Swift?” In true kid fashion, some may try to minimize the negative effects of a filter. But follow the line of questioning to see where it goes.

Ask, “Why do you think someone would share a fake video of someone else?”

There are positive answers to this question, as sometimes people just want to have fun with filters or be creative with technology. It’s important to talk about the fact that deepfakes aren’t all bad. But remember to bring up the negative reasons, too. Talk about the potential for fraud, the spreading of fake information, just plain bad intentions, and the desire to hurt someone’s reputation.

Ask, “Would you watch this video with me and help me point out why it’s not a real video of Tom Cruise?”

Using a harmless fake video of this popular action star (even if kids may think he’s uncool) allows you to sit down with your kid and examine, discuss, and point out the flaws in a deepfake. Even though deepfakes will eventually be almost impossible to spot, right now there are still glaring tells, like blurry edges, strange colorings, and a general sense that something’s off. Teach your child to trust their gut when it comes to videos like these — if they think something’s not right, it’s probably because it isn’t.

How Bark Can Help

Bark’s monitoring technology — which comes with our downloadable app for children’s iOS and Android devices as well as our kid-focused Bark Phone — scans online activities like texts and social media for signs of potential dangers and sends parents important alerts.

Some of the potential dangers Bark can detect include sexual images and videos, bullying, signs of suicidal ideation, and more. Whether your kid is exposed to real or fake images online, Bark can help you keep them safe online and in real life.

Bark helps families manage and protect their children’s digital lives.
