Hate Speech: How to Talk to Your Child About Acts of Violence
With protests in support of black lives sweeping the country comes wall-to-wall coverage of civil unrest across news networks and social media alike. Dinner tables, Zoom calls, and comment sections have all become hotbeds of heated conversation. And while some discomfort is expected as we talk through difficult issues, there is one form of communication whose only purpose is to harm: hate speech.
Most adults recognize hate speech immediately when they see it. Kids, however, might find it more difficult to identify. So much of their lives is spent in a digital ecosystem that parents can’t always access. Memes containing hate speech often look like jokes, and while a child might understand them to be in poor taste, they may not grasp how they can lead to acts of violence. As adults, we have an important role to play in their journey toward becoming conscious digital citizens.
This blog post focuses on what hate speech is, how it works, and what parents can do to help their kids fight against it. Use the links below to navigate to each section.
- What is Hate Speech
- How Hate Speech Leads to Acts of Violence
- Where Kids Encounter Hate Speech Online
- How to Talk to Your Kid (And Steps They Can Take)
- Lessons From History
- Additional Resources
What is Hate Speech?
There’s no set definition for hate speech, but it is generally recognized as a form of expression that is derogatory or aggressive towards people of a particular group. This can include race, religion, ethnicity, gender, gender identity, sexual orientation, nation of origin, disability, and more. Hate speech can also take many forms, ranging from Facebook comments to wall graffiti to verbal abuse on the street. Any word, phrase, symbol, or collection of ideas can be hate speech — as long as its purpose is to express hatred against a shared identity.
Hate speech is designed to make people feel less than human. If someone believes a group is so fundamentally different that they’re not entitled to basic human decency, that person will feel justified in subjecting them to a range of abuses — including harassment, discrimination, violence, and, taken to its logical conclusion, even genocide.
Texting a racist meme to a friend is not the same as sending a threat of violence to an immigrant through Instagram direct messages. But even less serious forms of hate speech help to support the general goal of making it tolerable — or even popular — to demonize a group. It might look mild sometimes, but like a child testing the limits of a new babysitter, it’s always looking to push the boundaries of what thoughts and behaviors people are willing to accept. If hate speech goes unchecked, it can escalate into violence at an alarming speed and scale.
How Hate Speech Leads to Acts of Violence
Kids know when they’re cyberbullying someone, and they understand the difference between accidentally hurting someone’s feelings and attacking them on purpose. But they may not yet realize that mean-spirited comments aren’t the end of the line — especially on the internet.
A 2019 study showed that an increase in online hate speech leads to an increase in crimes against minorities. “Previous research has already established that major events can act as triggers for hate acts. But our analysis confirms this association is present even in the absence of such events,” said Professor Matthew Williams, Director of HateLab. “The research shows that online hate victimization is part of a wider process of harm that can begin on social media and then migrate to the physical world.”
Another thing that sets hate speech apart is that there are organized efforts to spread it. In 2019, the Southern Poverty Law Center (SPLC) tracked 940 active hate groups in the U.S. These groups — and the “self-radicalized” people they inspire — rely on specific techniques for getting people to adopt their messages. Hate speech is laundered into the mainstream by being repackaged into terms that others find less offensive, and memes are just one example. Many hateful memes kids encounter online are actually images and ideas crafted by extremists — like Pepe the Frog and (((echos))). These memes act like malware carrying a viral form of bigotry.
Hate speech is also used to indoctrinate. Racism, xenophobia, homophobia, misogyny, and other forms of intolerance are all learned perspectives. People don’t simply wake up one morning with hatred in their hearts. Instead, extremist points of view exert a low gravitational pull over a long period of time, drawing people closer and closer to the source. Eventually, the pull becomes too much to escape, and that is when mere exposure crosses into indoctrination. Like the frog in the pot being slowly brought to a boil, people don’t realize what’s happening until it’s too late.
Where Kids Encounter Hate Speech Online
Unfortunately, hate speech can exist almost anywhere. Even though you may be familiar with the kinds of harmful messages that are sent on platforms like Reddit and 4chan, you may not expect that your kid could encounter hate speech when they’re browsing YouTube to find a compilation of puppy videos or joining their buddies for a friendly Fortnite session. But each of these platforms — and many more as well — can expose your kid to threatening conversations.
According to Wired Magazine, Facebook “removed 9.6 million pieces of content it deemed hate speech in the first quarter of 2020.” Harassment and hate speech have long been issues on Twitch, a live-streaming platform for gamers. Even Instagram has been home to extremist content. In short, wherever words and pictures are posted or shared — whether through comments, in groups, or on pages — kids can encounter hate speech.
Social media platforms are constantly filling with new content, so it can be easy for kids to forget that a hateful tweet or Instagram comment doesn’t simply disappear into the feed. The Pew Research Center reports that a quarter of Black Americans have been the target of online harassment due to their race or ethnicity. According to census data, that’s around 11 million people — and that is only one demographic. A kid might feel as if a post here and there doesn’t make much of a difference, but in a misguided attempt at humor, they can hurt far more people than they might think.
Fortunately, there are some steps you can take to help keep your kid safe on Instagram, TikTok, and more. Our Barkomatic guide also lets you input all of the social media apps and platforms your kids use and gives you customized instructions for setting up parental controls. As always, ongoing conversations with your child about social media are also extremely important.
One of YouTube’s goals is to entice users to click on more videos, which it does by suggesting recommendations in a column to the right of every video screen. The recommendation algorithm can populate this spot with increasingly extreme content, so intrigued viewers will continue to explore new videos. What started as an innocent video search might eventually lead your child to an evil that they didn’t know existed.
Of course, some channels on YouTube — such as the “Infowars” accounts that are routinely banned from the platform — regularly devolve into hateful ideas and language. Others aren’t as explicit in perpetuating this kind of content. But creators aren’t the only ones who express derogatory thoughts on YouTube. Viewers also post violent language in the comments section. In fact, in April through June of 2019 alone, YouTube removed 500 million comments due to hate speech.
You can help keep your child from encountering harmful videos by turning on YouTube’s Restricted Mode, which allows you to filter out explicit search results. If you want to make sure your kid doesn’t turn off Restricted Mode, Bark’s screen time feature can help with that, too. If your child is 12 or younger, you can also create a YouTube Kids account for them, instead of letting them watch videos directly on the broader platform. While this isn’t a perfect solution, it can help prevent your kid from witnessing or experiencing hate speech on YouTube.
Playing a video game to explore another world can be fun, but online gaming is a notorious hotbed of racist, sexist, ableist, and homophobic language. Steam, an online storefront for PC games, has a history of users promoting white supremacist language, symbols, and propaganda. Roblox, a popular platform for building games for other people to play, has also encountered an extremist content problem, as some gamers filled their profiles with neo-Nazi imagery and language.
While it’s critical to be aware of some of the content your kid might encounter, that doesn’t mean you have to choose between hiding the controllers forever and giving them free rein and hoping for the best. We’ve compiled a list of your parental control options on gaming platforms, games, and chatrooms, to help you keep your kid safe as they play.
If you have any familiarity with Reddit, you probably know that the platform has a history of issues with racism, violent language, and other forms of hate speech. Although several subreddits (topic-based message boards) have been banned after users incited violence and posted derogatory language, hate speech is still quite common.
Hopefully, your child hasn’t discovered 4chan and 8chan, which can be described as the extreme and even more extreme versions of Reddit, respectively. Several mass shootings have even been announced in advance on 8chan by way of the shooters’ manifestos. These two forums in particular are widely understood to be breeding grounds for extremist ideologies.
Remind your kid that even if they feel like they’ve become best friends with another user on an internet forum, unless they know that person in real life, it’s possible they aren’t who they say they are. Let your child know they should never feel obligated to continue chatting with someone who makes them — or someone else — feel unsafe by attacking them with hate speech.
How to Talk to Your Kid
Whether your child has experienced hate speech, has participated in harmful conversations themselves, or isn’t yet aware that hate speech exists, it’s important to give them the opportunity to express their emotions.
Start by asking your kid if they can think of a time when someone said something that hurt their feelings, and ask them to share about how that interaction made them feel. This can help you guide the conversation towards a discussion about what hate speech is and the very real impact it can leave on a person.
They might intellectually know that the Holocaust was evil, for example, but may not understand yet that telling an anti-Semitic joke is spreading those same evil ideas. Explaining that the lives we’re living now will also become history can help them understand the connection between the protests they’re seeing today and the broader social conditions that led people to organize them in the first place.
Our kids have an opportunity to make different choices than the people they’ve learned about in school, who have perpetuated racism, participated in genocide, or otherwise enacted violence against a targeted group of people.
If your child is younger, you might mention that a homophobic term might seem like no big deal, but it can actually inflict a great deal of harm on someone else. If they’re an older teen and emotionally ready for a more in-depth conversation, you might talk about how engaging in hate speech is often a precursor to committing acts of violence against other people — even if they don’t consider themselves capable of doing so.
If your child still isn’t convinced that the words they use and the jokes they tell really matter, remind them that the messages they share online can come back to haunt them in the future. Leaving a racist comment on a YouTube video takes only a few seconds to type out, but can affect their ability to get into college, their job prospects, and the friendships they’re able to make for years to come.
Concrete Steps Your Kid Can Take
- Report it. Hate speech violates the Terms of Service of most websites, and most social media platforms have official reporting tools. Walk your child through when and how to report hate speech.
- Block it. Your kid should not feel obligated to continue to follow anyone who uses hate speech.
- Don’t share it. The more people who encounter hate speech, the more people are affected. Sharing can also make hate speech seem that much more common, which means people may tune it out — which is exactly what extremists want.
- Call it out. Whenever it’s safe to do so, calling out hate speech is an important part of digital citizenship. Hate speech signals to the community that this is a space where the targeted group is not welcome. If no one challenges that message, that idea is left to stand. It’s especially important for people who are not a part of the targeted group to confront hate speech directly whenever they see it online.
Lessons From History
It’s easy to assume that if there’s no violence, there’s no problem. “It’s just a harmless joke.” But what makes hate speech so sinister is that it doesn’t always lead directly to violence — it leads to indifference first. When people learn not to take hate speech seriously, that allows others to act on their hatred without fear of being punished. This is why it’s so important to help kids connect history with what they see in their own lives.
The history taught in textbooks or referenced on the news is not as far into the past as it might seem. Many of our grandparents lived in the time before the Civil Rights Act of 1964. Survivors of the Holocaust are still alive today. Both realities depended on a broad cultural acceptance of injustice in order to exist. So when people spread hate speech online, they’re circulating the lifeblood of some of the worst atrocities in human history.
Our own kids may never meet a Holocaust survivor. But they’re growing up in a decade that has seen countless acts of violence. As parents, teachers, coaches, counselors, and mentors, we have to help the kids in our lives understand the difference between a joke and a very real danger.
Bark is a comprehensive online safety solution that empowers families to monitor content, manage screen time, and filter websites to help protect their kids online. Our mission is to give parents and guardians the tools they need to raise kids in the digital age.