How content moderation changes our language, silences marginalized voices
Content moderation is changing the way we speak to each other, for better or worse. On TikTok, we say “le dollar bean” instead of “lesbian” because of a perceived ban on the word; we refer to suicide as “unaliving” and sex as “seggs.” But what works on one platform doesn’t always work on another, and this lexicon can sound clunky and bloated when used across social media sites — and even more inchoate when it graduates to offline conversations.
“People are more aware of how algorithms work,” Amanda Brennan, a meme librarian and senior director of trends at the digital marketing agency XX Artists, explained to Mashable. “So people will modify their language to subvert being shadowbanned or hidden from other users because they’re under the impression that using whatever words or whatever language will get them hidden. And then as more people watch content like that, they will start adapting these words into their own language.”
It’s not surprising that language changes with the influence of online content. New forms of communication have that effect. But content moderation, with all its fluidity and platform-specific nuances, has the potential to force our language to evolve at an accelerated rate, often silencing marginalized communities.
How the internet changed language
“We’re seeing it happening very rapidly and very visibly right now, but it’s not a new phenomenon even in the space of the internet,” Kendra Calhoun, a postdoctoral fellow in the department of anthropology at UCLA, told Mashable. Think about how, more than a decade ago, we started talking about dogs. This shift had nothing to do with content moderation but everything to do with meme speak. People started posting about Doge, a photo of a Shiba Inu with words like “wow” and “so scare” printed on it in colorful Comic Sans. In 2012, there was a Tumblr post with a picture of a dog that gave users three options: pet doge, snuggle doge, feed doge. This kind of language mirrored that of its predecessor, the I Can Has Cheezburger cat, in what we know as lolspeak (a form of internet speak that rearranges syntax and often replaces an s with a z).
And that evolved into what we now know as dog speak, documented in a 2017 NPR article that detailed the new lexicon: calling dogs doggos and puppers; saying their feet tippy tap when they are excited; spotting a canine’s tongue and exclaiming “mlem.” It became such a significant part of our everyday vernacular that Merriam-Webster considered adding “doggo” to the dictionary.
The phenomenon is called linguistic accommodation, in which someone changes how they speak by copying the people they’re interacting with in order to fit in. It’s not specific to the internet, and it isn’t new. Wayne Pearson first used the acronym LOL in the 1980s in Calgary, and in 1990, someone first typed LMAO during an online game of Dungeons & Dragons. OMG first popped up online in 1994 on a forum about TV soap operas, according to the Oxford English Dictionary. In more recent decades, we started writing in all lowercase, using emoji, and shortening the way we speak to fit character limits on social media platforms.
David Crystal, linguist and author of the book Internet Linguistics: A Student Guide, described this as a “natural reaction to communicating online, instead of verbally.” When we speak to people face to face, we pick up non-verbal cues that we can’t always track online — the internet created its own means of communication, substituting in-text cues like emojis and ellipses for body language.
Gretchen McCulloch, the author of Because Internet: Understanding the New Rules of Language, told The Atlantic that the changes to our communication are partly due to the fact that we “no longer accept that writing must be lifeless, that it can only convey our tone of voice roughly and imprecisely, or that nuanced writing is the exclusive domain of professionals.”
“We’re creating new rules for typographical tone of voice,” McCulloch told The Atlantic. “Not the kind of rules that are imposed from on high, but the kind of rules that emerge from the collective practice of a couple billion social monkeys — rules that enliven our social interactions.”
As McCulloch explains in her book, writers like James Joyce or E. E. Cummings had already broken the rules of grammar, eschewing capitalization and punctuation, with similar goals in mind. Yet it took the internet to make the practice mainstream.
The content moderation shift
Now, 10 years after the boom of doggo speak, we all think it’s pretty cringe. That’s how language tends to evolve online: what’s mainstream for one generation gets modified by the next. But content moderation adds another key element to this shift: we aren’t actually in charge of how our language changes.
“Content moderation has this very specific top-down control of what people can produce,” Calhoun said. “It’s created these new restrictions and barriers and forced people to find new ways to express things without saying certain words or saying words in particular ways or writing them down instead of saying them out loud. It introduced a new, very intentional linguistic creativity and language change.”
It’s led people to start using substitutes for words that are controlled under content moderation guidelines, or that they suspect are. In truth, the perceived moderation of a word is just as powerful as the actual moderation of a word. And it changes language fast.
Each time a new platform emerges, a fresh set of words and topics is moderated, new audiences and creators rise to popularity, and language evolves. It’s a force that has swept over the internet for decades, but nothing has had an impact quite like TikTok. This is likely because the platform is so unique — the algorithmic For You Page, or FYP, ensures that you’re seeing videos from creators you didn’t purposefully seek out, and its language is a mix of oral and written forms, unlike Twitter, Facebook, Instagram, Snapchat, or any other platform.
On TikTok, people say “SA” instead of “sexual assault” and “spicy eggplant” instead of “vibrator”; sex workers became “accountants”; and users turned to words like “panini” and “panda express” to discuss the pandemic after the platform down-ranked videos mentioning it by name in an effort to combat misinformation.

Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through the algorithmically curated For You Page, meaning your follower count doesn’t directly correlate with the number of eyes on your video. As Taylor Lorenz points out in The Washington Post, that shift has “led average users to tailor their videos primarily toward the algorithm, rather than a following, which means abiding by content moderation rules is more crucial than ever.” She adds that early internet users similarly used “leetspeak” to bypass word filters in chat rooms, image boards, online games, and forums.

This language gets integrated into other platforms and our offline lives, too. Nowadays, it isn’t rare to hear someone say “unalive” on Twitter or on a subway platform.
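As a rough illustration of why these respellings work, below is a minimal sketch of exact-match keyword filtering, assuming a simple hypothetical blocklist. Actual platform moderation systems are proprietary and far more sophisticated, layering machine-learning classifiers, audio and visual signals, and human review on top of any word lists.

```python
# A hypothetical, minimal sketch of blocklist-based moderation.
# Real platform systems are proprietary and far more sophisticated.

BLOCKLIST = {"sex", "suicide", "lesbian"}  # illustrative terms only

def is_downranked(caption: str) -> bool:
    """Flag a caption if any blocklisted word appears as an exact token."""
    tokens = caption.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

# An exact-match filter catches the plain word...
print(is_downranked("talking about suicide prevention"))   # True

# ...but a light respelling sails straight through, which is why
# substitutions like "unalive" and "seggs" spread so quickly.
print(is_downranked("talking about unaliving and seggs"))  # False
```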
The language is often short-lived
Despite its ubiquity, this kind of content moderation jargon is ephemeral because it doesn’t make anything clearer to understand. By contrast, activist movements change language all the time with the specific intent of adding clarity, like saying “people who can become pregnant” instead of “women” when speaking about abortion rights.
The real difference between changes fueled by content moderation and other linguistic changes is that they’re made to avoid scrutiny, not driven by a need for community or a desire to make language more inclusive or clearer.
Content moderation rules, typically set by the platform itself, tell us what we can and cannot say in online spaces. In theory, that’s meant to ensure the safety of content on each platform by restricting words that could lead to harm, like those associated with anorexia or white supremacy. But algorithmic content moderation systems are more pervasive on the modern internet, and they often end up silencing marginalized communities and important discussions. Blunt filtering can demonize any conversation about those topics, including conversations that intend to help people out of difficult situations.
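To make that concrete, the same hypothetical blocklist sketch from above carries no notion of intent or context, so a recovery resource is flagged exactly like a harmful post:

```python
# Continuing the hypothetical blocklist sketch: exact keyword matching
# cannot distinguish harmful content from supportive content.

BLOCKLIST = {"anorexia"}  # illustrative term only

def is_flagged(post: str) -> bool:
    """Flag a post if any blocklisted word appears as a token."""
    return any(word.strip(".,!?") in BLOCKLIST
               for word in post.lower().split())

# A harmful post and a recovery resource receive identical treatment.
print(is_flagged("tips for maintaining anorexia"))           # True
print(is_flagged("resources for recovering from anorexia"))  # True
```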
Often, it can feel like the theft of our language — a theft that leaves marginalized communities struggling to find the words that tether them to safety. Moreover, it’s impossible to talk about internet language without considering the role race plays in how we speak online. So much modern slang is actually Ebonics, or AAVE (African American Vernacular English), and online language runs into the same problem that IRL language so often does. As Sydnee Thompson pointed out in BuzzFeed News, “AAVE is a living language that has evolved over centuries, but the ubiquity of the internet has made many aspects of the dialect more accessible and encouraged others to adopt it for their own use. And it has proven to be extremely popular.”
Who is affected most?
Tailoring language to avoid scrutiny predates the internet, from religions refusing to say the devil’s name to activists using code words to discuss taboo topics, like people in China do when speaking about their government. But the internet introduces something new: you can’t control the audience.
“These digital spaces are very important for especially minority communities and the reality of them having to find new ways to express these ideas is important to talk about,” Calhoun said. Studies show that minority groups are the most affected by content moderation because “these guidelines try to make a one-size-fits-all set of rules but don’t take into account the different starting points and the different realities for different communities.”
Euphemisms are especially common in radicalized or harmful communities. Pro-anorexia eating disorder communities have long adopted variations on moderated words to evade restrictions. One paper from the Georgia Institute of Technology’s School of Interactive Computing found that the complexity of such variants even increased over time. Last year, anti-vaccine groups on Facebook began changing their names to “dance party” or “dinner party,” and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as “swimmers.”
“If you want to just talk about your day-to-day, but your day-to-day includes experiences with anti-Black racism, then you have to think, ‘how can I say this while avoiding this word and not getting flagged or shadowbanned or reported,’ whereas other people can just talk and talk about their day and not have to worry about any of those things,” Calhoun said. “That’s where linguistic practices intersect with issues of power and ideology.”
And that can lead to an increase in activism burnout. People have to explain why their videos are being taken down and brainstorm different ways to approach conversations. But this isn’t a cycle that has to keep repeating itself.
Ángel Díaz, a visiting professor at USC, co-authored the Brennan Center study “Double Standards in Social Media Content Moderation,” which unpacks why we might want to challenge how content moderation currently works.
“These are [social media] companies that very often are driven more than anything by desires to align themselves with power and to develop a more favorable regulatory environment,” Díaz told Mashable. “Regardless of what a law does or doesn’t require, they are going to listen when the government asks them to do certain things and when particular political parties pressure them in a given way.”
Díaz’s report points out that tech platforms will put out rules and guidelines, but those rules don’t typically apply to everyone equally. Think about how often former President Donald Trump broke the rules on Twitter before he was finally deplatformed in the last days of his term, after the January 6 attack on the Capitol. Díaz argues that this is where we should guide our thinking going forward.
“Companies have made choices to moderate the content of people from marginalized communities in a much more haphazard way,” Díaz said. “There is one set of rules for people that are more marginalized and then there’s a more measured approach that is reserved for more powerful institutions and actors. And we wanted to focus the next steps on challenging that relationship between tech companies and power.”
When people’s communication is intrinsically connected to the whim of a social media platform’s moderation tactics, language suffers — and communities can suffer along with it.