For years, Andrew Tate marked a small, niche territory for himself on the internet—common ground for subgroups like men’s rights activists and incels—where misogyny runs rampant. But in July 2022, that small territory expanded rapidly, and his videos were suddenly everywhere.
He was being Googled more often than former President Donald Trump and Kim Kardashian, and, even more impressively, he managed to get the attention of nearly every social media company: TikTok, Instagram, Facebook, and YouTube all took down his accounts that summer on the grounds that his content repeatedly violated their policies on hateful speech and misogyny. And yet, even with Tate awaiting trial and booted off most major social media platforms, young boys and men continued to repost and spread his misogynistic agenda.
So why weren’t the platforms’ actions enough to reduce his influence? The problem is that when platforms moderate content like Tate’s purely in relation to their policies on hate and misogyny, they address it reactively. But reactive approaches alone are not enough. Platforms need to recognize that actors like Tate spread their hateful messages effectively by using manipulative tactics popularized by purveyors of disinformation: artificially inflating their popularity and then capitalizing on it. To counteract the influence and reach of actors like Tate, it’s equally important to borrow from the disinformation defense playbook and equip users to recognize that they’re being manipulated before they have been successfully indoctrinated.
Tate grew and maintained his social media presence by cultivating an army of copycat accounts and content diffusers, including students at Hustler’s University, a private online “academy” run by Tate and focused on how to make money. Students were paid to refer new members and promote Tate’s interview bites to susceptible audiences via TikToks, Instagram reels, tweets, and Facebook reposts.
Tate was constantly trying to exploit algorithms across multiple platforms to expand his reach. But TikTok proved especially fertile ground because the platform’s recommendation algorithm is particularly effective at linking previously isolated groups and surfacing comparable content. In other words, TikTok gave the paid “students” of Hustler’s University access to new and larger populations previously out of reach of Tate’s particular brand of misogyny.
In PEN America’s work combatting online abuse, we have seen again and again how misogynistic rhetoric, like Tate’s, is deployed through a host of manipulative tactics to intimidate and silence its targets—especially women and LGBTQ people.
By September 2022, within just a few months of Tate’s newfound popularity, teachers reported seeing the effects among their students. Multiple teachers across the country (and globally) expressed their alarm. To give just one example, a high school teacher posted on the r/Teachers subreddit about her class of freshman boys: “Not only are they making these misogynistic claims in class but are literally refusing to do assignments if it’s sourced from a woman,” she wrote. “I had three boys refuse to read an article by a female author because ‘women should only be housewives.’”
As a free expression organization, PEN America defends Tate’s right to express his ideas, even those we may find abhorrent. The First Amendment protects Tate’s right to express his ugly ideology, but it also protects the right of private companies to set and enforce policies that govern behavior on their platforms. If platforms really want to effectively address the harm caused by Tate and other actors like him, they need to recognize the disinformation tactics that actors like Tate rely on to spread hate and harassment and then leverage strategies proven to counteract disinformation.
It is widely accepted that correcting disinformation after it has spread is not very effective. A 2022 study in Science Advances proceeded from the understanding that corrective fact-checking efforts are unlikely to reach audiences as large as, or similar to, those initially targeted, and that after consuming disinformation, those audiences are also unlikely to believe the fact checks that follow. Rather than using debunking as a corrective measure for disinformation, the researchers attempted to “pre-bunk,” or teach audiences to recognize and question manipulative language before they internalize it. Study participants who watched pre-bunking videos as advertisements before encountering misinformation were 5 to 10 percent more likely to detect it.
If platforms adopted pre-bunking methods, they would be more effective at countering disinformation tactics that operate through emotional language, scapegoating, fear-mongering, diversion, and logical fallacies such as false dichotomies and ad hominem attacks. Pre-bunking can empower audiences to identify when they’re targets of manipulation. Pre-bunking misogynistic rhetoric could be as simple as videos that explain what consent is, how to be a good bystander online, why misogyny spreads, and how dissemination tactics work.
Tate isn’t the first influencer to spread misogynistic vitriol, and he won’t be the last. Bite-sized videos, of the kind popularized by TikTok, will eventually be surpassed by a new content trend, one that is disseminated even more rapidly and widely. Reactive efforts, no matter how quickly they’re adapted to the latest trend, are undermined by evolving policy evasion strategies and increasingly sophisticated dissemination tactics. Rather than treating each new tactic for spreading misogyny and hate as a novel phenomenon, we need to be ready for its endless variations by preparing audiences for their arrival.
Aashna Agarwal is a program assistant on PEN America’s Digital Safety team. To learn more about media literacy and how you can support your community, you can access PEN America’s Media Literacy Toolkit and Field Manual against Online Harassment.
The views expressed in this article are the writer’s own.