Why some people are driven to commit acts of online hate and harassment is a complex question with a multitude of possible answers, many of them dissatisfying.
Despite heightened awareness of the issue and efforts by tech companies like Twitter to roll out new anti-harassment tools, online harassment is not going away: According to the Pew Research Center, it has both increased and intensified in the past five years alone.
Many forms of online hate and harassment are no doubt a reflection of the structural inequalities embedded in our culture and communities. Hateful rhetoric and discriminatory attitudes that would likely be denounced in public settings become amplified, and nearly impossible to tamp down, in online settings. This makes many corners of the internet hostile and inhospitable, especially for women; people from racial, ethnic, and religious minorities; members of the LGBTQ+ community; and people with disabilities.
Other times, online harassers will say and do wildly offensive things with the sole intention of getting a rise out of their target, not because they have a particular ideological agenda they’re trying to impart. In either case, and regardless of a harasser’s intentions, the impact of online hate and harassment on an individual target can be devastating.
For some targets of online harassment, however, trying to understand a harasser’s intentions and contextualize the abuse can be a useful way to process all this online toxicity. A burgeoning faction of researchers and journalists have begun to explore when, why, and how people hate and harass on the internet, upending prior notions that online harassment is the sole purview of bullies and sadists. They’ve learned that how and when people harass online can be influenced by a person’s mood, personal circumstances, sense of self-worth, and politics, and even what time of day it is.
There’s no one-size-fits-all diagnosis for what drives online hate, but there are theories, anecdotes, and reportage that we hope will continue to drive bigger and better research. The more data that is collected on this subject, the more tech companies, newsrooms, and internet users can come together to help eradicate hate in our online communities.
PEN America’s goal in providing the following information is to help assuage feelings of confusion, guilt, or shame that targets of online harassment frequently report feeling during episodes of abuse. Not everyone will benefit from the information provided below. If any of these resources appear triggering to you, or if you think it might just be too painful to try to understand an online harasser, then it’s probably best to save this resource for another day.
What Drives People to Commit Acts of Online Harassment?
New studies and an influx of online harassment–related journalism are beginning to dispel the stereotype that online harassers are a minority population of anonymous misanthropes who take pleasure in other people’s pain. Certainly some are (and some have even admitted to this fact on public radio), but others are people we know—active members of our communities who use online harassment as an unfortunate form of stress release or a way to deflect their own feelings of self-loathing. “Trolls are portrayed as aberrational and antithetical to how normal people converse with each other. And that could not be further from the truth,” says Whitney Phillips, author of This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture. “These are mostly normal people who do things that seem fun at the time that have huge implications. You want to say this is the bad guys, but it’s a problem of us.”
A number of external factors have also been found to influence a person’s likelihood of committing online harassment, meaning that while online harassers may be harder to categorize, there are now ways we can begin to predict when and where harmful online behaviors will occur. The more people and institutions can leverage this knowledge, the more we can redirect online discourse into more positive, productive territory.
Below we’ve listed a number of theories and factors that might drive someone to commit online harassment. This is by no means meant to be a list of excuses for abusive online behaviors. Rather, this is a list of possibilities that may resonate with certain individuals who are curious to learn more about why some people harass others online.
Bias and discrimination
This may be the most obvious reason why people commit acts of online hate and harassment: They’re perpetuating biases that already influence how they see the world and their place in it. Communities that are more susceptible to being targeted by discriminatory policies and social attitudes offline are also more susceptible to being targeted for their identity online. In a 2021 study, Pew found that certain groups are more likely than others to experience trait-based harassment: 54 percent of Black people and 47 percent of Hispanic people report being targeted for their race or ethnicity during episodes of online harassment, compared to 17 percent of white people. Women, meanwhile, are almost twice as likely as men to report being targeted for their gender during online harassment. Pew also reports that among adults aged 18 to 29, women are more than twice as likely as men to be the targets of sexual harassment online. The fact is, the internet not only reflects social ills, it amplifies them with impunity. Battling hate in our communities means tackling it online, which requires a concerted effort by tech companies, newsrooms, and internet users alike.
The online disinhibition effect
The “online disinhibition effect” is a term coined to describe the breakdown, in online settings, of the social mores and inhibitions that are normally present during in-person interactions. Because so many online platforms offer anonymity, foster a feeling of invisibility, and generally lack an authority figure, internet users can more easily adopt dissociative behaviors that allow them to evade empathy and misbehave without consequence—and without having to see the direct impact their actions have on another person’s life. By remaining anonymous, people are more likely to act on antisocial or harmful impulses they would otherwise suppress in real life. Because some people tend to see online communities as less “real” than the communities in which they physically take part, the internet becomes a make-believe space where some feel empowered to adopt personas, or characters, that are shed the moment they log off.
Community building through hate
Perversely, participating in online hate can offer a sense of community to people who might not be officially affiliated with—or know how to become affiliated with—an organized hate group. For people who have hateful tendencies toward other individuals or groups, or for people who don’t necessarily hate specific groups but feel angry and alone, online hate speech and harassment offer a proxy form of community, which, sadly, is one of the reasons people join hate groups in the first place.
Insecurity and jealousy
Maybe this one feels obvious, but in a number of instances in which targets of harassment have confronted their abusers in published articles, radio programs, and podcasts, online abusers have revealed feelings of inadequacy, low self-esteem, and jealousy over what they perceive to be their victims’ self-confidence. For example, when podcaster Dylan Marron confronted a homophobic online troll on his podcast Conversations with People Who Hate Me, the young man opened up about his own experiences being bullied at school. When writer Lindy West confronted an online harasser who had impersonated her dead father, the man offered an apology and a reason for his hate: “I think my anger towards you stems from your happiness with your own being,” he told her. “It offended me because it served to highlight my unhappiness with my own self.”
Lulz
In many ways related to the online disinhibition effect, “lulz” is a form of internet slang derived from the online abbreviations “lol” and “lolz” (“laugh out loud”) and is used to denote laughter at another’s expense. Some online harassers will report that they “did it for the lulz,” meaning they took pleasure in causing another person pain or discomfort online. Notorious neo-Nazi and online troll Andrew Auernheimer, The Daily Stormer webmaster known as “weev,” offers insight into this attitude, claiming that he sees offensive internet behavior as a “political act” that undermines polite society. He defines “lulz” as “the joy that you get in your heart from seeing people suffer ironic punishments.” Unfortunately, this sadistic attitude—responsible for the spread of hateful, racist ideas that have a decidedly unironic and harmful impact on society—is cited by other online harassers as a reason behind their abusive online behavior.
Attention
Like those “doing it for the lulz,” some internet harassers report doing it for the attention. As one confessed (and repentant) internet troll writes, “All a troll wants is [for] you to turn the spotlight onto them. They want you to repost their comment to your followers. They want you to write a blog post or status about them. They will use anything and everything to get it.”
External factors that can influence the likelihood of online harassment
Context and precedent in comments sections
A study by researchers from Stanford and Cornell Universities suggests that people are more inclined to post hateful comments after seeing negative comments posted by others. If the precedent set in a comment thread or message board is one of civil and constructive expression of fact and opinion, then the ensuing conversation is more likely to be civil and constructive as well. People are also less likely to post inappropriate comments if the online platform displays visible rules of engagement at the top of the conversation thread.
Time of day
The same study cited above suggests that people are more likely to post offensive comments late at night and at the beginning of the week, which is also when people are most likely to be in negative moods. Writers who find themselves consistently targeted by online harassment might find it useful to disengage from social media late at night and/or to avoid joining conversations that have already descended into negativity and hate.
According to Stanford professor Jure Leskovec, negativity breeds negativity: “Just one person waking up cranky can create a spark and, because of discussion context and voting, these sparks can spiral out into cascades of bad behavior.” Negativity can also carry from one conversation to the next: If an online harasser has engaged in abusive behaviors in one setting, he or she is more likely to carry those behaviors into the next online conversation. Researchers have also found that a person’s mood can be affected by a variety of external factors, including reduced satisfaction with one’s life and exposure to unpleasant conditions (e.g., high temperatures, secondhand smoke), which can in turn affect the likelihood of someone committing acts of online harassment.
The Case for Empathy
While this approach is not for everyone, many targets of online harassment have reported benefitting from attempts to empathize with their online harasser. Sarah Silverman’s troll apologized after she expressed concern for his back pain. Psychologist and neuroscientist Lisa Feldman Barrett, who experienced a particularly painful episode of online harassment, also recommends the empathy approach.