
Reporting online harassment to platforms often feels futile or labor-intensive, but it can also yield helpful results.

According to the Pew Research Center, 66 percent of people who have experienced online harassment said their most recent incident occurred on a social networking site. Whether the abuse is directed at you or someone else, reporting it creates a trail of documentation for the tech platform and can result in important consequences for an online abuser, including the removal of harmful content or even the deactivation of the abuser’s account.

Unfortunately, platforms are not always responsive or helpful in the ways we would like, and many have a checkered history of being transparent about and/or enforcing their own community standards. In any case, prepare yourself for the possibility that reporting online abuse will not result in a helpful outcome. Familiarize yourself with a platform’s community standards, and consider enlisting members of your support community to share in the labor of monitoring and reporting harassment.

What to Know Before Reporting Online Abuse

Reporting content to digital platforms generally requires a user to describe the incident and the type of threat that has occurred, whether it’s sexual, exploitative, violent, physically threatening, etc. Some platforms, like Twitter and Facebook, have “flagging” options built directly into the interface (typically in the upper right-hand corner of a post), giving you the option to report content the moment you see it. In other cases, users may be asked to provide a screenshot or link to the harmful content, which is why documenting the abuse is so important.

As a first step, many platforms will encourage users to unfollow, block, or mute the users they find offensive. Some platforms, like Instagram, even encourage communication between users in the hopes that conflict can be resolved through peaceable dialogue without the platform’s involvement. (Given the inflammatory nature of anonymous trolling, this is not always the best option.)

Clarity and precision about your experience of harassment are crucial when reporting abuse. Because most of these companies receive thousands of complaints every day, using clear and straightforward language, and noting the particular community standard that’s been violated, can go a long way toward ensuring your complaint is taken seriously.

Platform-Specific Information for Reporting Online Harassment

As tech companies’ community standards and reporting guidelines evolve, we’ll do our best to update the information below.

Twitter

Quick Link: Help Center

Community Guidelines: You can access all of Twitter’s policies relevant to online behavior at Twitter’s Rules and Policies page.

Reporting Mechanisms: Twitter offers a range of enforcement options that allow the company to take action against specific tweets, direct messages (DMs), and accounts. Users can report individual tweets, DMs, and entire accounts directly from the platform or through the Help Center.

Facebook

Quick Link: How to Report

Community Guidelines: Facebook’s Community Standards describe how the platform responds to threats, bullying, violent content, and exploitation—with one caveat: “Sometimes we will allow content if newsworthy, significant, or important to the public interest—even if it might otherwise violate our standards.” (Twitter has a similar policy.)

Reporting Mechanisms: The fastest way for a user to report online abuse is to click on “Report” in the top right-hand corner of a Facebook post. Additional reporting options are collected on Facebook’s How to Report page.

Blogger

Quick link: Content Policy

Community Guidelines: Blogger is straightforward about being a platform that champions free speech. It claims not to monitor content, mediate disputes, or remove blogs containing insults or negative commentary, but its Content Boundaries state that it will consider removing blogs that threaten or promote violence against individuals or groups based on their “core characteristics.”

Reporting Mechanisms: Blogger encourages users to contact the author of offensive content directly, if that author’s contact information is listed on their blog. If such efforts are unsuccessful, users can report the content to Blogger through its Content Policy page.

Medium

Quick link: Report Posts and Users

Community Guidelines: Member Content Guidelines and Medium Rules cover a range of behaviors Medium does not allow.

Reporting Mechanisms: Though Medium doesn’t vet or approve posts before they are published, the platform states that it doesn’t tolerate bullying, doxing, or harassment. Medium’s rules, which are tracked on GitHub as they evolve, are meant to promote fair engagement between users. Users can report posts and accounts through the options described on Medium’s Report Posts and Users page.

WordPress

Quick Link: Support Center

Community Guidelines: The site’s guidelines for best use can be found here.

Reporting Mechanisms: WordPress allows users to report objectionable content by submitting an online form describing it as spam; mature, abusive, or violent content; copyright infringement; or content suggestive of self-harm, as long as the site in question is hosted by WordPress. (Sites merely “powered” by WordPress.org, for example, don’t fall under this category.)

Amazon

Quick Link: Online submission form

Community Guidelines: Amazon’s Community Guidelines cover customer reviews and other user-generated content.

Reporting Mechanisms: For writers who self-publish to Amazon or depend on the platform for constructive reviews of their work, Amazon’s “customer review” sections pose a particular challenge. Numerous writers surveyed by PEN America reported encountering hateful online trolls in customer reviews and felt that the platform was unresponsive when it came to removing such abusive commentary. Harassed Amazon users have the option to report online harassment via:

  • The “Report Abuse” tab located in the lower right-hand corner of customer reviews
  • Amazon’s online submission form, where users can report incidents that have violated Amazon’s community guidelines

Instagram

Quick Link: Help Center

Community Guidelines: Instagram lists comprehensive Community Guidelines and offers advice on dispute engagement and resolution, suggesting that users turn to family and friends for support and advice.

Reporting Mechanisms: Instagram states that it will review reported content and remove anything deemed to contain credible threats or hate speech. Writers and journalists who use Instagram for professional purposes should note Instagram’s policy of allowing “stronger conversation” around users who are often featured in the news or are in the public eye due to their profession. Instagram users can report abusive content or accounts directly within the app or through the Help Center.

YouTube

Quick Link: Policies and Safety

Community Guidelines: YouTube’s Community Guidelines warn against posting videos containing content that is hateful, sexual, violent, graphic, dangerous, or threatening.

Reporting Mechanisms: Screenwriters, spoken-word poets, or other YouTube-friendly writers who find themselves targeted by hateful content or commentary have a few options for dealing with such abuse:

  • Flag content that violates community standards (which can result in a strike against the posted material, giving the original poster time to review and contest the removal)
  • Report abusive content via YouTube’s reporting tool

YouTube also offers detailed information on reporting videos as well as safety tools and resources for teens and parents, privacy settings, and self-injury on its Policies and Safety page.