
While reporting online harassment to platforms can be cumbersome—and may sometimes feel futile—it’s an important step and can yield helpful results.


Whether the abuse is directed at you or someone else, reporting it creates a trail of documentation for the tech platform and can result in important consequences for an online abuser, including the removal of harmful content or even the deactivation of an abuser’s online account.

Unfortunately, platforms are not always as responsive or helpful as we would like, and many have a checkered history when it comes to enforcing their own community standards or being transparent about how they do so. Prepare yourself for the possibility that reporting online abuse may not result in a helpful outcome. Familiarize yourself with a platform’s community standards, and consider enlisting members of your support community to share in the labor of monitoring harassment and reporting it with and for you.

What to Know Before Reporting Online Abuse

Reporting content to digital platforms generally requires a user to describe the incident and type of threat that has occurred, whether it’s sexual, exploitative, violent, physically threatening, etc. Some platforms, like Twitter and Facebook, have “flagging” options built directly into the interface (typically in the upper right-hand corner of a post), giving you the option to report content the moment you see it. In other cases, users may be asked to provide a screenshot or link to the harmful content, which is why it’s important to document your abuse.

Many platform reporting mechanisms simply ask you to choose among a predetermined set of options. If you do have an opportunity to add text or context, clarity and precision about your experience of harassment are crucial. Because most of these companies receive thousands of complaints every day, using clear and straightforward language, and noting the particular community standard that’s been violated, can go a long way toward ensuring your complaint is taken seriously.

Platform by Platform

As tech companies’ community standards and reporting guidelines evolve, we’ll do our best to update the information below.

Twitter

Quick Link: Help Center

Community Guidelines: You can access all of Twitter’s policies relevant to online behavior at Twitter’s Rules and Policies page.

Reporting Mechanisms: You can report an account, a tweet, a DM, a list, or a conversation. You should receive an emailed copy of your report. Twitter now offers a range of enforcement options. Users can:

Facebook

Quick Link: How to Report

Community Guidelines: Facebook’s Community Standards describe how the platform responds to threats, bullying, violent content, and exploitation—with one caveat: “Sometimes we will allow content if newsworthy, significant, or important to the public interest—even if it might otherwise violate our standards.” (Twitter has a similar policy.)

Reporting Mechanisms: You can report profiles, newsfeed posts, posts on your profile, photos, videos, DMs, pages, groups, ads, events, and comments. The fastest way for a user to report online abuse is to click on “Report” in the top right-hand corner of a Facebook post. Additional resources include:

Instagram

Quick Link: Help Center

Community Guidelines: Instagram lists comprehensive Community Guidelines and offers advice on dispute engagement and resolution, suggesting that users turn to family and friends for support and advice.

Reporting Mechanisms: You can report abusive messages, posts, comments, and accounts. Instagram states that it will review reported content and remove anything deemed to contain credible threats or hate speech. Writers and journalists who use Instagram for professional purposes should take note of Instagram’s policy that it allows for “stronger conversation” around users who are often featured in the news or are in the public eye due to their profession. Instagram users have the option to:

TikTok

Quick Link: Help Center

Community Guidelines: TikTok lists comprehensive Community Guidelines that define hateful and abusive behavior, hateful ideology, sexual harassment, doxing, hacking, and blackmail.

Reporting Mechanisms: You can report abusive messages, posts, comments, and accounts. TikTok states that it is committed to maintaining a safe, positive, and friendly community. TikTok users have the option to report comments individually or in bulk.

Safety Center: The Safety Center offers TikTok users well-being guides, sexual assault resources, and information on eating disorders and online challenges.

YouTube

Quick Link: Policies and Safety

Community Guidelines: YouTube’s Community Guidelines warn against posting videos containing content that is hateful, sexual, violent, graphic, dangerous, or threatening.

Reporting Mechanisms: You can report a video, playlist, thumbnail, comment, live chat message, or a channel. Screenwriters, spoken-word poets, or other YouTube-friendly writers who find themselves targeted by hateful content or commentary have a few options for dealing with such abuse:

  • Flag content that violates community standards (which can result in a strike against the posted material, giving the original content poster time to review and contest the content removal)
  • Report an abusive user via YouTube’s reporting tool

YouTube also offers detailed information on reporting videos, along with safety tools and resources covering teen and parent safety, privacy settings, and self-injury, on its Policies and Safety page.

WhatsApp

Quick link: Security and Privacy

Community Guidelines: WhatsApp’s Terms of Service prohibit certain activities, such as “submitting content (in the status, profile photos, or messages) that’s illegal, obscene, defamatory, threatening, intimidating, harassing, hateful, racially or ethnically offensive, or instigates or encourages conduct that would be illegal, or otherwise inappropriate.”

Reporting Mechanisms: On WhatsApp, you can “Report” or “Report and Block” an individual account, or “Report and Exit” a group. If you only “Report” an abuser, they can still send you texts, messages, or voice notes. If you “Report and Block,” your chats with the abuser will be deleted, so you might want to take a screenshot before reporting and blocking in order to document your harassment. WhatsApp advises users to “provide as much information as possible” when reporting abusive content.

Blogger

Quick link: Content Policy

Community Guidelines: Blogger is straightforward about being a platform that champions free speech. It claims not to monitor content, mediate disputes, or remove blogs containing insults or negative commentary, but its Content Boundaries state that it will consider the removal of blogs that pose threats to or promote violence against individuals or groups based on their “core characteristics.”

Reporting Mechanisms: Blogger encourages users to directly contact other users posting content that they find offensive, if said user’s contact information is listed on their blog. If such efforts are unsuccessful, users can:

Medium

Quick link: Report Posts and Users

Community Guidelines: Member Content Guidelines and Medium Rules cover a range of behaviors Medium does not allow.

Reporting Mechanisms: Though Medium doesn’t vet or approve posts before they are published, the platform states that it doesn’t tolerate bullying, doxing, or harassment. Medium’s rules, which are tracked on GitHub as they evolve, are meant to promote fair engagement between users. Users can submit a complaint electronically to Medium requesting further review.

WordPress

Quick Link: Report a Site

Community Guidelines: The site’s guidelines for best use can be found here.

Reporting Mechanisms: WordPress allows users to report content with which they don’t agree by submitting an online form describing the content as spam; mature, abusive, or violent content; copyright infringement; or content suggestive of self-harm, as long as the site is hosted by WordPress.com. (Sites “powered” by WordPress.org software on their own hosting, for example, don’t fall under this category.)

Amazon

Quick Link: Online submission form

Community Guidelines: Amazon’s Community Guidelines cover customer reviews and other user-generated content.

Reporting Mechanisms: For writers who self-publish to Amazon or depend on the platform for constructive reviews of their work, Amazon’s “customer review” sections pose a particular challenge. Numerous writers surveyed by PEN America reported encountering hateful online trolls in customer reviews and felt that the platform was unresponsive when it came to removing such abusive commentary. Harassed Amazon users have the option to report online harassment via:

  • The “Report Abuse” tab located in the lower right-hand corner of customer reviews
  • Amazon’s online submission form, where users can report incidents that have violated Amazon’s community guidelines