
While reporting online harassment to platforms can be cumbersome—and may sometimes feel futile—it’s an important step and can yield helpful results.


Whether directed at you or someone else, reporting abuse creates a trail of documentation for the tech platform and can result in important consequences for an online abuser, including the removal of harmful content or even the deactivation of an abuser’s online account.

Unfortunately, platforms are not always as responsive or helpful as we would like, and many have a checkered history when it comes to transparency about, and enforcement of, their own community standards. Prepare yourself for the possibility that reporting online abuse may not yield a helpful outcome. Familiarize yourself with a platform’s community standards, and consider enlisting members of your support community to share the labor of monitoring harassment and reporting it with and for you.

What to Know Before Reporting Online Abuse

Reporting content to digital platforms generally requires a user to describe the incident and type of threat that has occurred, whether it’s sexual, exploitative, violent, physically threatening, etc. Some platforms, like Twitter and Facebook, have “flagging” options built directly into the interface (typically in the upper right corner of a post), giving you the option to report content the moment you see it. In other cases, users may be asked to provide a screenshot or link to the harmful content, which is why it’s important to document your abuse.

Many platform reporting mechanisms simply ask you to choose from a given set of options. When you do have the opportunity to add text or context, though, details about your experience of harassment are crucial. Because most of these companies receive thousands of complaints every day, using clear language and citing the particular community standard that’s been violated can go a long way toward ensuring your complaint is taken seriously.

Platform by Platform

As tech companies’ community standards and reporting guidelines evolve, we’ll do our best to update the information below.


Twitter

Quick Link: Help Center

Community Guidelines: You can access all of Twitter’s policies relevant to online behavior at Twitter’s Rules and Policies page.

Reporting Mechanisms: You can report an account, a tweet, a DM, a list, or a conversation, and you should receive an emailed copy of your report. Twitter now offers a range of enforcement options.


Facebook

Quick Link: How to Report

Community Guidelines: Facebook’s Community Standards describe how the platform responds to threats, bullying, violent content, and exploitation—with one exception: “Sometimes we will allow content if newsworthy, significant, or important to the public interest—even if it might otherwise violate our standards.” (Twitter has a similar policy.)

Reporting Mechanisms: You can report profiles, newsfeed posts, posts on your profile, photos, videos, DMs, pages, groups, ads, events, and comments. The fastest way to report online abuse is to click “Report” in the top right-hand corner of a Facebook post.


Instagram

Quick Link: Help Center

Community Guidelines: Instagram lists comprehensive Community Guidelines and offers advice on dispute engagement and resolution, suggesting that users turn to family and friends for support and advice.

Reporting Mechanisms: You can report abusive messages, posts, comments, and accounts. Instagram states that it will review reported content and remove anything deemed to contain credible threats or hate speech. Writers and journalists who use Instagram for professional purposes should note Instagram’s policy of allowing “stronger conversation” around users who are often featured in the news or are in the public eye due to their profession.


TikTok

Quick Link: Help Center

Community Guidelines: TikTok lists comprehensive Community Guidelines that define hateful and abusive behavior, including hateful ideology, sexual harassment, doxing, hacking, and blackmail.

Reporting Mechanisms: You can report abusive messages, posts, comments, and accounts. TikTok states that it is committed to maintaining a safe, positive, and friendly community. TikTok users have the option to report comments individually or in bulk.

Safety Center: The Safety Center offers TikTok users well-being guides, sexual assault resources, and information on eating disorders and online challenges.


YouTube

Quick Link: Policies and Safety

Community Guidelines: YouTube’s Community Guidelines warn against posting videos containing content that is hateful, sexual, violent, graphic, dangerous, or threatening.

Reporting Mechanisms: You can report a video, playlist, thumbnail, comment, live chat message, or a channel. Screenwriters, spoken-word poets, or other YouTube-friendly writers who find themselves targeted by hateful content or commentary have a few options for dealing with such abuse:

  • Report content that violates community standards (which can result in a strike against the posted material, giving the original content poster time to review and contest the content removal)
  • Report an abusive user via YouTube’s reporting tool

YouTube also offers detailed information on reporting videos as well as safety tools and resources for teens and parents, privacy settings, and self-injury on its Policies and Safety page.


WhatsApp

Quick Link: Security and Privacy

Community Guidelines: WhatsApp’s Terms of Service prohibit certain activities, such as submitting content (in your status, profile photos, or messages) that’s “illegal, obscene, defamatory, threatening, intimidating, harassing, hateful, racially or ethnically offensive, or instigates or encourages conduct that would be illegal, or otherwise inappropriate.”

Reporting Mechanisms: On WhatsApp, you can “Report” or “Report and Block” an individual account, or “Report and Exit” a group. If you “Report” an abuser, they can still send you texts, messages, or voice notes. If you “Report and Block,” your chats with the abuser will be deleted, so you might want to take a screenshot beforehand in order to document your harassment. When you report abusive content, WhatsApp advises users to “provide as much information as possible.”


Snapchat

Quick Link: Report Abuse

Community Guidelines: Snapchat’s Community Guidelines prohibit bullying and harassment of any kind. It is a violation of guidelines to share another person’s private information and/or Snaps of people in private spaces without their knowledge and consent. Hate speech—including content that demeans, defames or promotes discrimination or violence on the basis of race, caste, ethnicity, national origin, religion, sexual orientation, gender identity, disability or veteran status, immigration status, socio-economic status, age, weight or pregnancy status—is prohibited.

Reporting Mechanisms: You can report abuse on Snapchat, including harassment, bullying or other safety concerns. In the United States, Snapchat has partnered with the Crisis Text Line to provide additional support and resources to Snapchatters by texting the word “KIND” to 741741 to chat with a trained crisis counselor; this service is free and available 24/7.


LinkedIn

Quick Link: Report inappropriate content, message, or safety concerns

Community Guidelines: LinkedIn’s Professional Community Policies don’t “allow bullying or harassment. This includes abusive language, revealing others’ personal or sensitive information (aka ‘doxing’), or inciting or engaging others to do any of the same.” LinkedIn’s Transparency Center offers guidance on identifying abuse and how to better detect and report spam, phishing, scams, and fake, inaccurate, or misleading programs.

Reporting Mechanisms: When you report another LinkedIn member’s content, they won’t be notified of who reported them. After reporting them, you will no longer see the content or conversation that you reported in your feed or messaging inbox. LinkedIn “may review the reported content or conversation to take additional measures like warning or suspending the author if the content is in violation of [their] Terms of Service. In some cases, you’ll receive more information on the outcome via email. [You can also] manage the updates you receive about your reported content from your Settings.”


Signal

Quick Link: Submit a request

Community Guidelines: Signal Terms of Service and Privacy Policy

Reporting Mechanisms: Signal has developed new features to deal with unwanted messages. For all conversations initiated by senders outside your contacts, profile photos are now blurred until you explicitly tap on the photo. Signal also introduced message requests, so you can quickly see more contextual information before accepting, deleting, or blocking messages from someone who isn’t in your contacts. This makes it possible to block and report spam in one click.

When you tap “Report Spam and Block,” your device sends both the sender’s phone number and a message ID to Signal’s servers. If the same phone number is reported multiple times or shows signs of being used for automated messages, Signal will require a “proof of humanity” challenge (e.g., a CAPTCHA), blocking additional messages from being sent until the challenge is completed.


Twitch

Quick Link: How to File a User Report

If you come across a Twitch broadcaster or user who you believe has violated Twitch’s Terms of Service or Community Guidelines, you can file a report.


Substack

Quick Link: Report Notes and Profiles

Community Guidelines: Terms of Use

Reporting Mechanisms: When reporting Notes on Substack, users are given the option to explain why they are reporting the content. When reporting Profiles, users must select one of the following reasons: spam, impersonation, or content violation.


Medium

Quick Link: Report Posts and Users

Community Guidelines: Member Content Guidelines and Medium Rules cover a range of behaviors Medium does not allow.

Reporting Mechanisms: Though Medium doesn’t vet or approve posts before they are published, the platform states that it doesn’t tolerate bullying, doxing, or harassment. Medium’s rules, which are tracked on GitHub as they evolve, are meant to promote fair engagement between users. Users can submit a complaint electronically to Medium requesting further review.


Goodreads

Quick Link: Reporting Profiles, Comments, Reviews, and Groups

Community Guidelines: Community Guidelines, Group Moderator Guidelines, Review Guidelines, and Author Guidelines, among others, cover the various behaviors that Goodreads does not allow and address the many account types and uses the site offers. See their full Terms of Use here.

Reporting Mechanisms: You can report User Profiles, Author Profiles, Comments, Reviews, Community Questions and Answers, Groups, Messages, Photos, and Status Updates. You can also file a report if you wish to provide more context. Goodreads asks that you include a URL, detailed description, and screenshot along with your submission.


WordPress

Quick Link: Report a Site

Community Guidelines: The site’s guidelines for best use can be found here.

Reporting Mechanisms: WordPress allows users to report objectionable content by submitting an online form describing the content as spam; mature, abusive, or violent content; copyright infringement; or suggestive of self-harm, as long as the site is hosted by WordPress.com. (Self-hosted sites that merely run WordPress software don’t fall under this category.)


Amazon

Quick Link: Online submission form

Community Guidelines: Community Guidelines

Reporting Mechanisms: For writers who self-publish to Amazon or depend on the platform for constructive reviews of their work, Amazon’s “customer review” sections pose a particular challenge. Numerous writers surveyed by PEN America reported encountering hateful online trolls in customer reviews and felt that the platform was unresponsive when it came to removing such abusive commentary. Amazon users can report online harassment via:

  • The “Report Abuse” tab located in the lower right-hand corner of customer reviews
  • Amazon’s online submission form, where users can report incidents that have violated Amazon’s community guidelines

Do you want to report on a platform not mentioned here?

Check ADL’s page for information about how to report on platforms like Activision/Blizzard, Discord, Nextdoor, Reddit, and more.