When journalists, activists and writers are attacked online for their work, their employers—including newsrooms and publishers—have a responsibility to take the abuse seriously and help address it. Below we share best practices for employers to protect and support employees facing online abuse.

Writers, journalists, and activists rely on the internet and social media to develop, publish, and promote their work, which also benefits their editors, publishers, and newsrooms. Being active and visible in online spaces, where so much of public discourse now takes place, exposes content creators—particularly women and those from marginalized communities—to online abuse. Those targeted by online abuse often suffer in isolation, partly because there’s still a great deal of stigma and shame associated with harassment, online or off. Many people who are disproportionately attacked online have also been marginalized in other spaces, so they may have legitimate concerns about being dismissed or punished. Employers have a responsibility to create a culture where employees feel valued, protected, and supported. The good news is that there are many steps employers can take to support their teams in preparing for, responding to, and mitigating the damage of online abuse.

Understanding that the support you can offer depends on your role, we break down advice for managers to support their reports and, separately, for those in leadership roles. We also include information on protocols and policies that employers can adopt.

Jump to: 5 Steps for Managers | 10 Steps for Leadership | Protocols and Policies

See PEN America’s article in Harvard Business Review: What to Do When Your Employee Is Harassed Online?

5 Steps for Managers

1. Reach out and listen

Proactively invite employees to discuss online abuse, check in, and listen closely to their needs. Keep in mind that some individuals—depending on their identity or life experience—may not feel comfortable calling attention to their situation for fear of retaliation or increased scrutiny, so be discreet. It is critical to ensure that employees facing online abuse are engaged in every decision that could affect them, particularly with regard to public disclosure and interactions with law enforcement.

  • Private check-ins: if you witness or hear about an employee experiencing online abuse, reach out privately to schedule a check-in, leaving the door open for the targeted employee to invite a trusted colleague or HR representative.
  • “Health checks” during meetings: create space during cross-departmental or team meetings for employees to raise concerns or share experiences of online abuse. For newsrooms, the International Press Institute (IPI) recommends that managers raise online harassment in editorial meetings.

2. Assess the risk

Work closely with targeted staff to gauge threats to physical safety (for themselves, their family, and other staff) and risks for the organization. It may be necessary to consult in-house security, bring on external security, engage law enforcement, and/or provide temporary relocation. Again, always involve impacted staff in every decision made with regard to their security. IPI offers detailed guidance on risk assessment for newsrooms.

3. Document and delegate

Offer targeted employees a temporary respite from seeing and dealing with online abuse by asking the social media team or a trusted colleague to offer support with monitoring, documenting, reporting, blocking, and muting. Some social media platforms enable certain forms of “delegated” access, which allows a colleague to help manage an account without sharing the account password.

4. Share policies, protocols, and resources

Make sure employees are aware of and can easily access policies, protocols, and resources related to online abuse, digital safety, and social media. For newsrooms, IPI suggests creating a designated section focused on the aforementioned subjects within an organizational intranet or staff handbook.

5. Escalate

From social media to email and messaging apps, most digital platforms have mechanisms to report online abuse. But sometimes these mechanisms fail. As an individual, it can be difficult to get a platform’s attention, but organizations often have direct contacts at tech companies. If an employee has reported abusive content that clearly violates a platform’s terms of service and is nevertheless unable to get this content removed, escalating the issue directly with tech company contacts can make all the difference. Consider also reaching out to a digital security helpline run by organizations such as SMEX and Access Now to get in touch with a digital security expert who may also have contacts in big tech.

10 Steps for Leadership

1. Assess the scope

Survey employees to figure out: how often they are experiencing online abuse and on which platforms; what kinds of tactics they’re being subjected to; the emotional, psychological, and professional toll; and how the organization can offer support. You may be surprised to learn just how many employees are impacted by online harassment.

PEN America has developed sample questions, which organizations can use or adapt for an informal and anonymous survey.

2. Acknowledge the harm

Leadership needs to clearly communicate that they take the issue of online abuse seriously and expect managers and colleagues to do the same. This is crucial for creating an environment in which employees feel safe and supported enough to come forward when they are being harassed online. This commitment can be communicated via all-staff emails and meetings, acted upon by implementing some of the steps outlined here, and reinforced by the ways in which managers and HR react to individual cases.

3. Create a task force

Bring together a task force of staff with diverse backgrounds, experiences, and skills to distribute the burden and responsibility of addressing online abuse. Putting in place rigorous protections and a robust response requires multiple skill sets and areas of expertise; it can also be time-consuming and emotionally demanding. Ideally the task force will include representatives from: senior leadership, security, IT, social media/audience engagement, and editorial. IPI provides more detailed guidance for newsrooms.

4. Update protocols and policies

When targeted by online abuse, employees sometimes have no idea who to turn to or what to do. Having clear protocols and policies—on how to navigate abuse, how to tighten digital security, and how to use social media professionally—can make employees feel safer and more empowered. Employers need to fold these policies and protocols into onboarding and employee handbooks, post them on intranets and Slack, and encourage managers, HR, IT, and social media staff to reinforce them. See below for an overview of relevant policies and protocols, with examples.

5. Develop a reporting mechanism

Create an internal reporting mechanism so employees can safely and privately report online abuse. The aforementioned task force can clarify what kinds of abuse employees should report, regularly monitor the reporting mechanism, and ensure prompt follow-up with resources and support. A reporting mechanism can help employers identify patterns in abuse (multiple staff might be dealing with the same stalker) and assess threats (distinguishing between someone being a jerk vs. an abuser with a history of violence). It also gives employees a place to turn if they’re hesitant to speak to their manager. A reporting mechanism can be as simple as a designated email account or Slack channel, or as sophisticated as a Google form that automatically populates a database of incidents—IPI offers more detailed guidance for newsrooms. Whatever reporting mechanism you decide to use, make sure you are taking steps to secure your online accounts by following the instructions in the Prepare section of our Manual.
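To illustrate the pattern-spotting value of a structured incident log, here is a minimal sketch in Python. The record fields and the `recurring_abusers` helper are hypothetical, not a standard schema; the point is that once reports are collected in one place, even a simple script can surface when multiple staff are being targeted by the same account:

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

# Hypothetical incident record for an internal reporting mechanism;
# field names are illustrative, not an established standard.
@dataclass
class Incident:
    reported_by: str    # staff member filing the report
    platform: str       # e.g. "Twitter/X", "email", "comments section"
    abuser_handle: str  # account responsible, if known (may be empty)
    summary: str        # brief description of the abuse
    reported_on: date = field(default_factory=date.today)

def recurring_abusers(incidents, min_reports=2):
    """Surface handles that appear across multiple reports — a possible
    sign that several staff are dealing with the same harasser."""
    counts = Counter(i.abuser_handle for i in incidents if i.abuser_handle)
    return [handle for handle, n in counts.items() if n >= min_reports]
```

A handle flagged by `recurring_abusers` is exactly the kind of pattern a task force would want to escalate, since coordinated or repeat abuse warrants a different response than a one-off hostile comment.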

6. Offer training

Although most employees rely on digital tools for work (email, messaging, search engines, social media), few receive adequate training on how to do so safely and professionally. Empower staff and freelancers with the knowledge that there are concrete steps they can take to protect themselves and respond when under attack. Training topics can include: digital security (including anti-hacking and anti-doxing strategies), online abuse self-defense, and bystander intervention. For more information about PEN America’s training program for publishers, newsrooms, writers, and reporters, see here.

7. Offer concrete resources and services

There are robust tools, resources, and services that can help employees prepare for and mitigate the damage of online abuse, but many of these cost money. Consider covering or subsidizing the following:

  • Password managers (such as 1Password, Bitwarden and Dashlane): help employees protect their own and the organization’s accounts, including from hacking and impersonation;
  • Data removal services (such as DeleteMe, Kanary, and Optery): in the US and other countries with lax data privacy laws, these services comb the web and remove private information, such as home addresses and cell phone numbers, which can protect against doxing, hacking, and impersonation;
  • IT support: in-house or third-party experts can help targeted employees tighten their digital security;
  • Mental health care: counseling can help address the toll online abuse can take on mental and physical health:
    • In the US and UK, ensure employees have access to professional counseling via their health insurance plan, Employee Assistance Program (EAP), and/or apps.
    • Help employees engage in anxiety-reducing practices with meditation and self-care apps. NOTE: Verify the privacy protections of any mental health app you provide your employees.
  • Legal counsel: Leveraging the law to mitigate online abuse can be an uphill battle, but there are specific forms of harassment—such as cyberstalking, non-consensual intimate imagery, and true threats of violence—that can be addressed through the judicial system, depending on your country. A lawyer can help determine whether there are legal remedies available. The Thomson Reuters Foundation collaborated with UNESCO, the International Women’s Media Foundation (IWMF), and the International News Safety Institute (INSI) to develop practical and legal tools for journalists, media managers, and newsrooms to respond to online harassment, covering the legal rights of the press in 13 countries. If you are based in the US, go to the Legal Section of this Manual.
  • Free guidance: including from PEN America, Trollbusters, Committee to Protect Journalists, International Press Institute, and Right to Be.

8. Moderate content

If your organization expects staff to express themselves via articles or organizational social media channels—that is, on platforms allowing for public commentary—you can protect them from harassment by tightening content moderation. While fostering open debate is important, it is also fair to define what you consider to be abusive and decide how such comments will be dealt with. Media organizations that have overhauled how they moderate comments, such as the Wall Street Journal and BBC Sports, offer a model. See Policies and Protocols below for more detailed guidance.

9. Encourage peer support networks

Online abuse is intended to be profoundly isolating, which is why it’s so important to give employees a safe space to vent, share experiences, and exchange strategies. Encourage staff to band together and create a peer support group, in the form of a Slack channel, chat group, or mentoring program. Ensure staff involved with peer support have adequate time and access to leadership to apply their hard-earned knowledge to help improve policies, protocols, and resources across the organization.

Given security concerns in some regions, not everyone will feel comfortable joining a peer support network. This is why you can also encourage staff members to build their own supportive cyber communities outside of work. One international example is Reuters, which established a peer support network in 2015 as part of its Global Trauma Program. The program provides 24/7 therapy services to Reuters staff and contractors struggling with work/life stress, anxiety, depression, and trauma, including distress caused by online harassment.

10. Issue a statement of support

If staff are being harassed in response to their work, odds are high that the abusers want to push them out of professional spaces, intimidate them into self-censorship, or even damage their employer. The power dynamics between a lone target and an abusive (often coordinated) mob are extraordinarily uneven. Let staff know you have their backs by taking a stand against hate and harassment online. For examples of strong organizational support, see The Verge’s statement on attacks against Sarah Jeong and NBC’s statement on attacks against Brandy Zadrozny.

Policies and Protocols

Social Media Policy

If you expect employees to have a social media presence, you need a social media policy. In the media industry, many of these policies are prescriptive and prohibitive, focusing exclusively on what employees should not do on social media. A responsive and inclusive policy also offers guidance on how staff can navigate abuse. A social media policy should address the following:

  • The code of conduct for safe and appropriate work-related social media usage.
  • Public disclosure of personal views, including political views.
  • Retweeting or sharing other social media content.
  • How to respond to hate and harassment, including organizational expectations.

For sample social media policies that address online abuse, see NPR’s Ethics Handbook and The New York Times’ Social Media Guidelines for the Newsroom.

Digital Security Protocol and/or Policy

Employees rarely receive guidance on how to safely and privately use digital tools professionally, including email, messaging, search engines, and social media. Establishing a digital security protocol for employees can help prevent and mitigate certain kinds of online harassment. Your protocol or policy—which you should consider making mandatory—should spell out the security practices employees are expected to follow.

Headline-Writing Policy

All publications rely on click-worthy headlines to generate audience interest and drive readers to their websites. But when headlines are written to be deliberately inflammatory or divisive, it’s the writer of the article—not the editor who selected the headline—who becomes the target of vicious online harassment. A clear headline-writing policy, which invites the writer’s input and takes into account their history with online harassment, can go a long way toward preventing harm.

Comment Moderation System & Community Guidelines (see Step #8 above)

If your organization expects staff to express themselves on platforms allowing for public commentary, you can protect them from harassment by creating clear community guidelines and enforcing them. Media organizations that have overhauled how they moderate comments, such as the Wall Street Journal and BBC Sports, offer a model. Here are some ideas:

  • Create clear community guidelines and post them publicly at the top of every comment section (for an example, see the Wall Street Journal’s Community Rules). There is evidence that posting moderation policies at the top of a comment thread can prevent certain kinds of online harassment from happening and also increase audience engagement.
  • Limit participation in comments sections to members/subscribers.
  • Limit the number of articles your audience can comment on and/or the amount of times they can comment.
  • Rigorously enforce guidelines to minimize abuse. Well-resourced organizations should bring on trained content moderators, while organizations with limited resources can experiment with a system in which coworkers moderate comments on one another’s articles rather than their own.
  • Prioritize transparency and communicate regularly and clearly with your audience to ensure they understand how and why your guidelines are being enforced.

Content moderation tools that use machine learning—such as the Coral Project and Perspective—can help human content moderators enforce policies more effectively at scale. For research on the efficacy of Coral in reducing toxicity in comments, see here.
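As a sketch of how such a tool can plug into a moderation pipeline, the snippet below builds a toxicity-scoring request modeled on the shape of Google’s Perspective API and routes high-scoring comments to a human moderator. The endpoint constant, the `TOXICITY` attribute, and the 0.8 threshold are assumptions to adapt to your own pipeline, not a definitive integration:

```python
# Sketch of ML-assisted comment pre-screening, modeled on the request and
# response shape of Google's Perspective API. Endpoint, attribute names,
# and threshold are assumptions; consult the API docs before relying on them.

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_analyze_request(comment_text, language="en"):
    """Build the JSON body for a toxicity-scoring request."""
    return {
        "comment": {"text": comment_text},
        "languages": [language],
        "requestedAttributes": {"TOXICITY": {}},
    }

def needs_human_review(response, threshold=0.8):
    """Route a comment to a human moderator when the model's toxicity
    probability meets or exceeds the threshold; lower-scoring comments
    can publish automatically under the community guidelines."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold
```

The design point is that the model only triages: borderline comments still reach a human who applies the published community guidelines, which keeps enforcement transparent and accountable.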

Freelancer/Contractor Policy

If a freelancer becomes the target of online harassment as a result of something they’ve published or created for your institution, they deserve the same protection and support available to staff. How much support freelancers should be offered, and for how long, should be spelled out in a policy. Does an employer owe support to its freelancers for only three months after a contract ends? Six months? Three years? Should your company file police reports on behalf of a targeted writer if that writer is subjected to death threats in connection to their work? Should your institution be responsible for securing safe housing for a targeted writer? These are questions only your organization can answer, but they are absolutely worth asking. Having a freelancer policy in place, even an informal one, can help you evaluate what you owe a freelancer who is turning to you for support during episodes of online harassment. Here are a few general guidelines to follow:

  • Set up a method for evaluating the severity of a freelancer’s online harassment episode, and whether or not organizational intervention can help.
  • Set out a reasonable period of time your internal online harassment policies should apply to freelancers after their contracts expire.
  • Take a look at this excellent resource from the Dart Center for Journalism and Trauma and the ACOS (A Culture Of Safety) Alliance, Leading Resilience: A Guide for Editors and News Managers Working with Freelancers Exposed to Trauma, which outlines ways to provide support before, during, and after assignments.

For more guidance on the policies above and other policies relevant to protecting staff from online abuse, check out the International Women’s Media Foundation’s Guide to Protecting Newsrooms and Journalists Against Online Violence.

We encourage anyone working in the fields of literature or journalism to share the ideas above with your supervisors, HR departments, professional networks, and colleagues. These guidelines are not intended as instructions, but rather as a starting point for organizations to set up their own protocols, policies, and processes to address online abuse. 

PEN America has developed these guidelines through extensive experience leading training for thousands of individuals targeted by online abuse and working with over a dozen media organizations and publishers to improve policies and protocols to protect and support staff, in North America and internationally, particularly in Latin America, West and East Africa, the MENA region, and Europe. We are grateful for the resources outlined in the International Press Institute’s Newsrooms OnTheLine project and the insights offered by Reuters, The New York Times, The Guardian, Politico, Vox, and many other media organizations working to protect and support staff facing abuse.