
If your institution relies on the work of writers or journalists (including freelancers), the following guidelines can help you determine a plan for safeguarding your employees by offering individualized support.

We encourage anyone working in the fields of literature or journalism to share this resource with their supervisors, HR departments, professional networks, and colleagues.


For most of the internet era, writers’ online activities, including any harassment they incur, have been considered beyond the purview of the institutions that employ them—especially because many kinds of online harassment are so difficult to prevent. Some employers see online harassment as stemming from an employee’s personal use of digital spaces, and thus argue that it’s inappropriate to intervene. Others might see the value in offering employees some kind of institutional support, but are concerned about exposing the company to unforeseen liabilities should they adopt formal policies for addressing online abuse.

The ubiquity of digital publishing and social networking means that employers are increasingly leaning on their writers to use internet platforms to the company’s advantage. Whether your institution works with award-winning poets, freelance journalists, best-selling authors, or little-known bloggers, it’s likely the writing you publish exists somewhere online. Many editors and publishers now insist that their writers be active on social media during book releases and promotional periods and/or moderate the comments attached to the articles they write.

So when a writer is attacked online in a way that intersects with their professional life, employers have a responsibility to take the harassment seriously, listen to the needs of their writers, and, ideally, offer or help develop a plan of action for addressing the abuse.

Luckily, there are a number of steps employers can take not only to support their writers during episodes of online harassment, but also to help preempt cyberattacks and mitigate certain kinds of abuse.

The following guidelines have been developed based on PEN America’s 2017 Online Harassment Survey; interviews with newsrooms, universities, and individual journalists and researchers versed in issues of online harassment; and insights provided by The Guardian, The Coral Project, The New York Times, and other publications that continue to be publicly thoughtful about how to support journalists and writers facing online hate.

We hope that your institution will take these guidelines seriously. A writer who doesn’t feel safe or supported may decline assignments, self-censor, or even cease to write. These are outcomes that advocates of free expression and institutions trafficking in the written word cannot and should not abide.

Quick Tips

  • Take online harassment seriously and encourage your colleagues and subordinates to do the same. Stand in solidarity with your writers, be sensitive and empathetic, and encourage company-wide discussion about online harassment and how it can be stopped.
  • Reach out to your employees if you see them being targeted online—you don’t need to wait for them to come to you. Keep in mind that some individuals—based on identity or personal experience—won’t feel as comfortable calling attention to their experience of online harassment and might fear retaliation or increased scrutiny, so be discreet with your offers of help.
  • Involve targeted employees in every single decision you make on their behalf, especially if you’re considering contacting law enforcement. Writers of certain demographics and journalists who cover controversial subjects may have very good reasons for not wanting to involve law enforcement in episodes of online abuse.
  • Minimize the target’s exposure to online harassment by enlisting outside comment moderators, encouraging blocking and muting, publishing noninflammatory headlines, and encouraging senior leadership to intervene in online abuse as appropriate.
  • Encourage counterspeech efforts among your employees and your community at large. Your readers are a great resource: Encourage them to condemn online harassment and to promote civility in the comment sections to which they contribute.
  • Celebrate institutional diversity, especially in positions of leadership, and invite diverse voices to serve as decision makers when it comes to crafting company-wide online-harassment policies. Keep in mind that people of different demographics may have different experiences and points of view when it comes to interacting in online spaces and with law enforcement, which is why the perspectives of women, people of color, and members of the LGBTQ+ community are especially valuable.
  • Incorporate research into your policy making. In addition to any internal research your organization undertakes, look into what other research is available on the subject of online harassment. For example, researchers at Stanford and Cornell have discovered that people are more likely to troll in the evenings and early in the week. This information might serve to inform policies regarding when and how you open up comment threads to your readers.

Full Guidelines for Supporting Your Writers and Journalists During Online Harassment

1. Acknowledge, as an institution, that online harassment is a real problem with real consequences.

Too often, online harassment is dismissed as not being a “real” problem. Targets are told to toughen up and made to feel foolish and irrational, which in turn makes them more vulnerable to online abuse. The fact is, online harassment can negatively impact one’s professional life, personal life, and health. (To learn more about the impact of online harassment on writers and journalists in particular, see the results of PEN America’s 2017 Online Harassment Survey and read through the real-life stories in this Field Manual.)

It is especially important that executive and senior leadership fully comprehend the impact of online harassment. Their knowledge of the issue will have the most impact on company policy and culture. If your institution is not yet ready to tackle this issue through formal policy and documentation, then, as a first step, request that your executive and senior teams become well versed in what online harassment is and the forms it can take. PEN America’s Defining “Online Harassment” is a good place to start.

2. Assess the scope of the problem inside your own newsroom/company/publication.

Conduct an internal, anonymous, company-wide survey to generate important data about how online harassment impacts your organization. Include not just your editorial employees, but also those who work in marketing, human resources, accounting, etc. You may be surprised to learn just how many lives are impacted by online harassment. The survey should seek to cover a variety of topics, including:

  • The frequency with which online harassment occurs.
  • The emotional, psychological, personal, and professional impact on targeted individuals.
  • Examples of the kind of online harassment your employees experience.
  • The ways in which your employees think your institution can and should help.

Once this data has been processed and analyzed, it will give you a sense of the scope and severity of online harassment. Equipped with this information, your institution can then strategize how best to implement practices and policies that will help your employees both preempt and respond to certain kinds of online abuse.

3. Draft a set of internal policies and practices that address the concerns raised in your survey.

These policies are likely to cover some combination (if not all) of the following categories:

  • Social Media Usage Policy: As writers and journalists, your employees are likely expected (if not required) to engage in online discourse and/or maintain a social media presence. Having thoughtful policies in place can help ensure safe social media usage while also taking your employees’ needs into consideration. Newsrooms dedicated to remaining impartial on the topics they cover tend to be especially mindful of how their employees interact on social media. Social media policies generally address the following:
    • The code of conduct for safe and appropriate work-related social media usage.
    • Policies regarding an employee’s public disclosure of personal views on social networking platforms, especially their political views.
    • Policies on retweeting or sharing another user’s work.
    • Policies regarding how employees respond to ad hominem attacks versus legitimate criticisms issued over social media.

In addition to the general usage policies outlined above, your social media policy should also encompass a variety of resources and responses that can be invoked during episodes of online abuse. Possibilities include:

    • Inviting targeted employees to disengage from social media for a period of time.
    • Inviting or encouraging targeted employees to employ blocking and muting tools against abusive users.
    • Appointing a department head or designated coworker to monitor a targeted employee’s social media accounts so that the target can take a break from the abuse.
    • Providing a targeted employee access to a security officer or member of personnel trained in online security who can help with the following:
      • Confirming that the target has effective cyber-safety measures in place, including but not limited to appropriate privacy settings and encryption (when necessary).
      • Determining whether or not threats made over social media should be escalated to law enforcement (with the target’s permission). See more information on safety and security below.

Note to Newsrooms

When it comes to social media use, some newsrooms have strict policies about the ways in which journalists express their views and respond to online attacks. The AP’s social media policy, for example, encourages staffers “to be active participants in social networks while upholding our fundamental value that staffers should not express personal opinions on controversial issues of the day.” NPR has a strict de-escalation policy with regard to how its employees communicate via social media: “We shouldn’t SHOUT IN ALL CAPS when we’re angry. We shouldn’t take the bait from trolls and sink to their level.” These policies are meant to encourage employees to “think before they tweet,” which in certain scenarios may help to de-escalate online harassment before it goes too far.

  • Online Safety and Security Policy: Establishing a cyber-safety handbook for employees can be an extremely valuable tool for preventing and mitigating certain kinds of online harassment. Your handbook might include information about:

Physical Security: In rare situations, a writer or journalist facing online harassment could be at risk of physical harm. If your institution has security personnel, there should be a security officer designated to handle issues of online safety from a physical-security standpoint. This individual can be enlisted to help targeted writers and senior leadership assess whether or not an online threat is “true” and determine a plan of action to support and protect targeted writers. In some scenarios, a security officer may determine that a threat should be escalated to law enforcement, or that the company should hire a private security detail for a period of time. Targeted writers who have the support of their institutions are more likely to be taken seriously by law enforcement. Always involve targeted writers in every decision being made in regard to their security, as some individuals may have legitimate reasons for not wanting to involve law enforcement.

If your institution doesn’t have security personnel, consider training a member of senior leadership in online safety who can be called upon to help assess an online threat. In the rare instance in which a targeted employee doesn’t feel that it’s safe to go home, your institution should assist in securing safe housing for the targeted employee and/or pay for that employee to spend the night in a hotel.

Pro Tip

If an employee believes they may be a target for doxing, offer to pay for them to have their online information scrubbed via an online service like DeleteMe or Privacy Duck.

  • Comment Moderation Policy: For digital news sources and content-hosting platforms that incorporate comments, there should be clear community guidelines and moderation policies in place. Community guidelines set the tenor of a conversation and lay out clear standards about what behaviors will and will not be tolerated. If a commenter is booted from a conversation thread or finds their posted comment removed, it should be evident from the community standards why this has occurred. Evidence shows that posting moderation policies at the top of a comment thread can even prevent certain kinds of online harassment from happening and also increase audience engagement.

One common error many publications make is asking or requiring writers to moderate their own comment threads. While, in theory, this seems like an efficient use of resources, in practice it exposes writers to threats of violence, noxious hate speech, and demoralizing ad hominem attacks—especially when the subject matter of the published article is particularly controversial or when the writer belongs to a certain demographic. Platforms that wish to make comment threads a part of audience experience should consider creating a schedule wherein coworkers moderate each other’s comment threads rather than their own, so that writers who are more likely to be subjected to online harassment are spared the ordeal of witnessing direct threats and hate speech made against them. Well-resourced publications might even consider investing in personnel for whom moderation is a full-time job. (Bear in mind, this can be an ugly and exhausting career, and additional wellness resources may be required to support this particular kind of employee.)

Another option some publications have started to pursue is moderation AI. The New York Times now incorporates a machine-learning technology called Moderator into its moderation practices, while The Coral Project’s moderation technology, Talk, aims to reshape moderation practices to create safer and more productive online conversations. Talk is already in use by a number of national newsrooms, including The Washington Post and The Wall Street Journal, and several regional newsrooms as well.

  • Headline-Writing Policy: All publications rely on snappy headlines to generate audience interest and drive readers to their websites—indeed, for most publications, click-worthy headlines are the company’s bread and butter. But when headlines are written to be deliberately inflammatory or divisive, it’s the writer of the article—not the editor who selected the headline—who becomes the target of vicious online harassment. A clear headline-writing policy that invites the input of a writer and takes into consideration the writer’s history with online harassment can go a long way toward preventing harmful exposure.
  • Freelancer/Contractor Policy: This is a tricky one and really comes down to one question: What do institutions owe their non-staff writers? Freelancers contribute important material, feed the lifeblood of the company, and can become loyal, dependable members of the institutions they serve. As many business models evolve away from teams of staff writers toward cheaper, more nimble contract work, freelancers are increasingly relied upon to generate large volumes of material without institutional benefits.

If a freelance writer is subject to online harassment as a result of something they’ve written for your institution, they arguably deserve the same treatment and security procedures that a staff writer would receive. But how much support they should be offered, and for how long, is something your institution will need to decide for itself. In many ways, material published online is evergreen: It might surface in a Google search years later, subjecting a writer to online harassment all over again. Does an employer owe support to its contractors and freelancers for only three months after a contract ends? Six months? Three years? Should your company file police reports on behalf of a targeted writer if that writer is subjected to death threats in connection to their work? Should your institution be responsible for securing safe housing for a targeted writer? What is a targeted contract writer owed, and what is your institution capable of offering them, either formally or informally?

These are questions only your institution can answer, but they are absolutely worth asking. Having some kind of contract-worker policy in place, even an informal one, can help you evaluate what you owe a freelancer who is turning to you for support during episodes of online harassment. Here are a few general guidelines to follow:

    • Set up a method for evaluating the severity of a contractor’s online-harassment episode, and whether or not it merits institutional intervention.
    • Set a reasonable period of time during which your internal online-harassment policies will continue to apply to freelance workers after their contracts expire.
  • Wellness Policy: Online harassment can take an immense toll on one’s wellness and mental health, impacting an employee’s professional productivity and even their decision to write on certain topics and events. If your institution has the resources, consider implementing the following recommendations, which can help to improve mental wellness for employees targeted by online harassment:
    • Pay for a company-wide subscription to an app like Talkspace™ so your employees have access to mental health professionals during online harassment.
    • Pay for a company-wide subscription to an app like Headspace™ or Sanvello™ so your employees can engage in anxiety-reducing practices and monitor their mental states during moments of harassment-related depression and anxiety.
    • Designate an empty office as a safe place for employees to go when they’re overwhelmed by online harassment and need a break.