Social media companies couldn’t exist in their current form without content moderation. But while these jobs are essential, they’re often low-paid, emotionally taxing, and extremely stressful — they require exposure to horrific violence, disturbing sexual content, and generally the worst of what we see (or don’t see) online. Do they have to be? Sarah T. Roberts, faculty director of the Center for Critical Internet Inquiry and associate professor of gender studies, information studies, and labor studies at UCLA, details the evolution of this work, from patchwork approaches to in-house moderators and contractors to the current prevailing model, where generalist contractors work in call center–like offices. There are steps companies could take to improve this work, including providing better technology for moderators as well as better pay and more psychological support. But improvement, at present, is more likely to come from worker organizing and collective demand for better conditions than from the firms that employ the workers or the companies that need the moderation.

•••

When people talk about content moderation, they typically reference debates about free speech and censorship. The people who actually do the moderating, however, tend to be an afterthought. Their work is largely invisible — hidden away in call center–type offices — as they filter the deluge of user-generated content on social media platforms. Even though users don’t see them, these moderators are the internet’s frontline workers, facing the worst of human nature one disturbing picture or video at a time. Without them, social media companies — and their ad-driven business models — likely couldn’t exist as they do now. Yet in addition to the constant exposure to unpleasant content, these jobs tend to be poorly paid, contingent, and full of pressure to perform quickly and accurately. But do they have to be so bad?

Sarah T. Roberts, faculty director of the Center for Critical Internet Inquiry and associate professor of gender studies, information studies, and labor studies at UCLA, is an expert on content moderation work. She talked to HBR about why these jobs are often terrible by design, what alternative models might look like, and how we might make these essential frontline jobs better. This interview has been edited.

When did content moderation become a job, and how has it evolved?

It started in the early days of social media; even MySpace recognized the need for a professional moderation staff in the early aughts. Around 2010, sites like Facebook were in their relative infancy, as was the idea that user-generated content could fuel engagement. But when you open the gate to let anyone upload pictures or videos, you get the gamut of human self-expression. Some of the results are inappropriate, disturbing, or illegal and certainly could pose a problem for a firm that’s concerned about liability or simply brand management. Companies initially used a patchwork approach to moderation: They had people in-house who were doing the work — both contractors and employees — but they also outsourced it to third parties, including other firms and digital piecework platforms, like Mechanical Turk, to meet demand.

When I was doing my doctoral degree at the University of Illinois, I read an article about people in rural Iowa who were working in what sounded like a call center — but instead of responding to service inquiries, they were reviewing whether the content that users were posting on social media conformed to a given site’s guidelines. At the time, I’d been using the internet for 20 years, and I’d never considered that a human intermediary might be making these decisions as a job, for pay. It dawned on me that if your job is to look for and remove things that disturb other people, repeated exposure to that kind of material has to take a toll.

Today each social media company has its own, incredibly detailed set of policies that are always changing and being refined, and the call center model has really won out. Moderators responsible for implementing those policies work in large-scale, outsourced, industrialized, call center–like offices, often in regions of the world where labor is much less expensive than in the U.S. The companies favor this arrangement because it sets up a convenient plausible deniability for some of the harms that can come from the work.

What does a content moderator’s typical day look like?

When you’re a frontline generalist working for a third-party contracting company, you typically come into a fairly sterile environment. You probably don’t have your own workspace, because the operation runs three shifts and someone else will be sitting in your spot when you clock out. You probably log into a proprietary system or interface developed by the company that needs the moderation, and you start to access queues of materials.

A flagged piece of content is served to you — there might be some contextual information, or there might be very little — and you make a judgment call about whether to delete the content based on your interpretation of the company’s policies. When you close the case, you get another one. This is your process throughout the day, working through the queue. Sometimes there’s specialization — someone may work specifically on hate speech or self-harm content, for example — but even the run-of-the-mill cases that generalists deal with can be pretty awful. Just think about the kinds of things that aren’t permitted on a site: Those are often the things that these folks see.

How have moderators you’ve talked to felt about these jobs?

One woman told me that she saw herself in the folkloric role of a “sin-eater.” The gist of this historical ritual is that a poor person spiritually takes on the sins of a deceased person, usually by eating a loaf of bread that has been passed over the corpse, in exchange for money. The woman felt that in doing content moderation she was, in essence, taking on the misbehavior, the cruelty, and the violence of others for pay; the metaphor really resonated with me. Other moderators made analogies to being a janitor or a trash collector — people who deal with the refuse, the detritus.

Is it just the content that makes the work hard, or do moderators face other challenges?

On top of dealing with the actual content, moderators are probably using an interface that hasn’t been updated recently, because engineering resources go toward developing new products and functionality for customers. Improving the experience for content moderators is never at the top of a company’s to-do list. For example, an old interface might respond slowly to moderators’ commands, preventing them from reviewing flagged content as efficiently as they’d like to. That means workers might have to look at an image for longer than is necessary or watch all of a disturbing video, which contributes to the stress of the job.

Another challenge is how moderators are evaluated, which is usually on two metrics. One is productivity: how many cases you get through. The second is accuracy: If a supervisor reviewed the cases that you resolved, would they agree with your decisions? So there’s pressure to get your cases done quickly but also to get them right.

This work also pays poorly, especially compared with other tech-sector jobs. I think a lot of people go in thinking, “I’ll be working in social media, and that’s the start of a career.” But it’s bottom-rung work that is often denigrated and dismissed, and it’s incredibly difficult to do over the long term.

What about automation? Could software and algorithmic models do this work?

As the need for content moderation has grown, companies have invested in machine learning algorithms, natural language processing, and other automation tools. But instead of shrinking the number of people involved, this shift has actually expanded it. Companies need workers to annotate data sets of images or other kinds of media that will be used to train the tools, and human beings still have to check whether the algorithms got the decisions right.

There has long been an aspirational idea that we’re going to turn repetitive, disturbing, or unpleasant labor over to machines. Instead, what’s happened with content moderation is that this work has just been pushed out of sight. Automation doesn’t lend itself to moderation beyond rote cases such as spam or content that has already been identified in a database, because the work is nuanced and requires linguistic and cultural competencies. For example, does a certain symbol have special meaning or is it just a symbol? Someone might see the Black Sun, a Nazi symbol, as just a geometric design unless they were familiar with its history and the context in which it is being deployed. Machines cannot match humans in this regard.

Are there any other alternatives to the current system?

If social media companies continue to employ the same business model they’ve been using, I don’t really see the need for the moderation role dissipating anytime soon. But many things could be done to improve the experience for workers: upgrading their technology, enhancing their user experience, giving them greater control over what they see, and letting them say “Hey, I need a break” without penalty. A lot of companies pay lip service to that last practice, but many moderators have told me they don’t seek support because it means letting their boss know they’re having difficulty with a central function of their work. When I ask workers what they would change, they always say “Pay us more” and “Value us more.” There’s a widespread lack of recognition of what these individuals do on behalf of the rest of us.

Recently we’ve seen workers in sectors besides tech unionize to demand improvements, and that could be an option for moderators if companies don’t make the job better. These people are providing a mission-critical service for their employers. Platforms make money by giving advertisers access to as many users as possible, which creates the need for content moderation as a form of brand protection, because no company wants its ads featured next to a beheading video.

A more controversial solution is to rethink the way social media platforms are designed. Should they really be empty vessels for people to fill up with invective, fights over politics, disinformation, propaganda, racism and misogyny, and other garbage? Is it actually a good thing for two billion people to be using the same communication network? There’s no preordained reason that the platforms look the way they do, other than the desire for scale and profit.

Are any platforms doing moderation well?

It’s hard for me not to romanticize the early internet. God knows I was outraged by many things and sometimes fought with people — all the behavior that we associate with online discourse today. But early internet communities did have governance models that people could opt in to or out of. Some sites were anarchic, with few or no rules and an “anything goes” approach; even that absence of rules was a governance choice that shaped the culture of a site. On other sites there were very strict rules, and if you didn’t follow them, you would be booted. In those cases the people doing moderation often had visible leadership roles and were curating a particular kind of cultural space with an authority granted to them by the users. There are still sites with similar models.

Consider Reddit. On each subreddit, or topic forum, you have community leader–moderators and clear rules about what is acceptable and how violators will be penalized. Each subreddit sets itself up differently, and users can decide whether to participate. Then, at a higher level, you have professional moderators who deal with things such as requests for information from law enforcement agencies or governments. This kind of model is in sharp contrast to the one in which content moderation is mostly invisible and unacknowledged.

What else have you learned from the moderators you’ve talked to?

Right now, a lot of the collective intelligence that moderators gain from being on the front lines of the internet is lost. There’s no effective feedback loop for them to tell their employers what they’re seeing, such as trends and warning signs. I’ve talked to moderators who were probably some of the first people outside the U.S. State Department to know about certain conflict zones, yet they had nowhere to take that information.

Business ideas could also be gathered from these workers about how to improve online rules or better serve the public. But because moderators are at a remove organizationally, and often geographically, and because communication lines with leadership are nonexistent or broken, they’re seen as automatons who enforce extant rules — not as valued employees with knowledge to contribute to the larger ecosystem. That strikes me as a real missed opportunity.