Editor’s Note: This essay is co-published with Tech Policy Press.
Barrett is the deputy director of the NYU Stern Center for Business and Human Rights, where he writes about the effects of social media on democracy. He filed an affidavit at the request of plaintiffs’ counsel in the lawsuit against Facebook in Nairobi, Kenya, mentioned in this article. The affidavit summarized the findings of a 2020 report on content moderation that Barrett wrote for the Center.
Controversy over social media content moderation seems never to leave the headlines, often where it intersects with issues ranging from national security, disinformation, and political violence to civil and human rights. Consider:
- On Tuesday, The Washington Post published a leaked copy of previously unreleased findings by investigators for the House Select Committee on the January 6th Attack on the Capitol, in which committee staff found evidence that major social media platforms bent their content moderation rules to avoid penalizing conservatives in the run-up to the 2021 insurrection.
- Contending that social media companies are biased against conservatives, Republican lawmakers in Texas and Florida have enacted statutes forbidding major platforms from removing content based on viewpoint or politics. The laws have been challenged on First Amendment grounds, and the Supreme Court is expected to resolve conflicting rulings from lower courts.
- As part of his shakeup at Twitter, Elon Musk reportedly slashed the contract workforce that determines what material should be removed from the micro-blogging platform. Musk skeptics have noted that since his takeover last year, the volume of misinformation and vitriol on Twitter has surged.
- Meta, having earlier agreed to pay $52 million to settle a class action lawsuit filed by former Facebook moderators in the United States claiming their work led to psychological harm, now faces a similar suit in Kenya brought on behalf of African moderators working for an outside Facebook contractor. YouTube also has settled a moderator suit, and TikTok faces multiple court actions.
- This week, Facebook’s African contractor, a company called Sama, which is a co-defendant in the Kenyan case, announced that it is quitting the content moderation business, blaming the “current economic climate.” After negative press coverage detailed the trauma suffered by its moderators, Cognizant, another business services company, announced in 2019 that it would back out of its content moderation contracts with Facebook because the work wasn’t “in line with our strategic vision for the company.”
In varying ways, all of these developments draw attention to the problematic role content moderation plays as part of the social media industry’s pattern of outsourcing key functions. The headlines also point, or ought to point, to one way to begin improving the flawed institution of content moderation: having social media companies move the activity in-house, a recommendation I made in a report published in June 2020 for the NYU Stern Center for Business and Human Rights.
I. A Brief History of Outsourced Content Moderation
Once, content moderation was done by small groups of social media employees, who killed content that just didn’t feel right. Dave Willner, an early moderator at Facebook and now head of trust and safety at OpenAI, told me that in 2010, he and about a dozen others followed a one-page checklist of forbidden material. “We were supposed to delete things like Hitler and naked people,” he said. They took down posts “that made you feel bad in your stomach.”
At that time, Facebook already had about 100 million users, mostly in the U.S., and no one questioned whether a dozen non-specialists making gut calls was sufficient to handle content moderation. Early in their corporate histories, YouTube and Twitter gave their moderators similarly bare-bones instructions for what to remove.
As the number of users and daily posts grew explosively at Facebook, YouTube and Twitter, the platforms realized their tiny content moderation staffs couldn’t handle the increased volume of spam, pornography, hate speech, and violent material. They needed more moderators. “There wasn’t much debate about what to do, because it seemed obvious: We needed to move this to outsourcing,” Willner told me. “It was strictly a business ops decision,” based on cost concerns and the greater flexibility outsourcing offered.
There was another reason outsourcing seemed so natural: the entire social media business model is based on outsourcing in various forms. Platforms already were relying heavily on users and civil society groups, acting without compensation, to report potentially offensive or dangerous content. Even more fundamentally, the social media industry depends on unpaid users to generate the vast bulk of material — from puppy pictures to political punditry — against which the platforms sell the advertising that comprises the lion’s share of their revenue. Content production was almost entirely outsourced; why not content moderation, as well?
Corporate culture played a role in the marginalization of content moderation. While platforms could not exist without it — they would rapidly devolve into cesspools of spam, porn, and hatred — moderation is rarely a source of good news. Tech journalists and the public focus on content moderation when it fails or sparks contention, not on the countless occasions when it works properly. “No one says, ‘Let’s write a lengthy story on all of the things that didn’t happen on Twitter because of successful moderation,’” Del Harvey, the platform’s former vice president for trust and safety, told me.
Tom Phillips, a former executive at Google who left that company in 2009, made a related point: moderation never became part of Silicon Valley’s vaunted engineering-and-marketing culture, he said. “There’s no place in that culture for content moderation. It’s just too nitty-gritty.”
As the major platforms continued to add users, misogynistic, racist, and antisemitic content proliferated. Social media companies gradually contracted for additional outside labor: reviewers staring at screens, making high-volume, split-second decisions on whether an offensive post or video should stay or go.
II. Explosive Growth of Content Moderation
In the mid-2010s, the companies began in earnest to use automated systems driven by artificial intelligence to flag and remove content that violated ever-expanding community standards. AI proved highly effective at identifying certain forbidden categories, such as terrorist incitement, but was prone to errors with other types: whether a post mentioning Hitler reflects neo-Nazi cheerleading or satirizes neo-Nazis, for example. The close calls, and there were a lot of them, still had to be referred to human reviewers.
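To make that division of labor concrete, here is a minimal, purely illustrative sketch of a confidence-threshold triage in which clear-cut automated classifications are enforced directly and close calls are routed to human reviewers. The thresholds, labels, and the stub classify() function are hypothetical assumptions for illustration, not a description of any platform’s actual system.

```python
# Illustrative sketch only: a hypothetical confidence-threshold triage that
# enforces clear-cut automated classifications and routes close calls to
# human moderators. Thresholds and labels are invented for this example.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95  # assumed: model is very sure the post violates policy
AUTO_ALLOW_THRESHOLD = 0.05   # assumed: model is very sure the post is benign


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> float:
    """Hypothetical model call returning the probability that a post violates policy."""
    # A real system would call a trained classifier here; this stub returns an
    # ambiguous score so the example routes to human review.
    return 0.5


def triage(post: Post) -> str:
    """Route a post to automated enforcement or to a human reviewer."""
    score = classify(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"   # clear-cut violation, handled by automation
    if score <= AUTO_ALLOW_THRESHOLD:
        return "auto_allow"    # clearly benign, left up
    return "human_review"      # the close calls that still need people


if __name__ == "__main__":
    example = Post(post_id="123", text="A post mentioning Hitler: satire or incitement?")
    print(triage(example))  # prints "human_review" in this toy setup
```

Even in this toy version, the middle band, where a model cannot tell satire from incitement, is precisely where the human labor, and the human cost, concentrates.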
The 2016 U.S. election, which saw Russian operatives pretending to be Americans spread divisive propaganda via Facebook, Instagram, and YouTube, spurred a surge in outsourced hiring, much of it in the Philippines, India, Ireland, and other sites outside the U.S. In a February 2017 public essay, Mark Zuckerberg conceded that moderators of Facebook content “were misclassifying hate speech in political debates in both directions — taking down accounts and content that should be left up and leaving up content that was hateful and should be taken down.” He tried to deflect blame. “We review content once it is reported to us,” he wrote. “There have been terribly tragic events — like suicides, some live-streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day that our team must be alerted to before we can help out.”
Beginning in 2017, Facebook and its rivals added outsourced moderators at a brisk clip. This was a period when profits and stock prices were soaring. By 2020, 15,000 people were reviewing Facebook content, the overwhelming majority of them employed by outside contractors. About 10,000 people scrutinized YouTube and other Google properties. Twitter, much smaller and less financially successful, had 1,500.
Still, moderation scandals blew up on a regular basis. Some of the worst occurred outside of English-speaking countries, where platforms had sought user growth but hadn’t bothered to add reviewers who spoke local languages, let alone understood political and cultural dynamics far removed from Silicon Valley. Facebook’s role in fueling the ethnic cleansing of Myanmar’s Rohingya Muslim minority received heavy media coverage. For years, no more than a handful of Burmese-speaking moderators working on an outsourced basis from outside the country were monitoring a rising wave of anti-Rohingya vituperation on the platform. Other less momentous, but still horrifying, instances of Facebook being used to heighten ethnic and religious hatred and violence occurred in India, Sri Lanka, Ethiopia, and other countries.
All the while, evidence was accumulating — much of it gathered by academics such as UCLA professor Sarah Roberts and reported in compelling articles by Casey Newton, then a reporter at The Verge — that doing outsourced moderation work was leaving many employees with post-traumatic stress disorder or other psychological injuries. Paid modest salaries and afforded few benefits, many reviewers worked in chaotic offices run by incompetent managers who imposed unachievable numerical goals.
For my report in 2020, I interviewed a former moderator who worked for an outsourcing company in Dublin and described the declining quality of his work as productivity goals rose relentlessly. The pressure generally led moderators to leave up content that violated platform rules, he said. Managers lacked knowledge of the region he covered, the former Soviet republics: “They really didn’t know more than we did; sometimes, less.” Specifically, supervisors “didn’t know the slurs, the political tensions, the dangerous organizations, or the terrorist organizations.”
III. It’s Time to Move Content Moderation In-House
One might assume that major social media platforms would rethink their outsourced moderation strategies. But there is no sign this is happening. Reacting to the announcement that its African outsourcer, Sama, is getting out of the business, Meta reportedly shifted the work to a similar company. At Twitter, Musk has declared that he wants to see less content moderation and has drastically reduced the company’s overall headcount, decimating its trust and safety department. Meta and Google have also pared their payrolls, citing reduced advertising revenue in a slower post-pandemic economy that could slide into a recession.
Content moderation presents truly difficult challenges. Before the 2010s, corporations had never confronted a task quite like it. Some of the controversy surrounding moderation, including the enactment of dubious state laws meant to tie the hands of moderators, is beyond the companies’ control. Other points of contention, such as the platforms’ hesitant response to the campaign by former President Donald Trump and his allies to undermine the 2020 election, illustrate how social media companies find themselves mired in the political polarization they themselves have helped exacerbate. But the inadequacy of so much moderation stems directly from the demands of the business model chosen by the major social media companies: unremitting user growth and engagement meant to please both advertisers and investors. That enormous scale has produced vastly more content to moderate and more permutations of meaning, context, and nuance, all of which invite error. This is the industry Zuckerberg and his counterparts built, and they are primarily responsible for policing it.
What these companies should be doing is bringing human content moderation in-house, where reviewers could receive counseling and medical benefits, compensation, and — crucially — expert training and supervision commensurate with that of other employees performing important corporate roles. With competent oversight, moderators could do their inherently stressful job in a calmer, more orderly environment, with the breaks, assignment shifts, and advice they deserve. And their morale might improve if they had a real opportunity for promotion within their social media employer — an option currently available to relatively few outsourced reviewers.
An in-housing strategy would be a hugely expensive proposition at a time when social media companies are retrenching. But making the financial numbers work is an obligation for those in senior management who have profited so handsomely for the past 15-plus years. It is past time to stop farming out content moderation.