The controversy over steps taken by Facebook and Twitter to limit the distribution of a New York Post story that made false claims about former Vice President Joe Biden’s son Hunter Biden is reverberating through political, media and technology discussions. Of the available options, the platforms were right to take action to stem the spread of a story apparently crafted from known falsehoods and highly dubious sourcing that may be linked to a foreign intelligence operation. But there may be consequences.
Republicans are furious. The Chairman of the Federal Communications Commission announced the following day that he would open an inquiry into Section 230, the part of the Communications Decency Act that allows social media platforms to operate with little liability for what users post. The Senate Judiciary Committee announced it would subpoena Twitter CEO Jack Dorsey to testify on October 23rd, less than two weeks before the election, about his platform’s actions, and Twitter scrambled to change its policies late Thursday night after receiving “significant feedback.” And Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Dorsey were already set to appear before the Senate Commerce Committee on October 28th, a hearing that will now likely be more fraught.
The issue of how these companies handle disinformation and political pressure has become white-hot. But for all the furor in the run-up to the election, this incident may be more important for what it tells us about how the platforms will handle post-election stories and claims, and why they are still not adequately prepared to confront disinformation.
What Happened
On the morning that the New York Post published what it called a “Smoking-Gun” about Hunter Biden, the situation felt all too familiar. Emails, purportedly acquired by a then-anonymous laptop repair technician and shared, through a suspicious chain of possession, with Steve Bannon and Rudy Giuliani, had found their way into the news media, albeit through a Murdoch tabloid. The Post’s story framed its so-called revelations by repeating, as if they were entirely true, the debunked claims about Biden and Burisma that the US intelligence community has identified as part of a Russian disinformation campaign to interfere with the election. No wonder some observers had flashbacks to October 2016, when the release of hacked DNC and Clinton campaign emails obsessed the media in the final days of the campaign. By mid-morning, journalists and analysts had raised fundamental problems with key assertions in the story as well as with the provenance of the source material.
But then something remarkable happened. A Facebook PR executive announced in a tweet that the platform was reducing the propagation of the story and inviting fact-checkers to scrutinize it. Later in the day Twitter took even harsher action, disabling URLs linking to the story and temporarily suspending accounts that shared its details, including those of White House Press Secretary Kayleigh McEnany and Politico journalist Jake Sherman.
More than twenty-four hours later, even as the platforms continue to limit subsequent New York Post stories on Hunter Biden, the story of the platforms’ response has, in many respects, eclipsed the original Post story itself. Right-wing voices alleged the platforms are meddling in the election. “This is a Big Tech information coup. This is digital civil war,” claimed Sohrab Ahmari, opinion editor at the New York Post. Republican officials are furious. President Donald Trump denounced the social media companies. Missouri Senator Josh Hawley sent a letter to the Federal Election Commission alleging Facebook and Twitter may have committed “egregious campaign-finance violations benefitting the Biden campaign.” GOP members of the House Oversight Committee released a letter calling on the committee to “hold an emergency hearing on Big Tech’s censorship and election interference.” House Judiciary Committee Republicans posted the New York Post article to a government website; Twitter promptly banned that URL as well, just as the Twitter account for the Trump campaign was suspended for sharing a video related to the Post story.
But the President’s partisans were not the only constituency with concerns. Observers who have long argued that social media companies should indeed take steps to control the spread of disinformation, but should do so with clear, transparent, and widely understood policies set up in advance, were troubled as well. While Facebook had announced in October that if the company has “signals that a piece of content is false, we temporarily reduce its distribution pending review by a third-party fact-checker,” the lack of transparency about these protocols and the poor communication around the decisions involving the New York Post story shocked some. “Yesterday, when Facebook publicly acknowledged that it also reduces the distribution of potential disinformation using other methods, the company surprised not only its users, but also the IFCN community,” wrote Cristina Tardáguila, Associate Director of the International Fact-Checking Network, in a column on Poynter. “What methodology do Facebook employees use in those situations? How do they identify what needs to be less distributed? What sources do they rely on to decide that something may be false? And… in those decisions, are the employees really nonpartisan?”
PolitiFact’s Editor-in-Chief Angie Holan responded in a tweet of her own to reports that Twitter had taken action because of fact-checkers’ concerns: “Who are these partners they speak of? Has Twitter partnered with fact-checkers without telling anyone? It would be news to me.” Later the company clarified that it had acted not primarily on the basis of fact-checking, but rather under its policy on material obtained through a hacking operation. But even Twitter CEO Jack Dorsey admitted his company’s handling of the situation was problematic. “Our communication around our actions on the [New York Post] article was not great. And blocking URL sharing via tweet or DM with zero context as to why we’re blocking: unacceptable.”
What’s Next
Social media platforms are already preparing for a potentially chaotic post-election period likely driven in part by legitimate delays in counting large volumes of mail-in ballots, and by a President who has stoked baseless conspiracy theories about voter fraud and about a left-wing “coup” for months. As NBC News reporter Brandy Zadrozny has noted, “a sizable online network built around the president is poised to amplify claims about a rigged election, adding reach and enthusiasm to otherwise evidence-free allegations.”
It’s not hard to imagine bad outcomes.
“There is, unfortunately, I think, a heightened risk of civil unrest in the period between voting and a result being called,” Facebook CEO Mark Zuckerberg told Axios in early September. “I think we need to be doing everything that we can to reduce the chances of violence or civil unrest in the wake of this election.” The handling of false claims and dubious information is a key concern, and platforms including Facebook, Twitter and TikTok have announced policies specifically addressing false claims of victory and attacks on the legitimacy of the outcome.
But the post-election period is a potential pressure cooker. Unless there is an immediately clear victor, an outcome many experts consider unlikely, tensions will no doubt be high once the clock starts ticking after polls close on November 3rd. Many experts are concerned about violence, which is why much more scrutiny is being given to platform policies (see Mozilla’s newly released analysis). The flow of information will drive the behavior of parties across the country looking for an advantage in the heat of the moment, and moves to limit the speech of politicians and influencers, or to restrict or ban the propagation of media allegations or user-generated content considered dubious, will be viewed in this context. So far, Twitter and Facebook have announced somewhat specific plans to contend with these issues, including standing up special war rooms to take fast action. TikTok, while not considered by most a significant source of political information, has announced its own plans to reduce the discoverability of suspect content. Notably, YouTube and Reddit have not yet formally announced specific plans for the post-election period.
But how well thought through is any of this? Twitter, for instance, has already announced significant changes to its policy in light of the response to its handling of the Post incident. It’s not publicly known exactly what process led to the rapid change or which groups or individuals were consulted. But in a late-night tweet thread, Twitter’s Legal, Policy and Trust & Safety lead Vijaya Gadde admitted the company had failed, through an apparent oversight, to update its 2018 protocols for handling hacked materials. The original policy triggered removal of content, but the company had more recently “added new product capabilities, such as labels to provide people with additional context.” The company had simply neglected to revisit the old policy in light of these newer, more expansive techniques for addressing malignant content. On some level this kind of response shows Twitter is still treating this like a software problem: a defined conditional statement to be executed uniformly. When content is of type A (hacked) and posted by actors of type X (the hackers themselves or their accomplices), do Y. But will that rigid approach work in the fog of the moment, three weeks and counting down to an election? These problems are much more amenable to the types of nuanced editorial judgments made in a newsroom.
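To make that metaphor concrete, here is a minimal, purely illustrative sketch; the field names and actions are hypothetical, not drawn from Twitter’s actual policy engine. It shows why a fixed conditional of this kind executes uniformly but leaves no room for judgments about provenance, newsworthiness, or timing.

```python
# Purely illustrative sketch of a rigid, rule-based moderation decision.
# Hypothetical fields and actions, not Twitter's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    contains_hacked_material: bool        # "type A" content
    author_is_hacker_or_accomplice: bool  # "type X" actor

def moderate(post: Post) -> str:
    """Apply the fixed conditional uniformly, with no editorial judgment."""
    if post.contains_hacked_material and post.author_is_hacker_or_accomplice:
        return "remove"              # the older, removal-oriented rule
    if post.contains_hacked_material:
        return "label_with_context"  # the newer, softer product capability
    return "allow"

# The rule runs the same way every time, but it cannot weigh provenance,
# newsworthiness, or timing, which are the judgments a newsroom editor makes.
print(moderate(Post(contains_hacked_material=True,
                    author_is_hacker_or_accomplice=False)))
```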
A lot is at stake. If the social platforms take the same kinds of actions after the election as they did in the case of the New York Post story, they run the risk of hardening the perception that they are putting their fingers on the scale. In the long run, this may feed the persecution fantasies of right-wing extremists, and it may affect how some perceive the legitimacy of the electoral result. There are three actions all social media platforms must pursue to avoid playing directly into these critics’ arguments and to avoid misperceptions:
1. Act swiftly, but very carefully. When social media platforms reduce the virality of a piece of content, remove it, or suspend an account, they must communicate precisely about the nature of the offense, referring to existing policies and the specifics of the online behavior that led to the discipline. A mere tweet from a PR executive is not enough and, on its own, can backfire. Substantiation is necessary from the start to blunt fears that decisions are partisan. The first announcement is likely to be the one that resonates the most. Make it count. Social media platforms should also consider who the appropriate messenger for these decisions is, and which internal organizational structures reinforce public trust.
2. War rooms need press conferences. Social media platforms need to recognize that they are playing a crucial role in our democracy and that they need to anticipate the media cycle. Issuing an explanatory tweet thread or a press release is frequently not enough, particularly in a fast-paced environment. It is good that Facebook and Twitter, for instance, have promised to host war rooms after the election, but just as generals and other high-ranking military officials often make themselves available to answer press questions in the early days of a conflict, tech executives should do the same. Standing press conferences should be held to share information in the post-election period: live, with real reporters asking real questions.
3. Precise protocols for working with partners need to be published in advance. How information will be shared with third-party fact-checkers, and how feedback from other entities such as news organizations will be incorporated into the decision process, need to be made public. Social media platforms should diagram the ideal workflow and make it available for scrutiny.
The furor over the New York Post story is in part a problem of the social platforms’ own making. It is clearly “not censorship to slow the sharing of misinformation on networked services,” as Joan Donovan, Research Director of Harvard’s Shorenstein Center, has said. But after a decade or more of taking a laissez-faire approach to content moderation, allowing lies, extremism and conspiracy theories to fester, many people are simply not habituated to the idea that social media companies may play such an explicit editorial role in deciding what content is allowed on their platforms. Many of those who believe that social media has enabled the hyper-polarization that dominates American politics today may welcome a heavier hand in the moderation of false claims, but now is an extraordinary moment for these firms to suddenly assert themselves. The way in which these companies choose to act on their newfound concerns over the veracity of the information flowing through their platforms will likely play a crucial role in determining whether that polarization declines after the election or is further inflamed. Time is short to set up systems that handle these matters appropriately.