This article is co-published with Protego Press.
On Wednesday, Facebook released new information about how the video of the recent terrorist attack in Christchurch, New Zealand, circulated online. A Facebook blog post says:
- “The video was viewed fewer than 200 times during the live broadcast.
- No users reported the video during the live broadcast.
- Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook.
- Before we were alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site.
- The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.
- In the first 24 hours, we removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.”
There is much to digest here, and much more will surely come from technology companies, governments, and researchers in the coming months. But Facebook’s new disclosures prompt four initial thoughts.
First and most simply, Facebook has made an effort to say more about its role in the spread of videos of the horrific violence in Christchurch, and to do so relatively quickly. Having been criticized for its slowness and reluctance to share information on the spread of disinformation during the 2016 presidential campaign cycle, the company is clearly taking a more proactive approach here. That’s to its credit, even if there are still more questions some will justifiably want answered, such as the full extent of user engagement with the uploaded videos and the average length of time those videos remained on the platform before being removed. And it’s worth keeping in mind the possibility that law enforcement (in the United States or elsewhere) might be asking the company not to share certain information while the attack is being investigated.
Second, Facebook’s disclosures demonstrate powerfully the cross-platform complexity of how violence spreads online. Facebook’s report reaffirms earlier indications that approximately 200 people watched the attack in real time and none reported it, and suggests that the audience of 200 wasn’t assembled using Facebook itself, given the absence of any discussion of that initial step. That, in turn, prompts the question of how that audience was assembled, and suggests the possibility that it came together through a closed chat group on an end-to-end encrypted application like Signal, Telegram, or WhatsApp (itself owned by Facebook). That’s one way in which the spread of this vile material crossed platforms. Another way emerged once the video was no longer live but recorded: Facebook’s disclosures indicate that, even after Facebook essentially cleaned its platform of the recorded video, it was shared on other platforms (like YouTube), then reappeared on Facebook as users uploaded it there once again, in modified forms. All of this drives home the key point that addressing today’s content policy challenges can’t be viewed as a problem that any one company can solve on its own.
Third, Facebook’s report updates the company’s earlier estimates of how many distinct versions of the video it has identified and shared, as digital fingerprints (hashes), via the database jointly maintained by a number of tech companies through the Global Internet Forum to Counter Terrorism. It’s remarkable that nearly 1,000 variations of the video have been created so quickly. That exceeds the rate at which jihadist groups like ISIS and their supporters tend to create versions of their video materials, perhaps because those groups seek to maintain stricter “brand control.” And it shows how diffuse this challenge has become: Presumably these variations of the video are being made, rapidly, by individuals distributed globally, wholly unconnected to the Christchurch attacker, and yet intent on spreading video footage of his violence. Accentuating that point even further is the astonishing pace at which copies of the video were uploaded, leading some experts to speculate that the video’s rapid amplification was, in part, the product of a more deliberate, coordinated campaign. If that proves true, it would show a more tightly linked global community supporting white nationalist terrorism than has been seen before, adding to the increasingly international nature of what has long been called “domestic terrorism.”
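A shared database of this kind can catch modified copies, not just exact duplicates, because matching systems of this sort typically rely on perceptual hashing: the fingerprint reflects what the content looks like rather than the file’s exact bytes, so re-encoded or lightly edited variants produce similar fingerprints. The systems Facebook and its industry partners actually use are proprietary and operate on video and audio, not single frames; what follows is only a minimal sketch of the underlying idea, using a simple difference hash (dHash) on one frame. The Pillow dependency, the `shared_hash_db` set, the `matches_known_content` helper, and the threshold are all illustrative assumptions, not descriptions of any real system.

```python
# Illustrative sketch only: shows the general idea of perceptual hashing, in which
# visually similar frames yield similar fingerprints even after minor edits.
from PIL import Image  # pip install Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a difference hash (dHash) for one video frame saved as an image."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Hypothetical stand-in for a shared database of known fingerprints.
shared_hash_db = {0b0110_1001_1100_0011_0110_1001_1100_0011}


def matches_known_content(frame_path: str, threshold: int = 10) -> bool:
    """Flag an upload if any stored fingerprint is within `threshold` bits of the frame's."""
    h = dhash(frame_path)
    return any(hamming_distance(h, known) <= threshold for known in shared_hash_db)
```

Even this toy version shows why variants are hard to stamp out: each crop, filter, or re-recording nudges the fingerprint, and the matching threshold becomes a judgment call between missing variants and flagging unrelated footage.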
Fourth and most fundamentally, much of the commentary over the past week has lumped together the real-time broadcast of the violence in Christchurch with the continued availability and indeed spread of recordings of that violence after the attack was over. Those are, of course, related challenges insofar as they involve the same basic content, but they’re distinguishable in key ways, and conflating them muddies rather than clarifies the problem and its potential solutions. The ability to broadcast in real time has become entrenched not only on Facebook but also on other leading communications platforms like Twitter, and figuring out how, if at all, to moderate that real-time content is a particular (and particularly difficult) challenge. One possibility would be for companies like Facebook to introduce a delay in broadcasting, much as network television does. But that approach has downsides, too, such as inviting authoritarian regimes to demand that companies use the delay to block dissidents before their voices can be heard. Plus, it’s not clear how effectively violence can be caught through automated review in just seconds or even minutes of delayed broadcast. In contrast, the challenges posed by content that has already been recorded and is now being maintained, recycled, and even repackaged are, in key respects, different. They include questions about what content should be removed; what role computer algorithms versus human reviewers should play in making those removal decisions; what content should be blocked, or at least delayed, before being uploaded in the first place; and what other content should be offered via algorithm to a viewer as something to watch next.
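To make concrete why a broadcast delay is both appealing and limited, here is a minimal sketch, assuming a hypothetical relay that holds each live segment for a fixed window and runs an automated check before release. The `looks_violent` classifier is a placeholder, the 30-second window is an arbitrary assumption, and nothing here reflects how Facebook or any other platform actually handles live video.

```python
# Illustrative sketch only: a delayed relay for a live stream, not a real moderation system.
import time
from collections import deque

DELAY_SECONDS = 30  # assumed delay, loosely analogous to a television broadcast delay


def looks_violent(segment) -> bool:
    """Placeholder for an automated classifier; building one that decides reliably
    within seconds is the hard, unsolved part."""
    return False


def relay_with_delay(incoming_segments, send_to_viewers):
    """Hold each live segment for DELAY_SECONDS and cut the stream if a check trips.

    `incoming_segments` is any iterable of short video segments arriving in real time;
    `send_to_viewers` is whatever function actually delivers a segment to the audience.
    """
    buffer = deque()  # (arrival_time, segment) pairs awaiting release
    for segment in incoming_segments:
        if looks_violent(segment):
            return  # stop relaying; in practice this would escalate to human review
        buffer.append((time.monotonic(), segment))
        # Release only segments that have aged past the delay window.
        # (A real relay would also drain the buffer on a timer, not only on arrival.)
        while buffer and time.monotonic() - buffer[0][0] >= DELAY_SECONDS:
            send_to_viewers(buffer.popleft()[1])
```

Even as a toy, the sketch surfaces the trade-offs described above: the same delay that buys an automated check time is a lever a government could demand be used against dissidents, and a classifier forced to decide within the window will inevitably miss some violence and wrongly cut some legitimate broadcasts.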
These are hard, important, and urgent questions. Facebook’s disclosures on Wednesday provide more fodder for discussing and debating them, and that is, itself, valuable.