The white supremacist who attacked two mosques in Christchurch, New Zealand, livestreamed his assault on Facebook, and copies of the video spread at an alarming rate. At times, YouTube said, copies were uploaded to its servers at a rate of one per second. Facebook reported that it removed 1.5 million videos of the attack in the first 24 hours alone. These platforms and others nevertheless faced criticism for their inability to remove every copy of the video, which clearly violated their internal rules for acceptable content. But the technique they relied on in their takedown efforts can create a different problem: the mistaken removal of content such as news reporting and other protected forms of expression, without sufficient transparency and controls.
YouTube, Facebook, and other platforms scrambled to remove the Christchurch attack videos using a process known as “hashing,” in which a piece of content is fingerprinted with a digital hash so that algorithms can spot duplicates whenever they are re-uploaded. But hashing is essentially a pattern-recognition tool, and it can be fooled by small changes. Uploaders edited the video, sped it up, recolored it, or compressed it, and modified copies of the March 15 attack steadily made their way onto various platforms. To catch these slightly altered copies, platforms turned to other measures, such as audio-recognition technology, disabling users’ ability to search for videos by upload date, and even suspending human review of items flagged for removal. Even so, the video will always live in many corners of the Internet.
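To make the mechanics concrete, here is a toy sketch in Python, our illustration rather than any platform’s actual system, of the difference between an exact cryptographic hash and a simple perceptual “average hash.” The frame data, the small edit, and the distance threshold are all invented for the example; real systems use far more sophisticated fingerprints, but the principle is the same.

```python
import hashlib

def exact_hash(data: bytes) -> str:
    """Cryptographic hash: a single changed byte yields a completely different value."""
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is brighter than the mean.
    Small edits flip only a few bits, so near-duplicates stay close in Hamming distance."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

# An 8x8 grayscale "frame" and a copy with a small edit (e.g. a recolored corner).
frame = [[(x * y) % 256 for x in range(8)] for y in range(8)]
edited = [row[:] for row in frame]
edited[0][0] += 40

print(exact_hash(bytes([p for row in frame for p in row])))   # two unrelated-looking
print(exact_hash(bytes([p for row in edited for p in row])))  # hex strings

distance = hamming(average_hash(frame), average_hash(edited))
print(distance)                                   # only a bit or so differs
print("match" if distance <= 10 else "no match")  # illustrative threshold
```

This is why slightly altered copies slip past exact matching, and why even perceptual matching relies on a distance threshold that trades missed copies against false positives.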
At the same time, reliance on techniques like hashing creates problems for individuals and organizations seeking to document human rights abuses and other violations for research and reporting. For instance, the Syrian Archive, a civil society organization that seeks to preserve evidence of human rights abuses in Syria, reported that over 100,000 of its videos were removed by YouTube’s automated tools.
As 41 civil society organizations pointed out in a recent letter to members of the European Parliament, there are serious questions and concerns about using a privately administered hash database of “terrorist content” to identify and remove material. First announced in 2016 by Google, Microsoft, Twitter, and Facebook, the hash database is managed by the Global Internet Forum to Counter Terrorism (GIFCT), and at least 13 companies use it to identify “terrorist content.” According to the BBC, as many as 70 companies are considering doing the same.
What is in the Database?
According to the GIFCT, the database contains over 100,000 hashed videos and images flagged as “terrorist content” under platform Terms of Service or Community Guidelines. The problem is that nobody – other than the companies – knows what is in the database, whether the material in it actually meets even the platforms’ loose definitions of terrorist content, or whether the database is being applied in an even-handed way rather than targeting particular voices.
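For concreteness, here is a minimal, hypothetical sketch, not the GIFCT’s actual implementation, of what such a shared database amounts to: a set of opaque fingerprints that participating platforms can check uploads against. The fingerprint function and the sample entries are assumptions made for illustration.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for whatever hashing scheme the participating companies actually use.
    return hashlib.sha256(data).hexdigest()

# What gets shared between companies: hash values, not the flagged media itself.
shared_database = {
    fingerprint(b"<bytes of a flagged video>"),
    fingerprint(b"<bytes of a flagged image>"),
}

def check_upload(upload: bytes) -> bool:
    """True if the upload's fingerprint already appears in the shared database."""
    return fingerprint(upload) in shared_database

print(check_upload(b"<bytes of a flagged video>"))       # True  -> flagged for removal
print(check_upload(b"<bytes of an unrelated news clip>"))  # False -> passes this filter
# Nothing in `shared_database` reveals what content each hash represents.
```

Because the database stores only fingerprints, the hashes themselves reveal nothing about the underlying content, which is one reason outside researchers cannot audit it without cooperation from the companies.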
First, the public and researchers know nothing about the terrorism hash database’s error rates, or about how companies choose to act on content that matches a hash in the database. This is particularly troubling given the broad definitions of “terrorist content” used by the major platforms. Fionnuala Ní Aoláin, the United Nations Special Rapporteur on human rights and counterterrorism, has expressed concern about this lack of clarity, noting that Facebook’s approach to classifying organizations as terrorists is “at odds with international humanitarian law.”
As with any automated tool, mistakes will happen along the way. But there is virtually no public information about the number of appeals or legal processes initiated after content was removed. Without this insight, it is difficult to gauge the accuracy of the hash database, or whether the attendant tradeoffs to free expression are appropriately balanced.
Second, removals based on the hash database are unable to account for context. This means that automated removals may sacrifice important journalism, activism, and academic study in the name of expeditious takedown. The inability to differentiate between content used for terrorist recruiting and content shared to document news or abuses poses grave threats to the coverage of important world events.
Civil society groups have often called for human moderators as a check against these erroneous takedowns, and have opposed proposals, such as the recent draft European Union regulation, that rely on speedy, automated removals. At the same time, serious concerns – particularly among communities targeted by white nationalist violence – have led to pressure on platforms for precisely the kind of speedy, automated removals illustrated by YouTube’s decision to suspend human review in the immediate aftermath of the Christchurch massacre. While this may have been an appropriate response to Christchurch, it should not be extended to takedowns in general.
Automating Overbroad Enforcement
Finally, the hash database threatens to automate overbroad and discriminatory enforcement, most often affecting Muslims. While the Christchurch attack is the latest example of far-right violence, platform efforts to combat terrorism have until now almost exclusively focused on removing ISIS and al-Qaeda posts and accounts.
A 2018 Facebook report, for example, only addressed its ability to remove ISIS and al-Qaeda content, and a study of Twitter’s takedown efforts found the same singular focus. In practice, this means that Muslims and people from Muslim-majority countries are the most likely to face content removals, erroneous or not.
Far-right communities enjoy considerably more support from politicians and are unlikely to face the same onslaught of automated removals. The result is that human review and automated review suffer from similar biases, disproportionately harming users from marginalized ethnicities, religions, and language communities.
The Christchurch massacre demonstrates both the ongoing threat of white nationalism and the impossibility of fully removing terrorist content from the Internet. After years of public pressure, Facebook recently announced a ban on the explicit praise, support, and representation of white nationalism and white separatism. It also announced plans to use automated tools such as hashing to target hate groups globally.
But it is unclear how the company will handle erroneous takedowns, which are almost inevitable. Nor do we know how Facebook categorizes hate groups, whether users can appeal, or if the company will make its list of hate groups publicly available.
The public and policymakers are looking for information on the platform response to the Christchurch attack. But the same kinds of information – how much content was removed, error rates, and user appeals – should be available for the terrorism hash database as well. It should be incorporated into publicly available, quarterly transparency reports and supplemented by independent auditing of the database.
Frankly, these basic transparency measures are overdue.