From Christchurch to Buffalo, online political extremism often leads to real-world violence. Unfortunately, governments’ efforts to address this danger tend to replicate the Greek myth of the Hydra: when the monster’s head is cut off, another simply grows in its place. The latest example occurred on April 23, when European Union lawmakers announced a historic agreement to push “Big Tech” companies like Google, Facebook, and Twitter to take greater responsibility for content on their platforms. However, an approach built on simply taking down inflammatory material misunderstands the decentralized and highly adaptive nature of online extremist networks.
The Digital Services Act (DSA) aims, in part, to curb the dissemination of illegal and harmful content, including misinformation, hate speech, and violent extremist material. It seeks to hold online platforms more accountable for their growing impact on everyday life, from their use as a source of news and information to their role in commerce and business. The DSA builds on a growing suite of EU rules regulating online platforms and tech companies, including the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA), and the Data Governance Act (DGA). The GDPR was a landmark piece of legislation that established regulatory power over online platforms by codifying user protections over personal data. The DMA and DGA complement the DSA by working to increase online transparency and fair business practices, including efforts to expand public-private data-sharing, reduce gatekeeping and entry barriers for new businesses, and preserve online competition by ensuring that high-traffic “core platform services” do not hold an unfair market share. The DSA differs in its focus on what content platforms push out rather than on how platforms collect and use consumer data.
To achieve its content-related goals, the DSA codifies a series of new regulatory rules that require online platforms to be more transparent about how their algorithms work and imposes greater liability than Article 17 of the 2019 Copyright Directive on platforms that fail to take down flagged content within a short timeframe. This entails the removal of videos, pictures, text, or other posts that may pose a “societal risk” through their reference to sensitive and illegal material such as hate speech, terrorist content, child sexual abuse material, or coordinated misinformation campaigns. While these issues have been regulated, in part, under the EU Charter of Fundamental Rights and the Treaty on the Functioning of the European Union, it was often unclear whether online spaces, especially those run by non-European companies, fell under EU jurisdiction. As EU antitrust chief Margrethe Vestager put it, the DSA tries to fix this by ensuring that “what is illegal offline is illegal online.”
The DSA improves on previous regulatory efforts in several ways. It places responsibility on social media companies to address many of the problems they helped create, echoing the 2019 Christchurch Call to Action. It fines technology companies for failing to expeditiously remove harmful material, including extremist videos, images, and statements that could incite violence. It improves transparency standards around content moderation decisions and “provide[s] more insight into how online risks evolve.” The DSA also does not discriminate between different types of hateful content, giving social media companies broad authority to address a wide range of emerging harms such as disinformation, deep fakes, and coordinated inauthentic behavior.
Despite these improvements, the DSA is likely to have little effect on reducing online extremism. My co-authors and I recently published a report for the Stanford University Mapping Militants Project and the National Counterterrorism Innovation, Technology, and Education Center (NCITE) assessing what progress has been made and what gaps remain. Based on our findings, the DSA will likely fall short on three major challenges in countering online extremism: (1) resource constraints; (2) the role of social media influencers; and (3) its focus on banning extremist content while ignoring the tools and methods individuals use to promote it.
First, the DSA broadens the umbrella of responsibility beyond gatekeeper companies to include smaller, emerging firms that lack the resources to combat online extremism. While Alphabet, Meta, and Amazon may have the technical resources and moderators to respond quickly to hateful content, start-ups like Zello and Clubhouse will likely find these regulations onerous and difficult to implement.
These requirements can chill new business innovation, unraveling the DSA’s own attempts to encourage competition. Under-resourced moderation teams may catch only explicit buzzwords and banned phrases, allowing a glut of extremist content couched in dog whistles and coded language to slip by undetected.
Second, the DSA’s content moderation approach overlooks the role of critical influencers embedded in social media networks. Changes to algorithmic ranking can curb the promotion of hateful content, but they may not stop influencers from producing hateful ideas.
Social media influencers use long-form videos, including hour-long conversations on YouTube and endless social media threads, to create a sense of relatability and authenticity before introducing their more extreme ideas. Because they command large and attentive audiences, influencers connect disparate parts of online extremist networks and subtly push viewers toward radicalization pipelines. Dan Bongino, for example, uses his Secret Service career to bolster his credibility and expertise on the political inner workings of Washington, DC, then draws on that reputation to introduce more fringe beliefs about the Biden administration, COVID policies, and crime rates. Dave Rubin uses his previous media career as a host on the progressive Young Turks show to present himself as a more moderate political voice even as he interviews well-known extremist personalities like Milo Yiannopoulos and Stefan Molyneux, giving them a platform to promote more hateful beliefs.
As influencers push their audiences toward “alt-tech” platforms, content moderation regimes fall apart. On these platforms, permissive terms of service and few content moderation rules give extremists room to communicate and organize. How to navigate the space between these influencers’ freedom of expression and their ability to promote extremist content remains an underdeveloped question in need of further inquiry.
Finally, the DSA’s narrow focus on banning extremist content, rather than understanding the tactics and procedures extremists use to promote it, is misplaced. Extremists adapt quickly to new content moderation regimes like the DSA and develop effective evasion techniques. Although Facebook removed the livestreamed video of the 2019 Christchurch shooting shortly after it occurred, many supporters posted copies of the recording to YouTube, LiveLeak, BitChute, and various archival sites. Content moderators struggled to take down the copies because users carefully manipulated the video, adding watermarks, changing the video quality, and re-recording the footage, to evade easy detection.
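The mechanics here are simple: an exact file hash changes completely after a watermark or re-encode, while a perceptual hash of the underlying frames changes only slightly. The sketch below illustrates that distinction using the third-party Pillow and ImageHash Python libraries; the file paths, the frame-extraction step it presumes, and the distance threshold are illustrative assumptions, not a description of any platform’s actual matching pipeline.

```python
import hashlib

import imagehash           # third-party: pip install ImageHash
from PIL import Image      # third-party: pip install Pillow


def exact_file_hash(path: str) -> str:
    """Cryptographic hash of the raw file: any watermark or re-encode changes it entirely."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def frame_perceptual_hash(path: str) -> imagehash.ImageHash:
    """Perceptual hash of an extracted video frame: tolerant of small visual edits."""
    return imagehash.phash(Image.open(path))


def likely_same_footage(frame_a: str, frame_b: str, max_distance: int = 10) -> bool:
    """Treat two frames as matching when their perceptual hashes are within a small Hamming distance."""
    return (frame_perceptual_hash(frame_a) - frame_perceptual_hash(frame_b)) <= max_distance


# Hypothetical frames extracted from an original upload and a watermarked re-upload:
# exact_file_hash() will differ on every copy, but likely_same_footage() can still flag it.
# print(likely_same_footage("original_frame.png", "reupload_frame.png"))
```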
Extremists also may hijack algorithmic ranking by framing content titles and descriptions so their posts appear near the top of search results for non-controversial issues. Far-right TikTok videos, for example, have been tagged with #conservative or #trump to appear alongside more conventional posts. Jihadist videos on TikTok embrace a similar strategy to recruit new users, including the use of #cat to push their videos into more mainstream feeds.
Other evasion techniques include restricting comments to avoid potential reporting, communicating in private and hidden groups, creating multiple accounts, and using slightly misspelled hashtags to avoid algorithmic detection. An August 2021 investigation into extremist speech on TikTok found that users quickly adapted to the banning of hashtags like #BrentonTarrant by switching to the slightly misspelled #BrentonTarrent instead. Similarly, while the Islamophobic hashtag #RemoveKebab was blocked, #RemoveKebob was not.
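This cat-and-mouse dynamic is easy to see in miniature. The sketch below is a simplified illustration, not any platform’s actual filter: an exact-match blocklist misses the misspelled variants described above, while even a crude similarity check catches them. The blocklist contents and the similarity threshold are assumptions for illustration only.

```python
from difflib import SequenceMatcher

# Illustrative blocklist; real moderation systems use far larger, curated lists.
BANNED_HASHTAGS = {"#brentontarrant", "#removekebab"}


def similarity(a: str, b: str) -> float:
    """Rough 0-1 similarity score between two hashtags."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flagged_by_exact_ban(tag: str) -> bool:
    """Exact matching: defeated by a single changed character."""
    return tag.lower() in BANNED_HASHTAGS


def flagged_by_fuzzy_ban(tag: str, threshold: float = 0.85) -> bool:
    """Approximate matching: also catches near-miss spellings of banned hashtags."""
    return any(similarity(tag, banned) >= threshold for banned in BANNED_HASHTAGS)


for tag in ("#BrentonTarrent", "#RemoveKebob", "#caturday"):
    print(tag, flagged_by_exact_ban(tag), flagged_by_fuzzy_ban(tag))
# The misspelled variants evade the exact ban but are flagged by the fuzzy check.
```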
Even as former President Barack Obama called on U.S. policymakers to consider reforms similar to the DSA, it remains unclear what impact such measures might have. The DSA may not take effect until 2024, giving extremists plenty of time to develop additional evasion techniques.
The threat of online extremism constantly adapts to the latest regulatory intervention. Extremist rhetoric can incite violence with relatively little warning, as the shootings in Poway, El Paso, and Buffalo demonstrate. Indirect and loose connections between extremists make it hard for law enforcement to identify broader conspiracies and foil plots like the Jan. 6 attack on the Capitol. Decentralized online networks allow for high levels of plausible deniability, stealth, and secrecy. The result is an intelligence nightmare that makes online extremist structures highly resilient to content moderation policies.
As EU lawmakers look to finalize the DSA, they must consider these challenges and develop mitigation strategies. Our research highlights three promising avenues for strengthening legislation aimed at combatting online extremism: redirect methods, cross-platform coordination, and wargaming exercises.
1. Redirect Methods: One approach to combatting online extremism attempts to steer potential recruits away through redirect methods. This strategy aims to prevent radicalization by intervening early to limit access to far-right messaging and marketing materials. A 2019 pilot program by Moonshot and Alphabet helped redirect individuals searching for extremist content on Google: when a user searched for extremist phrases or keywords, the Redirect Model served ads with non-confrontational but informative content. The pilot was successful enough that in February 2022 Meta announced it would launch a similar project on Facebook in Pakistan and the UK.
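At its core, the redirect approach is a mapping from risky search queries to counter-messaging. The sketch below captures only that basic logic; the watched phrases, the placeholder example.org URLs, and the substring-matching rule are illustrative assumptions, not details of the Moonshot/Alphabet pilot or Meta’s implementation.

```python
from typing import Optional

# Illustrative watchlist mapping risky search phrases to counter-messaging resources.
# The phrases and placeholder URLs are hypothetical examples only.
REDIRECT_ADS = {
    "great replacement": "https://example.org/fact-check",
    "join the movement": "https://example.org/leaving-hate",
}


def redirect_ad_for(query: str) -> Optional[str]:
    """Return a counter-messaging ad URL if the query contains a watched phrase."""
    normalized = query.lower()
    for phrase, ad_url in REDIRECT_ADS.items():
        if phrase in normalized:
            return ad_url
    return None


print(redirect_ad_for("videos about the great replacement"))  # counter-messaging URL
print(redirect_ad_for("weather tomorrow"))                     # None: no intervention
```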
2. Cross-Platform Research and Coordination: The DSA encourages improved data-sharing and research into how online risks evolve, but it does not specify any particular direction for that research. Much current work focuses on developing content directories that archive the hashtags, images, and other media extremists use. Research should also compare platforms to improve our understanding of extremist tactics, techniques, and procedures (TTPs). TTPs describe the behaviors, methods, and tools extremists use to advance their interests and spread illegal content, rather than the content itself.
The increasingly fragmented “splinter-net” and the proliferation of “alt-tech” platforms mean that extremists tend to spread out across the internet; they do not operate solely on one site like Twitter, Facebook, or Reddit. This matters because different sites are conducive to different TTPs. Video streaming sites like DLive can facilitate fundraising, while Telegram and Facebook groups can enable attack planning and logistical preparations. Cataloguing which sites are more conducive to which purposes (e.g., recruitment versus intelligence-gathering) can help stakeholders more cost-effectively prioritize resources toward the most relevant online platforms, and understanding how extremists manipulate different platforms for particular ends can reveal abuse vectors and inform countermeasures to mitigate those vulnerabilities.
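One way to operationalize such a catalogue is as a simple mapping from platforms to the TTP categories they most readily enable, which analysts could query when prioritizing monitoring resources. The sketch below is only a toy illustration: the platform-to-TTP assignments loosely echo the examples above and are not research findings.

```python
# Toy catalogue: which TTP categories each platform tends to enable.
# Assignments loosely follow the examples above (DLive for fundraising,
# Telegram and Facebook groups for planning); they are illustrative, not findings.
PLATFORM_TTPS = {
    "DLive": {"fundraising"},
    "Telegram": {"attack_planning", "recruitment"},
    "Facebook Groups": {"attack_planning", "logistics"},
    "TikTok": {"recruitment", "propaganda"},
}


def platforms_for(ttp: str) -> list:
    """List the platforms most associated with a given TTP, to help prioritize monitoring."""
    return [platform for platform, ttps in PLATFORM_TTPS.items() if ttp in ttps]


print(platforms_for("recruitment"))  # ['Telegram', 'TikTok']
```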
3. Red Team Analyses and Wargaming: The DSA, along with other recent regulations, instructs platforms to remove illegal content within hours. To make this effective, online platform companies, researchers, and public stakeholders should engage in collaborative red team analyses or wargame exercises to react more quickly during crisis situations. Military wargaming helps states anticipate and prepare for adversarial behavior, and this method could be extended to help forecast and develop playbooks around certain high-probability online extremist scenarios like a targeted shooting, public rally, armed confrontation, or electoral event.
When government and industry effectively exploit, weaken, or even sever the connections linking extremist communities, they can make concerted progress. To achieve this, policymakers should focus less on extremist content itself and more on the processes and platforms that produce it.