(Editor’s Note: This article is part of the Just Security symposium “Thinking Beyond Risks: Tech and Atrocity Prevention,” organized with the Programme on International Peace and Security (IPS) at the Oxford Institute for Ethics, Law and Armed Conflict. Readers can find here an introduction and other articles in the series as they are published.)
In the dark days of the Taliban’s takeover of Afghanistan in August 2021, civilians across the country began going into hiding. At a speed that shocked Afghans and the international community, the militant group swept into Kabul and the government collapsed as its president fled. The Taliban swiftly began reimposing a draconian interpretation of Islamic law on a country that had inched steadily towards progress.
Amidst the ensuing withdrawal of the United States and its allies from Afghanistan, warning signs for mass atrocities were flashing red: a confidential United Nations report warned that the Taliban was going door-to-door hunting anyone affiliated with U.S. and NATO forces, threatening to kill or arrest family members if their targets could not be found. Reports swirled of kill lists being drawn up in Kabul, dead bodies left alongside the roads of Kandahar, and vulnerable groups going underground, among them Afghanistan’s female judges, LGBTQ individuals, and civil society activists. Policymakers in world capitals scrambled to coordinate evacuations and rescue operations, predominantly for foreign nationals, as well as a limited number of Afghans who had supported U.S. and NATO forces.
Observing these signs, atrocity prevention experts raised the alarm about increasing threats facing Afghan women and girls, as well as ethnic and religious minorities such as the country’s Hazara community. U.N. experts issued a joint statement rebuking States for their “silence and by-standing,” urging global action to prevent “civilian slaughter.” Among other measures, they called for sanctions, unimpeded humanitarian aid, and an emergency session of the U.N. Human Rights Council to deploy a fact-finding mission.
But it was becoming increasingly apparent that urgent measures were also needed to address emergent risks in the information environment, which were intersecting with physical security risks to pose grave threats to Afghans’ safety. Khalida Popal, the former captain of Afghanistan’s women’s soccer team, was among the first to urge Afghans to prioritize their digital security. Delete your social media, she urged players: “Today I’m calling them and telling them, take down their names, remove their identities, take down their photos for their safety… And that is painful for me, for someone as an activist who stood up and did everything possible to achieve and earn that identity.”
What Popal understood intuitively was that the dynamics of armed conflict in Afghanistan had gone digital. Even the Taliban, a group famously reluctant to embrace the digital age, was using “strikingly sophisticated social media tactics” during its takeover of the country. The group was well aware of how information on social media could be used to identify civilians for targeting and retaliation, setting up checkpoints around Kabul where they seized smartphones to assess the contents for evidence of Western affiliations. In the midst of an already tenuous security environment, social media presented a new landscape of risks for civilians, who began rushing to delete content from their social media profiles that could endanger them in a very different Afghanistan.
But at the same time, one of the most innovative efforts to protect Afghans from such digital risks came from a perhaps unlikely source: Facebook. The tool, called “locked profile,” was a novel, one-click feature that enabled Afghan users to immediately lock down their social media accounts, preventing anyone who wasn’t their friend from seeing posts on their timeline, or from downloading or sharing their profile photo. Facebook also removed the ability to view and search Afghan users’ “friends” lists to prevent people from being targeted through their affiliations. LinkedIn took similar steps, deploying an analogous feature to temporarily hide the connections of its users in the country.
While these features, at first glance, might seem a paltry contribution to civilian protection against the wide-ranging dangers posed by the Taliban regime, they illustrate how social media offers more than just risks. Digital spaces also provide a wealth of opportunities to support prevention strategies, and understanding the tools available within them can support policymaking that accounts for the new realities of atrocity scenarios.
Missing from the Atrocity Prevention Toolbox: Social Media Interventions
The field of atrocity prevention no longer assumes that intervention in an atrocity-risk setting is limited to either military intervention or no action at all. Over time, this “all-or-nothing” approach has been replaced by the idea of an atrocity prevention “toolbox,” composed of legal, diplomatic, informational, and economic interventions that can be carefully combined and sequenced to support tailored prevention strategies. As articulated by the Simon-Skjodt Center for the Prevention of Genocide at the United States Holocaust Memorial Museum, “The concept of a toolbox is valuable in identifying a large number of actions that could be used to help prevent mass atrocities,” and can help to “counter the misconception that policy makers’ choices when facing a mass atrocity crisis amount to acquiescence or forceful intervention.”
While the existing toolbox includes some interventions that make use of the digital environment, atrocity prevention tools within the domain of social media have not been deeply explored. Scholars and policymakers are now familiar with the potential for social media platforms to contribute to the risk of mass violence, such as by spreading misinformation or inciting violence, but there is far less awareness of how social media can help prevent atrocities, or of the specific interventions that might support prevention efforts. In a forthcoming report to be published by the United States Holocaust Memorial Museum, I aim to address this gap, exploring the landscape of social media product, policy, and operational interventions that offer potential to support core atrocity prevention strategies.
There is, in fact, a remarkable diversity of atrocity prevention tools available in the digital space, and several of them hold potential to support two well-known atrocity prevention strategies: protecting vulnerable civilian populations and degrading the capacity of perpetrators to commit mass atrocities. These tools can be grouped not only by the strategy they may support, but also into a typology based on their function and theory of change.
Supporting Civilian Protection
Facebook’s “locked profile,” for instance, can be considered an intervention for civilian protection. Its function is to support privacy by restricting the visibility of digital content that may put civilians at heightened risk. In many ways, these digital privacy interventions are a modern analogue to efforts throughout history to shield the identities of persecuted groups in moments of atrocity risk, such as when Jews refused to wear the compulsory Star of David during the Holocaust.
Other social media interventions that can support civilian protection serve different functions, such as supporting early warning efforts, or connecting social media users to crisis resources and credible information during periods of heightened risk. One way this can be implemented is by creating centralized landing pages or “information hubs” on social media platforms that compile reliable and authoritative information about emerging events. In March 2022, for example, Twitter developed centralized landing pages (called “Twitter Moments”) to share real-time news and resources related to the war in Ukraine. By September of that year, those pages had nearly 39 billion impressions. Similar interventions could aim to amplify content posted by reputable humanitarian organizations, such as posts relaying evacuation corridors or the location of food and medical aid distribution.
Social media interventions, of course, are not without risk to civilians – and context matters enormously. For example, privacy interventions like “locked profile” can help guard against some risks, like the ability to mine users’ friends lists for evidence of affiliations, but do little to protect users against other risks, such as being targeted on the basis of their surname or their skin color as depicted in their profile picture.
And the creation of information hubs is only as valuable as the content shared within them. Given how distant social media companies can be from quickly changing events on the ground, they could easily endanger civilians if their interventions are managed poorly, or if they are designed and developed in isolation from those who understand atrocity risk dynamics and local context.
Degrading Perpetrators’ Capacity to Commit Atrocity Crimes
In addition to social media tools that can support civilian protection, there are innovative tools to advance another core atrocity prevention strategy: degrading perpetrators’ capacity to commit mass atrocities. Recognizing that perpetrators can weaponize social media, this category of tools aims to disrupt efforts to disseminate exclusionary ideologies, deceive communities, or incite violence through social media.
Such interventions include, for example, preventing perpetrators from establishing a large-scale presence on social media platforms in the first place, in the same way that other initiatives seek to deny perpetrators access to the global financial system or known weapons caches. For years, trust and safety teams at responsible social media companies have worked to prevent violent actors from maintaining a presence on their platforms, leveraging both investigative techniques and policy tools to detect, disrupt, or de-platform such networks.
This category can also include interventions that provide additional context on inflammatory digital content posted by actual or would-be perpetrators to reduce their ability to persuade people of dangerous rumors or to incite violence. Pre-bunking initiatives, for example, can help “inoculate” social media users against common misinformation narratives and tropes before they encounter them in the wild. Similarly, placing warning labels over content can add context about what is depicted or asserted, blunting the impact of mis/disinformation. While these interventions may not prevent perpetrators from committing atrocities, such steps might reduce perpetrators’ ability to mobilize widespread community participation in mass violence.
Expanding the Toolbox
To date, relatively little has been publicly known about the existence and efficacy of many of the innovative tools and features social media platforms have deployed in crisis settings. More research and discussion are needed to understand these tools and their ultimate effectiveness in supporting atrocity prevention, so that policy decisions on the use of social media tools can be informed by data about both their potential impact and limitations.
Notably, many of the decisions about the design and use of these tools have been made in isolation from atrocity prevention experts and affected communities. Social media companies must not only make use of available tools and product features, but also invest in building internal atrocity prevention capacity and expertise, and engage with local communities to expand companies’ awareness of emerging risks. They should also prepare for moments of crisis, including through tabletop exercises and scenario-based simulations that can test the use of these tools and interventions – as well as their unintended consequences.
By bridging the opportunities presented by social media with the expertise of those well-versed in the dynamics of mass violence, the atrocity prevention field can expand its toolbox to include a rich new set of interventions.