(Editor’s Note: This article is part of the Just Security symposium “Thinking Beyond Risks: Tech and Atrocity Prevention,” organized with the Programme on International Peace and Security (IPS) at the Oxford Institute for Ethics, Law and Armed Conflict. Readers can find here an introduction and other articles in the series as they are published.)

A series of videos began circulating widely across social media platforms in Mali in January 2022, featuring what appeared to be computer-generated voiceovers discussing domestic politics and France’s military presence in the country. As documented by France24, these synthetic media campaigns formed part of an emerging pattern of AI being used to manipulate public opinion in conflict zones. The incident sparked international concern, with media-forensics experts noting how such technology could be deployed to create confusion about military operations and undermine trust in legitimate information sources. Incidents like these exemplify how the capabilities of generative AI may fundamentally transform conflict dynamics and security landscapes worldwide.

Understanding these emerging challenges requires first grasping how generative AI differs from traditional artificial intelligence. While earlier AI systems could classify and analyze existing content, generative AI creates entirely new material by learning patterns from vast training datasets. Think of it as the difference between a film critic who can analyze movies and a filmmaker who can create entirely new films based on their understanding of cinematic techniques. Modern generative AI systems serve as these digital creators, capable of generating increasingly convincing text, images, audio, and video that can be nearly indistinguishable from human-created content.

This ability fuels new and dangerous forms of manipulation, from mass-produced propaganda to hyper-personalized disinformation. Yet, these same capabilities also hold promise for mitigating atrocities, enabling early detection, crisis response, and digital evidence authentication. Understanding both sides of this equation is crucial for crafting effective policy responses.

The most sophisticated AI systems today rely on a “transformer” architecture that can capture complex contexts and relationships. These neural networks process information through multiple layers, each refining the output based on patterns learned from training data. The result is synthetic content that becomes increasingly difficult to distinguish from authentic material.
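To make this concrete, the short Python sketch below shows how a pre-trained transformer model continues a text prompt. It uses the open-source Hugging Face transformers library and the small, publicly available GPT-2 model purely as an illustration of the mechanism; these choices are assumptions for the example, and production systems are far larger and more capable.

```python
# Minimal sketch of text generation with a transformer model. Uses the
# Hugging Face "transformers" library and the small GPT-2 model purely for
# illustration; real systems are far larger and more capable.
from transformers import pipeline

# Load a pre-trained transformer-based text generator.
generator = pipeline("text-generation", model="gpt2")

# The model extends a prompt by repeatedly predicting likely next tokens,
# with each network layer refining the representation learned from training data.
prompt = "Reports from the region suggest that"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```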

Threat Vectors

This technological capability creates three primary threat vectors that security professionals, policymakers, and legal practitioners must understand. The first is the mass production of harmful content at unprecedented scale and minimal cost. Consider how in Venezuela, investigators identified a coordinated campaign using AI-generated “news hosts” to spread government propaganda. These synthetic anchors appeared convincingly real across hundreds of videos, demonstrating how generative AI allows bad actors to flood information channels with deceptive content at a fraction of traditional production costs.

The second, more insidious threat comes from enhanced personalization and precision in propaganda delivery. Modern generative AI systems can analyze individual online behavior patterns to generate hyper-personalized content that aligns with specific beliefs and biases. A precursor to this capability was demonstrated dramatically when deepfake robocalls impersonating then-President Joe Biden targeted voters in New Hampshire in January 2024 ahead of the presidential primary in which he was seeking re-election. The incident, the first known deployment of deepfakes in U.S. national politics, revealed how synthetic media can make propaganda more widespread. Content personally tailored with generative AI would make the potential harms of synthetic media all the more pervasive.

The third, and perhaps most concerning, threat vector is generative AI’s ability to broadly disrupt information environments through what researchers have called compositional deepfakes. This sophisticated technique embeds AI-generated content within layers of authentic material — Microsoft Chief Scientific Officer Eric Horvitz explains that, in a compositional deepfake, “a sequence of two fabricated ‘past’ deepfake media pieces are injected between two world occurrences and time-stamped as happening at appropriate times between the two events. Moving into the future in this canonical synthetic history, an in-world event is fabricated to complete the persuasive storyline.” In other words, fabricated past events are injected between real-world occurrences to shape a manipulated perception of history and complete a persuasive storyline. This creates an information ecosystem in which truth and falsehood are woven together and become increasingly difficult to disentangle.

The “Liar’s Dividend”

These capabilities become particularly dangerous when combined with what experts call the “liar’s dividend” — a phenomenon defined by professors Robert Chesney and Danielle Citron, in which the mere existence of deepfake technology allows bad actors to dismiss authentic evidence as synthetic. Courts have encountered instances in which genuine evidence of grave violations was challenged as “AI-generated” by parties seeking to evade accountability. This creates a double-bind where synthetic content can both create false narratives and help discredit true ones.

The sophistication of these deception capabilities continues to advance rapidly. In Burkina Faso, following the military’s seizure of control, researchers identified AI-generated avatars posing as “pan-Africanists” and Americans promoting support for the new government. These synthetic personas demonstrated how generative AI enables the creation of artificial grassroots movements that can influence public opinion while obscuring their true origins.

Promising Tools

While the risks of abuse are serious, generative AI also offers promising tools for preventing and responding to mass atrocities. Research from the United States Holocaust Memorial Museum indicates that atrocities are preventable through early warning and action, and AI can enhance these prevention efforts in several crucial ways.

Consider how generative AI can strengthen early warning systems by rapidly processing vast datasets from social media, news sources, and satellite imagery to identify potential indicators of violence. When analysts monitor regions for rising tensions, they typically face an overwhelming volume of information. Generative AI can help by quickly identifying patterns in hate speech, social unrest, or other warning signs that might otherwise go unnoticed. This capability may prove particularly valuable in areas that are lower priority for collection and analysis, beyond well-known global flashpoints. The technology also shows promise in enhancing governmental capacity for atrocity prevention. The U.S. State Department’s internal AI system, StateChat, represented an early effort to streamline administrative processes and allow personnel to focus more on critical thinking about real-world problems.
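As a purely illustrative sketch of the pattern-spotting described above, the snippet below uses an off-the-shelf zero-shot text classifier from the Hugging Face transformers library (the model and threshold are chosen only as examples) to flag social media posts that may contain incitement or hate speech for analyst review. Real early-warning systems combine many more signals, local-language models, and human judgment.

```python
# Illustrative sketch only: flagging posts that may warrant analyst review.
# The model, labels, and threshold are example choices, not a recommended setup.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

candidate_labels = ["incitement to violence", "hate speech", "neutral reporting"]

posts = [
    "Authorities announced road closures ahead of the holiday.",
    "It is time to drive those people out of our town for good.",
]

for post in posts:
    result = classifier(post, candidate_labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    # Flag posts for human review rather than acting on them automatically.
    if top_label != "neutral reporting" and top_score > 0.7:
        print(f"FLAG for review ({top_label}, {top_score:.2f}): {post}")
```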

In the realm of humanitarian response, generative AI has demonstrated potential for optimizing resource distribution and simulating response strategies. During crises, for example, the technology can analyze refugee movements and resource shortages to predict and optimize the allocation of aid. The DARPA Media Forensics program explored how these capabilities might be combined with human expertise to improve crisis response effectiveness.
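The toy example below hints at what “optimizing the allocation of aid” can mean in practice: given assessed needs at several displacement sites and a limited supply, allocate proportionally to need. The sites, figures, and the simple proportional rule are invented for illustration; real humanitarian planning relies on far richer models of need, access, and logistics.

```python
# Toy sketch of aid allocation under a supply constraint. All numbers and
# site names are hypothetical, and the proportional rule is a simplification.
def allocate_aid(needs: dict[str, int], total_supply: int) -> dict[str, int]:
    """Allocate supply proportionally to assessed need, capped at each site's need."""
    total_need = sum(needs.values())
    if total_need <= total_supply:
        return dict(needs)  # Every site can be fully served.
    return {site: round(total_supply * need / total_need) for site, need in needs.items()}

# Hypothetical assessed needs (e.g., food kits) at three displacement sites.
needs = {"Site A": 5000, "Site B": 12000, "Site C": 3000}
print(allocate_aid(needs, total_supply=10000))
```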

Counter-disinformation efforts also stand to benefit from generative AI capabilities. While the technology can generate harmful content, it can also help address disinformation in real time by producing fact-based counter-narratives. Organizations like the Coalition for Content Provenance and Authenticity are developing standards for tracking the origin and modification history of digital content, offering hope for maintaining information integrity in an era of synthetic media. Generative AI can also enable faster and cheaper scaling of content moderation and platform detection systems, which can stop disinformation before it reaches a wide audience.
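Provenance standards rest on a simple idea: record verifiable facts about a piece of content when it is created, then check them later. The sketch below illustrates that idea by comparing a media file’s cryptographic hash against a previously recorded manifest. It is a conceptual illustration only, not the actual C2PA format or tooling; the file names and manifest structure are hypothetical.

```python
# Conceptual sketch of a provenance check: compare a media file's hash against
# a manifest recorded at creation time. This illustrates the idea behind
# provenance standards such as C2PA; it is not the real C2PA format or API,
# and the file names and manifest fields are hypothetical.
import hashlib
import json

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_against_manifest(media_path: str, manifest_path: str) -> bool:
    """Return True if the media file still matches its recorded provenance hash."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return file_sha256(media_path) == manifest.get("sha256")

# Hypothetical usage: a newsroom checks a received video against its manifest.
# print(verify_against_manifest("report.mp4", "report.manifest.json"))
```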

However, realizing these benefits while mitigating risks requires robust policy frameworks and international cooperation. The United Nations Office on Genocide Prevention emphasizes several key priorities for governments and international institutions. First, they must invest in rights-respecting detection technologies and provenance standards that can trace digital content’s origins without compromising user privacy and safety. Second, international standards for synthetic media creation, dissemination, and detection must be established, including specific protocols for handling digital evidence that may have been manipulated.

Legal frameworks must also evolve to address these challenges. The International Criminal Court has established new mechanisms for accepting digital evidence from the public and has even begun using AI to analyze evidence, but these efforts remain in their early stages. Criminal penalties for the production and distribution of harmful synthetic media, starting with bans on non-consensual sexual imagery, need to be implemented consistently across jurisdictions. Additionally, diplomatic efforts must focus on creating agreements regarding the responsible use of generative AI in conflict situations.

For non-governmental organizations working in conflict zones, several practical steps emerge as essential. These include building AI media literacy through public education initiatives, organizing workshops on synthetic media awareness, and establishing community feedback mechanisms. Organizations should also focus on facilitating cross-sectoral dialogues and updating their internal practices to account for both the risks and opportunities presented by generative AI.

Looking ahead, success in leveraging generative AI’s benefits while countering its misuse will require unprecedented collaboration between governments, technology companies, civil society organizations, and affected communities. The goal should not be to restrict generative AI development, but rather to create frameworks that promote responsible use while ensuring that communities impacted by conflict remain at the center of decision-making processes.

The future of atrocity prevention may well depend on the ability to effectively navigate these dual aspects of AI technology. This requires not just technological solutions, but a comprehensive approach that combines technical innovation with human rights protection, community engagement, and international cooperation.

IMAGE: A phone screen displays a video featuring an AI-generated avatar depicting a TV news anchor on a fictional Venezuelan newscast available on YouTube called “House of News Español.” The newscast has drawn controversy for mis- and disinformation favoring Venezuelan President Nicolas Maduro’s ruling party. The subtitle shows the avatar is saying, “How true is it that Venezuela is such a poor country?” (Photo by Federico Parra/AFP via Getty Images)