(Editor’s Note: This article is part of the Just Security symposium “Thinking Beyond Risks: Tech and Atrocity Prevention,” organized with the Programme on International Peace and Security (IPS) at the Oxford Institute for Ethics, Law and Armed Conflict. Readers can find here an introduction and other articles in the series as they are published.)
In what appears to be a brightly colored children’s cartoon video with prancing unicorns and lullabies, bold text flashes across the screen in all caps: “We know that your child cannot read this. We have an important message to tell you as parents.” The lullaby stops, and another message appears: “40 infants were murdered in Israel by the Hamas terrorists (ISIS)… [music resumes] Now hug your baby and stand with us.”
Created by Israel’s Foreign Affairs Ministry, the video was published on the ministry’s official YouTube channel days after Hamas committed its harrowing atrocities in October 2023. The video – and others that similarly defend Israel’s bombardment of Gaza – appeared not just on Israel’s own official social media channels, but as targeted advertisements across social media, YouTube, U.S. streaming services like Hulu, TV broadcasts of the 2024 Super Bowl, and children’s video games. Some of these ads featured gruesome images from the attack itself. Others, like those on Hulu, showcased a Gaza “without Hamas” – “five star hotels,” “stunning beaches,” and “charming boardwalks” – before cutting to a child holding a weapon under Hamas rule.
The apparent intent of these State-sponsored ads was to generate moral outrage and support for Israel’s “Operation Swords of Iron” in Gaza, which has displaced 90 percent of the population and killed more than 50,000 Palestinians (according to the health ministry and based on conservative estimates). According to Politico, in the initial days after the October 7th attack, the Israeli Foreign Affairs Ministry ran 30 such ads on X that reached 4 million users, as well as 75 ads across YouTube. Additionally, following accusations that staff of the UN Relief and Works Agency for Palestine Refugees (UNRWA) were involved with Hamas, the Israeli Government Advertising Agency bought Google Search ads in August 2024 to discredit the agency. Leveraging the online environment further for pro-war content, Israel’s Ministry of Diaspora Affairs also paid millions of shekels (about $2 million) to STOIC (an Israeli consulting firm) for an influence campaign targeting both U.S. lawmakers and “mostly younger, progressive Americans.” The tactics included a website that labeled U.S. universities “safe” or “unsafe” for Jewish students and the use of “hundreds of fake accounts that posed as real Americans on X, Facebook and Instagram to post pro-Israel comments.” These ads have appeared against the backdrop of claims of “Pallywood” propaganda from pro-Israel social media accounts, which accuse Palestinians of faking their suffering.
The use of social media posts and ads to build support for war is, of course, not unique to the Israel-Hamas war, which reached a (now-broken) ceasefire agreement in January 2025. In this article, I am not taking a stance on the ethics of using propaganda advocating for the resort to war – that issue is ripe for further discussion among just war theorists. Rather, my point is this: as State-sponsored messaging and targeted ads defending war unfold online, neutral international organizations must amplify their existing efforts to educate the public about the just conduct of war – including, I argue, by disseminating targeted social media ads.
Specifically, neutral international institutions, like the United Nations and the International Committee of the Red Cross (ICRC), should build on existing efforts to educate the public and fighters on the front lines via targeted online content (not just social media posts) that promotes the ethics and law of conflict. Such targeted ads on social media ought to encourage compliance with international humanitarian law (IHL, or the law of armed conflict, LOAC). This should occur as soon as violence breaks out and as the conflict unfolds.
Of course, the ICRC and other organizations already do an excellent job of posting content that advocates for compliance with LOAC on their platforms. The ICRC consistently posts about IHL on its X account, including, for instance: interviews with experts, content specific to particular crises, the cardinal principles of distinction, proportionality, and precaution, and the illegality of intentionally targeting health facilities.
Moreover, the ICRC has even released an interactive film – “If War Comes to You” – which highlights war from the perspective of a civilian, a soldier, and an aid worker, and prompts users to learn more about IHL. Yet it is imperative that information pertaining to IHL be targeted to those who likely do not “follow” the ICRC, or whose algorithms may not naturally display ICRC content. Indeed, digital platforms such as YouTube, Instagram, Facebook, TikTok, Snapchat, Google, and X present a hitherto neglected way to circulate targeted content about LOAC, getting IHL information behind the front lines and into individuals’ pockets.
Deploying these ads is especially relevant today, as combatants are notoriously bringing their personal devices onto the battlefield, documenting their fight in real time, attempting humor in war, posting potential evidence of war crimes, showcasing pictures of themselves in uniform on dating apps like Tinder, or thanking followers for donations used to buy war supplies. As combatants sit in the trenches – scrolling, posting, “liking,” taking selfies – they can be targeted with online ads that espouse the laws of war.
A Proactive Approach: Calling for Compliance with IHL
Elsewhere, I have urged States (together with NGOs, thought leaders, grassroots organizations, and tech companies) to take proactive action to prevent and mitigate atrocity crimes via cyberspace, notably by developing and launching (what I refer to as) targeted information campaigns. These are bespoke ads or tailored messages that advocate respect for human rights and restraint from unjust violence. Similarly, targeted IHL ads could form part of such messaging campaigns that appear on the social media feeds of individuals – combatants and noncombatants alike – in conflict zones.
I will return to the important questions of who develops, distributes, and pays for these ads below. Suffice it to say here: social media platforms should allow such ads to appear on all relevant users’ feeds. Currently, social media users are able to adjust or opt out of personalized ads on their platforms. But when it comes to ads calling for compliance with IHL, the function to “turn off,” “unfollow,” or “hide” such advertisements should be disabled. Moreover, IHL ads ought to be “prioritized” to the top of individuals’ newsfeeds during the conflict. Neutral organizations could also pay corporations to “boost” such messages and expand the targeting to “lookalike” audiences or “doppelgängers” – users with similar traits to those already being targeted.
Because combatants bear legal and moral obligations to observe IHL, users identified as combatants – such as those who list a military occupation or post content of themselves fighting – should be consistently targeted with such ads. As I note elsewhere, these targeted messages serve two functions. First, they can ideally help inoculate combatants against following illegal orders or committing atrocity crimes. Second, perpetrators could not later claim they were “unaware” that certain acts – such as mutilating corpses, inhumanely treating or abusing prisoners, or committing perfidy – are international crimes. Such targeted ads would also serve as a bulwark against soldiers asserting they were simply following (illegal) orders from their superiors.
While there is no parallel expectation of civilians who are not directly taking part in hostilities, those in conflict-affected areas should still receive tailor-made IHL ads to enhance their awareness of what is permissible or not permissible in war, especially when they are caught in the crossfire. This approach ensures that civilians are aware of their rights in war and can ideally help them better understand the consequences of any violations committed by armed forces.
Disseminating such IHL content is particularly urgent today given the increasing involvement of civilians in war, which makes distinguishing between civilians and combatants more difficult. Indeed, it is possible that civilians, through their online and offline activities, can become direct participants in hostilities (DPH) and forfeit their protected status – even if only for such time as they take a direct part in hostilities.
Regardless of their status, any civilian user who posts about a conflict should receive IHL ads to improve their understanding of the laws that govern the means and methods of war. To be clear, it remains crucial to strictly distinguish between civilians and combatants when it comes to intentional targeting with (lethal) force. But that is wholly unrelated to the issue of who should receive information pertaining to war’s lawful conduct.
How Targeted IHL Ads Might Appear on Social Media
Much like any other social media advertisement, targeted ads regarding IHL may appear as images, videos, memes, or audio, and can be sponsored by influencers working with the ICRC or the UN. Crucially, the content must be tailored to the socio-cultural context of the conflict, rather than a top-down, Western-centric, or hegemonic projection of liberal democratic values. A rote recitation of the laws of war will not do; the goal is maximum resonance with the intended audience. For instance, as the 2018 ICRC “Roots of Restraint” project highlights, concentrating strictly on IHL is “not as effective at influencing behaviour”; instead, the ICRC finds that “linking the law to local norms and values…gives it [the law] greater traction.” Thus, the report recommends “finding innovative and locally adapted ways to reinforce norms of humanity… including via digital media.” Using online platforms to share culturally specific translations of IHL is one such way to help promote adherence to LOAC. That said, care must be taken to ensure that tailoring or translating IHL into cultural contexts does not allow actors to plausibly reinterpret the law in ways that suit their political agenda but are unlawful. All messaging must adhere to the central tenets of LOAC, even when it is tied to local norms and values.
If this idea sounds radical or unrealistic, a different crisis has provided proof of concept. During the COVID-19 pandemic, Meta users were consistently targeted with informational ads (especially at the top of newsfeeds) about practicing safe physical distancing, washing hands, wearing facemasks, and getting vaccinated. And whenever anyone posted content on Facebook or Instagram referencing COVID-19 or vaccines, Meta enabled an automatic popup to link readers to information from reliable sources like the World Health Organization (WHO) or the Centers for Disease Control and Prevention (CDC). As of April 2020, Meta also started sending “messages in News Feed to people who liked, commented on or reacted to posts with misinformation that [it] removed for violating” its policies and has since “redesigned these [messages] as more personalized notifications to more clearly connect people with credible and accurate information about COVID-19.” Creatively designed online ads promoting adherence to IHL should follow suit.
Who Designs? Who Distributes? Who Pays?
International organizations – particularly the United Nations and the ICRC – are likely to be the most credible and neutral sources for promulgating information about IHL. To be maximally effective, these messages should also ideally be relayed or reshared by influential or respected personalities, such as political, religious, or community leaders. Even celebrities and social media influencers with large followings could be involved, so long as they disclose they are working with the UN or the ICRC.
As for paying for such targeted ads, ideally social media companies should allow such IHL content (sponsored by the UN and the ICRC) to be disseminated on their platforms without charge – essentially, as a public service. Again, there is precedent for rolling out life-saving ads for free in a crisis: the COVID-19 ads noted above were largely sponsored by the platforms themselves. For instance, Meta founder and CEO Mark Zuckerberg announced that the company would provide the WHO “as many free ads as they [sic] need” to “get out timely, accurate information on the coronavirus” and combat misinformation about the virus. In the same post, Zuckerberg said the company would also give “support and millions more in ad credits to other organizations,” like the CDC and the U.N. Children’s Fund (UNICEF), and “[work] closely with global health experts to provide additional help.” Soon after, Google announced it would increase its original $250 million in free ads to WHO and offered an additional $340 million in free ads to small businesses in light of the economic blow from COVID-19.
Given recent developments in the technology sector, however, it is doubtful whether such altruism in support of public safety initiatives will be forthcoming. Unlike in 2020-2021, the voluntary cooperation of social media platforms in allowing free targeted ads for social good now appears unlikely. Since assuming control of X, Elon Musk has utilized the platform to promote and defend current U.S. President Donald Trump and his policies. Musk is highly unlikely to allow targeted IHL ads on X explaining, for example, that mass expulsion or ethnic cleansing is illegal under international law. Similarly, in January 2025 Mark Zuckerberg announced Meta would be removing third-party fact-checking for posts on Facebook, Instagram and Threads in the United States, instead relying on “Community Notes” (similar to X).
But social media companies have a moral obligation to provide targeted ads advocating for compliance with IHL on their platforms free of charge and to prioritize pro-IHL content in areas of the world wracked by conflict. This ethical duty is especially pronounced because tech companies profit from both the vulnerable population’s and perpetrators’ use of their platforms in crisis zones. Corporations know full well how to privilege certain content; they intentionally manipulate algorithms to encourage engagement and derive profit. Whether they will actually deploy targeted IHL ads across social media is a separate question. If tech companies refuse to carry targeted IHL ads free of charge – though I argue they ought to – then perhaps international organizations simply need to pay them to do so.
Moreover, although it is hard to say definitively whether targeted IHL ads would increase combatant compliance with international law, there are four reasons why this strategy should be pursued. First and foremost, if platforms are allowing States to circulate pro-war ads irrespective of their effectiveness in mobilizing civilian sentiment, they should also prioritize targeted ads and content that encourage individuals to comply with IHL in those wars.
Second, personalized, targeted social media advertising is a burgeoning industry, generating hundreds of billions of dollars for social media companies. Presumably, advertisers overwhelmingly choose to pay for targeted ads on social media because they are effective: in 2022, 37.9 percent of people surveyed said they had purchased something after being exposed to an ad 1 to 25 times. In a different context, research on political advertising on social media shows mixed results for shifting voter preferences. Of course, purchasing a product, influencing voter preference, and abiding by international law in war are entirely different matters. But this data suggests at least some potential for persuasion.
Third, such ads may help create the following logic: if combatants are consistently being targeted with ads calling for adherence to IHL while in the field, they may feel as though they are being “watched,” which could induce greater compliance. (Indeed, many people feel uncomfortably surveilled through their devices, including when targeted with ads for products.)
Finally, launching targeted IHL ads is a cheap option to explore – one that the international community and States committed to international law cannot afford to ignore. In light of the bloodshed from Gaza to Goma, the international community should consider deploying targeted IHL ads, in addition to continued UN Security Council condemnations and pleas for the cessation of hostilities.
Beyond Doom Scrolling: Signaling a (Re)Commitment to IHL
Wartime propaganda has a muddied history. The threat or use of force against a State is a clear violation of the U.N. Charter unless authorized by the U.N. Security Council or a lawful exercise of self-defense under Article 51. Yet, online lobbying for (discriminate and proportionate) violence against combatants may be ethically and lawfully permissible. For instance, shortly after the 2022 full-scale invasion of Ukraine, Facebook permitted calls for violence against Russian combatants, with phrases like “death to Russian invaders,” despite such rhetoric typically contravening its community standards. Arguably, this is because combatants are liable to be targeted with kinetic force in war, assuming all other IHL rules are met and that the targets do not take on a protected status such as hors de combat.
In addition to pro-war ads and content calling for violence against combatants, there appears to be little normative aversion to States using digital posts or ads on social media to recruit soldiers for war – this century’s equivalent of the iconic American poster “I Want YOU for the U.S. Army,” first issued during World War I. In fact, the dissemination of online ads justifying war under certain circumstances – such as self-defense, as in the case of Ukraine – seems to be broadly accepted. What has not been given much thought, however, is the case for circulating online ads that promote compliance with the laws of war.
Combatants – and the civilians on whose behalf they fight – are owed constant reminders of their duties, privileges, and protections in war. Social media’s ubiquity and its highly curated advertising make it a potentially powerful conveyer of international humanitarian law. Amid such bloodshed and evidence of war crimes, the international community must do more to foreground LOAC. Pushing out targeted ads on social media urging compliance with the laws of war is the very least it can do.