On Sept. 4–5, in a potent demonstration of the efficacy of legal countermeasures founded on actionable cyber threat intelligence, the U.S. Department of Justice (DOJ) and Treasury Department unleashed a whirlwind of seizures, indictments, and sanctions targeting assets of Russia’s malign influence operations. This spate of legal action comes as the Kremlin seeks not only to undermine U.S. support for Ukraine, but also to erode Americans’ trust in democratic institutions in the lead-up to the 2024 presidential election. Russia’s efforts to influence U.S. public opinion, which the Office of the Director of National Intelligence (ODNI) has identified as the primary threat to the U.S. elections, endanger the internal stability and national security of the United States by amplifying divisive rhetoric on contentious domestic issues and thereby exacerbating political polarization.

The most striking aspect of the U.S. government’s response to Russia’s influence operations is the DOJ’s novel use of trademark law to seize 32 web domains that constituted part of Doppelganger, a Russian operation that tried to pass off its pro-Kremlin news outlets as legitimate Western media. The seizure of these domains — together with other actions from the DOJ and Treasury — thus stands as salient proof that the skillful joint employment of cyber threat intelligence and legal action can effectively counter the Kremlin’s subversive threats, even as Russia continues to violate international law. There remain, however, a variety of challenges to comprehensively mitigating foreign malign influence operations, and lawmakers, government officials, and social media companies must heed these difficulties in order to render ongoing countermeasures effective in the long run.

Parallel Worlds: How Doppelganger Creates Alternative Political Realities

Doppelganger was spearheaded by the Kremlin as an alternative way to reach Western audiences in the aftermath of the February 2022 invasion of Ukraine, when influential Russian state-owned media outlets such as RT ceased televised broadcasts in the U.S. following bans by regulatory authorities in both the United Kingdom and European Union. The operation utilizes a complex network of social media accounts, automated bots, paid advertisements, and fraudulent news websites to deceive readers into believing that its Kremlin-aligned content is in fact authentically produced by domestic Western media.

Since its inception in 2022, Doppelganger has evolved into a comprehensive operation that exploits divisive issues – including immigration, energy prices, government spending, and the war in Ukraine – to foment discord within the United States and among NATO allies. In doing so, Doppelganger — and Russian malign influence operations more broadly — aims to undermine U.S. national security by degrading political cohesion within the United States and with its partners abroad.

Doppelganger operates through information laundering, a process in which Russian operatives seed content containing Kremlin-aligned narratives into public discourse by using obscure “burner” social media accounts to share links to articles published on illegitimate web domains. Such illegitimate or “cybersquatting” domains — whose appearance is often virtually indistinguishable from the legitimate domains they imitate — mimic a variety of legitimate U.S. and international media outlets from across the political spectrum. Examples include washingtonpost[.]ltd and welt[.]media, spoofs of legitimate media outlets such as The Washington Post (washingtonpost.com) and Die Welt (welt.de).
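The spoofing pattern described above — a familiar outlet name paired with an unexpected top-level domain — is also what makes such cybersquatting detectable. As a minimal illustrative sketch (not any agency’s or registrar’s actual tooling), lookalike domains can be flagged by comparing the registrable name of a candidate domain against a watchlist of legitimate outlets; the watchlist and candidate strings below are drawn from the article’s own examples, and the similarity threshold is an assumed parameter:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist of legitimate outlets, plus candidate domains to screen
# (the spoof examples come from the Doppelganger domains named above).
LEGITIMATE = ["washingtonpost.com", "welt.de"]
CANDIDATES = ["washingtonpost.ltd", "welt.media", "example.org"]


def base_name(domain: str) -> str:
    """Strip the final (top-level) label, keeping the registrable name."""
    return domain.rsplit(".", 1)[0]


def is_lookalike(candidate: str, legitimate: str, threshold: float = 0.8) -> bool:
    """Flag a domain whose name closely matches a known outlet's but whose
    full domain differs -- e.g. the same name under a swapped TLD."""
    if candidate == legitimate:
        return False  # the genuine domain itself is not a spoof
    ratio = SequenceMatcher(None, base_name(candidate), base_name(legitimate)).ratio()
    return ratio >= threshold


for cand in CANDIDATES:
    for legit in LEGITIMATE:
        if is_lookalike(cand, legit):
            print(f"{cand} imitates {legit}")
```

Real-world screening would layer on registration metadata, homoglyph checks, and visual comparison of page designs; the string-similarity pass above only captures the TLD-swap pattern the Doppelganger examples exhibit.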

Not only is this content consumed by ordinary social media users, but some of it is also picked up by large influencer accounts whose operators — often unaware of the content’s provenance — repost it to their own feeds, thus creating a layering process that generates a semblance of authenticity while obfuscating the original source. By leveraging influencer accounts’ broader audiences, Doppelganger is able to disseminate its content to a large swath of users without having to first invest in the time-consuming process of building an authentic following.

Part of what enables Doppelganger to amplify its reach is its use of AI tools. Doppelganger uses AI to generate images and video and to write and post articles and comments; an OpenAI report from May also revealed that the operation has employed OpenAI’s models to translate original content into multiple languages and to convert articles into social media posts. The use of AI tools in influence operations is not unique to Russia — both Iran and China have utilized such assets to generate content as part of their respective Storm-2035 and Spamouflage operations aimed at influencing U.S. public opinion. In each case, AI tools have served to accelerate the pace of operations and further expand their scale by facilitating the rapid generation of content in multiple languages.

Unusual Measures for Unusual Beasts: Finding Effective Legal Mechanisms

Notwithstanding Russia’s employment of AI to expedite and broaden its content dissemination, the imitation of legitimate domains became Doppelganger’s Achilles’ heel. Because Doppelganger’s near-identical copies of legitimate domains reproduced trademarked designs, the DOJ alleged that the spoofs violated the prohibition against trafficking in counterfeit goods and services codified in 18 U.S.C. § 2320. The use of trademark law to seize the domains is an innovative approach to countering foreign malign influence operations, and its legal ingenuity ought to inspire similarly resourceful use of conventional legal mechanisms as effective, actionable countermeasures against disinformation.

In addition to using trademark laws to seize various Doppelganger domains, the DOJ has also utilized anti-money laundering (AML) laws codified in 18 U.S.C. § 1956 to counter Russia’s influence operations. In the recently released affidavit, the DOJ maintained that Doppelganger operatives had violated AML laws by purchasing domains from U.S. registries or registrars (domain name providers) on behalf of sanctioned individuals and entities. These sanctioned individuals and entities include First Deputy Chief of Staff of the Presidential Executive Office Sergei Vladilenovich Kiriyenko, Ilya Gambashidze, Nikolai Tupikin, Social Design Agency (SDA), and Structura National Technology.

Furthermore, in another indictment from Sept. 4, the DOJ accused two Russian nationals and RT employees — Kostiantyn Kalashnikov and Elena Afanasyeva — not only of violating the Foreign Agents Registration Act, but also of laundering $10 million through foreign shell companies to fund and manage a Tennessee-based content creation company — since confirmed to be Tenet Media — in order to advance Kremlin-aligned narratives. Similarly, on Sept. 5 the DOJ unsealed yet another indictment accusing U.S. citizens Dimitri and Anastasia Simes — the former of whom served as a host and producer of the sanctioned Russian media outlet Channel One Russia — of laundering $1 million in payments through the (also sanctioned) Russian bank Rosbank in violation of the International Emergency Economic Powers Act.

What it All Means: How Cyber Intelligence Supports Legal Action 

The seizure of the Doppelganger domains demonstrates that existing legal mechanisms, employed in tandem with increased collaboration among federal law enforcement, the DOJ, and cyber threat intelligence investigators, can constitute actionable and effective countermeasures against foreign malign influence operations. In the case of indictments, however, although “naming and shaming” might send a message to Russia or deter would-be hackers, it is unlikely to have a significant impact so long as the indicted remain at large in countries that have no extradition treaty with the United States.

Alternatively, the Treasury’s Sept. 4 decision to sanction 10 persons and two entities — including Kalashnikov and Afanasyeva — involved in Russia’s broader influence operations stands as a more effective legal mechanism insofar as it directly limits the sanctioned persons’ and entities’ financial maneuverability. Continuing to counter foreign malign influence operations by targeting operatives’ financial channels is thus a prudent countermeasure that the Office of Foreign Assets Control should further pursue.

Additionally, while existing AML laws are an important mechanism for countering influence operations, social media remains a largely permissive space in which foreign actors can conduct malign operations — whether crowdfunding for terrorist organizations or disseminating disinformation more generally — and lawmakers ought therefore to consider increasing regulation of social media platforms’ content moderation policies. Some analysts argue that foreign malign influence operations are countered most effectively by targeting the sources of disinformation rather than the channels propagating it, but seizing domains is practical only when they are registered with registrars in the United States or partner states. Targeting the channels thus matters as well: social media companies such as Meta and TikTok have recently moved to ban Russian state media outlets such as RT and Rossiya Segodnya for engaging in what the former has called “foreign interference activity.”

While banning state media outlets engaged in foreign malign influence operations is an important step in stemming the flow of disinformation, it is only a partial solution. A report from the Institute for Strategic Dialogue demonstrated that, even after being banned in the European Union, content from Russian state media continued to reach European audiences, as Russian operatives utilized “mirror domains” – identical copies of banned domains that use different domain names – to bypass platforms’ detection algorithms. Because a plethora of mirror domains remain active and evade bans targeting official state media, social media companies must take further action to ensure that content derived from banned sources – even when not directly emanating from them – is still blocked. Such mirror domains – and instances where social media accounts simply copy and paste content from Russian state media – might, for instance, be countered through text-similarity techniques drawn from natural language processing.

Given reasonable concerns over freedom of speech, such regulation ought to be handled with care and transparency so as to ensure the necessary maintenance of civil liberties. Rather than any draconian curtailment of freedom of speech, such regulation might involve mandating that social media companies create, maintain, and adequately enforce content moderation policies in line with standards set forth by government officials in consultation with representatives from civil society. As noted in the E.U.’s 2022 Digital Services Act, responsible internal behavior by social media companies is not a hindrance to freedom of expression, but rather fosters a “safe, predictable and trustworthy online environment” in which those rights can be exercised — while guarding against foreign actors’ efforts to pervert those very rights and thus undermine states’ national security and the integrity of their democratic institutions.

Furthermore, government officials would benefit from liaising with tech companies to develop more robust standards — rather than “best effort” approaches — for the regulation of AI content, placing transparency and reasonable disclosure requirements at the forefront. A litany of bills already propose various regulatory requirements for AI; in general, such regulations might mandate watermarking or otherwise labeling AI-generated content, in concert with efforts to promote awareness of such content as part of broader media and digital literacy education initiatives. Finding constructive ways to regulate AI tools and content is particularly important, since recent research suggests that search engines increasingly promote AI-generated content because search engine optimization (SEO) algorithms favor its simple, repetitive quality.

As a final recommendation, cyber threat intelligence investigators and legal teams should work together more closely to identify and seize illegitimate domains before social media networks can spread their content. Given that Doppelganger domains are hosted on a variety of foreign servers, this necessitates increased interagency collaboration as well as cooperation with foreign intelligence services. By deepening partnerships with private cybersecurity firms, social media and tech companies, non-profits, and foreign intelligence services, government agencies can further augment the resource pool at their disposal in countering disinformation. In total, leveraging existing laws, crafting new regulation to meet present challenges, enhancing industry partnerships, and bolstering ties between civil society and government constitute steps that collectively stand to facilitate the development of a more coordinated and adroit response force capable of effectively countering foreign malign influence operations.

IMAGE: U.S. Attorney General Merrick Garland (C), accompanied by, from left, Principal Deputy Assistant Attorney General Nicole Argentieri, head of the Justice Department’s Criminal Division, Deputy Attorney General Lisa Monaco, FBI Director Christopher Wray, and U.S. Assistant Attorney General for the National Security Division Matt Olsen, takes a question from a reporter during an Election Threats Task Force meeting at the Justice Department on September 4, 2024 in Washington, DC. (Photo by Andrew Harnik/Getty Images)