This weekend, hundreds of global leaders will convene at the Munich Security Conference to debate a wide range of pressing global challenges, from defense policy to sustainability. Emerging technology will rank high on the agenda, especially after the rise of China’s DeepSeek upended markets, spurring debate about U.S. and E.U. competition with China. But amid that flurry of activity, one crucial issue is at risk of falling by the wayside: protecting democracy from harmful abuses of AI. 

At last year’s Munich Security Conference, Microsoft, Meta, Google, OpenAI, and 23 other major technology companies unveiled an innovative AI Elections Accord. The agreement set forth voluntary actions the companies would undertake to address “deceptive AI election content” in the numerous elections held in 2024. Those actions included steps to limit the risks of their tools being misused to deliberately create deceptive AI election content, investment in provenance signals to help identify AI-generated content, improvements in detection and incident response, and information sharing across sectors. 

The Accord’s one-year term is ending, but these common-sense efforts must continue. While AI’s impact on last year’s elections was not as big as some had feared, the busy election season saw deepfake videos of politicians professing support for the opposing party or candidates in foreign elections, hundreds of unreliable AI-generated news sites, and confusion over whether audio recordings of politicians were authentic. Foreign adversaries incorporated generative AI into their interference efforts: the U.S. Intelligence Community identified the use of generative AI in Russia and Iran’s persistent attempts to interfere in the U.S. election, and OpenAI reported that Chinese and other foreign state-affiliated actors used its services for influence operations targeted at the United States. The Romanian Constitutional Court even cited the extensive use of AI when it annulled the presidential election in response to evidence of Russian interference. Now, as DeepSeek and other breakthroughs make AI tools ever more accessible and available to adversaries who want to manipulate public discourse, companies should continue to invest in trust and safety. This work is also crucial for addressing manipulation threats outside of elections, such as the growing misuse of generative AI for scams and cyberattacks.

Another Year of Elections

Anyone who needs to be reminded of the continued urgency of this work need only glance at the schedule of upcoming elections. Just one week after the Munich Security Conference concludes, Germany will hold a pivotal federal election. Because Germany is the largest economy in Europe and a key member of a G-7 that is already experiencing significant political change, its election results will have major geopolitical consequences.

More high-profile elections will be held throughout 2025. Canada will choose a successor to Justin Trudeau, its leader of nine years; Australia and Japan will vote in elections set to shape the future of the Asia-Pacific region after the tumultuous ousting of South Korea’s president. A slew of elections across Europe will have consequences for each country as well as for the future of the European Union.

At the same time, threat actors will surely continue experimenting with AI tools as they strive to disrupt elections. German intelligence services’ warning that foreign governments would likely try to interfere with the February 2025 election has already been confirmed: Independent researchers recently uncovered a network of websites featuring AI-generated disinformation that were part of a Russian-linked election interference campaign.

The Next Phase of Tech Policy

The 2024 AI Elections Accord was a welcome recognition that AI could make foreign interference and other disruptions to the electoral environment easier, cheaper, and more effective. The commitments were an admirable start, but they had many shortcomings, including an overall lack of concrete benchmarks for progress. 

Now, the end of the Accord’s time frame presents an opportunity to create a more sustainable, long-term approach to navigating the risks and opportunities that AI presents for democracy. There are five main areas where companies can take this work forward:

First, companies should commit to consistent and well-resourced staffing of their trust and safety teams. In the past, companies have reduced trust and safety staff and implemented time-limited election integrity measures (such as ad pause periods or ‘break the glass’ measures) that missed critical points of the election process falling outside the policies’ timeframes. Consistent oversight is especially important as companies introduce more automated interventions and enforcement. Mainstreaming election integrity into normal operations would ensure adequate coverage throughout the election cycle.

Second, companies, especially generative AI developers, should make concrete commitments to transparency. The transparency commitments in the Accord were weak compared with the Santa Clara Principles, an industry-standard set of transparency and accountability practices for social media companies. While the Accord provided some general examples of transparency, the Santa Clara Principles detail the types of information that companies should disclose, down to numbers on policy enforcement and the accuracy rates of automated decision-making processes. AI companies should emulate the ambition of the Santa Clara Principles and disclose what their election integrity policies are, how those policies are enforced and tested, and when they are applied.

Third, companies should invest in robust testing of their products and interventions. Research by the Center for Democracy & Technology and others has shown the need for this, including evidence that leading chatbots produce incorrect information over a third of the time when asked election-related questions. Proactive testing and interventions help AI companies ensure that their tools perform as intended and produce accurate information — which is what makes the tools safe and useful for customers. 

The fourth and closely related step is for companies to provide independent researchers with better access to data about how their products are used and how their policies are enforced in practice. The recent trend of limiting researcher access to data, including Meta’s shutdown of CrowdTangle and Twitter (now X) making data access prohibitively expensive, undercuts companies’ own interests in developing quality services. With access to data, researchers and non-profits can help validate and stress-test emerging technology and provide more constructive feedback to companies.

Finally, continued collaboration is important to the success of all of these areas. Companies ought to share best practices, ensure interoperable technology, and exchange information about digital threats. They should proactively and regularly seek input from civil society and experts, who are often best placed to identify risks and needs in the countries where companies operate. Companies should develop formal channels for input, such as safety advisory councils that provide accountability and feedback on company policy, and engage with multi-stakeholder groups, following the models of the Christchurch Call Advisory Network and the Freedom Online Coalition Advisory Network.

Investing in consistent policy and enforcement will save companies from losing ground between electoral periods, and improve public trust in technology companies’ capacity to handle adversarial threats. Just as elections reflect a multi-year social and political environment, rather than a single day, companies’ election-related policies should be integrated into a greater commitment to protecting democracy in countries around the world—whether it is an election year or not.

IMAGE: Illustration of robots and AI bots casting ballots, representing artificial intelligence disrupting elections. (via Getty Images)