Voters in over 50 countries will go to the polls in 2024. These elections will be some of the first in which generative artificial intelligence (AI) can be used to create disinformation, including AI deep fakes of candidates or elected officials. Tech companies will need to learn how to combat the misuse of AI tools on their platforms at speed and across global markets.

Social media platforms and AI companies are making public commitments to combat the use of AI to spread mis- and disinformation in elections through policies aimed at greater transparency, collaboration with civil society and academia, and media literacy. These policies suffer from shortcomings, however, including vague and overly broad guidance that may be difficult to enforce. To be effective, tech companies will have to adapt quickly and work with governments and civil society around the world in the lead-up to elections in which more than half of the world’s population is expected to vote.

Common Themes Among Tech Commitments on AI in Elections

1. Engaging with civil society and non-profit organizations

Signatories of the February 2024 Munich Security Conference tech accord committed to partnering with academia, civil society, and non-profit organizations in addressing the “deceptive use” of AI in elections.

Companies separately announced collaborations with independent groups. Google said it would partner with Democracy Works, a non-profit emphasizing voter accessibility, to feature accurate information from state and local election offices at the top of search results.

TikTok has also published plans to launch a dedicated U.S. Elections Center, in partnership with the same non-profit, to provide its American users with authoritative voting information and election results.

Similarly, Microsoft announced a collaboration with Reporters Without Borders and other news organizations. This initiative aims to have its search engine, Bing, direct voters to reputable sources on elections and voting information.

2. Increasing transparency about AI usage

The Munich accord’s signatories also committed to being transparent in how they plan to confront deceptive AI election content, such as by publicly announcing their counter-disinformation policies.

TikTok, for example, said it would introduce dedicated “covert influence operations reports,” published through its Transparency Center, to increase accountability and to share findings with the tech industry in an effort to detect deceptive AI practices.

Along similar lines, Google emphasized that its Threat Analysis Group would continue monitoring and tackling “coordinated influence operations” during the election cycle, publishing its findings to keep the public and private sectors informed.

3. Identifying AI content through labels and metadata

The Munich accord also recognized provenance, or the record describing data’s origins, as critical in mitigating risks from deceptive uses of AI in elections. Signatories committed to identifying realistic, AI-generated content and developing “robust provenance methods,” including watermarking and signed metadata.

Tech companies have published their own policies regarding how they will attach provenance signals to content on their platforms. OpenAI, the developer of ChatGPT, announced plans to encode content provenance with cryptography in DALL-E, its image generation model, to “empower voters to assess an image with trust and confidence” in how it was produced.
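Signed metadata of this kind binds a cryptographic signature to both the content and a record of how it was produced, so that any later alteration breaks verification. The Python sketch below illustrates the general approach using the open-source `cryptography` library; the manifest fields and key handling are simplifying assumptions for illustration, not OpenAI’s or the C2PA’s actual scheme.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the signing key would belong to the image generator and be
# certified by a trusted authority; here we simply generate one locally.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_provenance(image_bytes: bytes, manifest: dict) -> bytes:
    """Sign a digest of the image together with its provenance manifest."""
    payload = hashlib.sha256(image_bytes).digest() + json.dumps(
        manifest, sort_keys=True).encode()
    return private_key.sign(payload)

def verify_provenance(image_bytes: bytes, manifest: dict, signature: bytes) -> bool:
    """Return True only if neither the image nor the manifest was altered."""
    payload = hashlib.sha256(image_bytes).digest() + json.dumps(
        manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

image = b"...raw image bytes..."
manifest = {"generator": "example-image-model", "created": "2024-01-15"}
sig = sign_provenance(image, manifest)
print(verify_provenance(image, manifest, sig))         # True: intact
print(verify_provenance(image + b"x", manifest, sig))  # False: image altered
```

The design point that matters for voters is that the signature covers the content and the manifest jointly, so a bad actor cannot keep a legitimate provenance record while swapping in different pixels.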

Google, Microsoft, Meta, and TikTok all said they will implement labeling, whether visible or invisible (digital watermarking that encodes details in metadata to pinpoint content’s origin), to help voters accurately identify whether an image or video is AI-generated.
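Invisible watermarking takes the complementary route of embedding the signal in the media itself. The Python sketch below uses a deliberately naive least-significant-bit scheme to show the concept only; it is not how SynthID or any production watermark works, since those rely on learned encodings designed to survive compression and editing.

```python
def embed_watermark(pixels: bytearray, tag: str) -> bytearray:
    """Hide a short tag in the least significant bit of each pixel byte."""
    bits = [int(b) for byte in tag.encode() for b in format(byte, "08b")]
    marked = bytearray(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # overwrite only the lowest bit
    return marked

def extract_watermark(pixels: bytearray, length: int) -> str:
    """Read `length` characters back out of the least significant bits."""
    bits = [str(pixels[i] & 1) for i in range(length * 8)]
    chars = [int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode()

pixels = bytearray(range(256)) * 16   # stand-in for raw image data
marked = embed_watermark(pixels, "AI-GEN")
print(extract_watermark(marked, 6))   # -> "AI-GEN"
```

A mark this simple is destroyed by ordinary re-encoding, which is one reason companies pair watermarks with metadata-based labels rather than relying on a single signal.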

Google and Meta will also require election advertisers to disclose when their ads contain photorealistic content digitally created or altered by AI, with penalties for policy violations. 

4. Supporting efforts to develop the public’s media literacy

The Munich accord identified public awareness as one of its key goals in addressing AI disinformation ahead of elections. Tech companies agreed to educate the public about the risks of AI, as well as support programs that “build whole-of-society resilience” against deceptive uses of AI in elections.

Independently, TikTok has named media literacy as a key part of its “counter-misinformation strategy” during elections. The company said it will partner with experts and organizations to teach its users how to identify AI-manipulated content and safely navigate social media posts about elections.

What’s Missing

The Munich accord and the policies announced by individual tech companies are often vague, however, making it difficult to hold companies accountable. The accord, for example, committed companies to taking “reasonable precautions” to prevent the generation of deceptive election content, but it does not define “reasonable,” giving companies considerable discretion over which activities to permit or ban.

Definitional issues are a common problem in this space. Problematic content that would trigger action is variously defined as “convincing,” “deliberate,” “intended for public consumption,” and “realistic,” and the necessary response is described as “swift,” “proportionate,” and “appropriate.” But there is little agreement on what these terms mean in practice. To regulate the use of AI more effectively, companies should work with each other, governments, and civil society to define these terms.

There are also potential gaps in the specific AI-generated content companies seek to combat. Some policies list content that impersonates public figures, such as candidates or elected officials, or that gives misleading information about where, when, or how to vote. (Accurate information on voting logistics is perhaps already difficult to come by through AI chatbots.) The policies usually do not mention using AI to create misinformation about a candidate or elected official, such as false claims about their record or conduct. Content of this kind could still influence an election even if it does not directly impersonate a real person.

Tech company policies also do not specify how their AI content rules will be enforced, whether through machine learning tools that detect AI-generated content, other automated systems, or staff who review content policy violations. The last option may be less likely given the layoffs that recently hit social media trust and safety teams, which may call into question companies’ ability to fight information manipulation on their platforms. Content moderation, especially around elections, will also have to contend with the wide variety of languages and cultural contexts across the many elections taking place in 2024.
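To make the gap concrete: any enforcement pipeline has to choose among exactly these options and set thresholds someone can audit. The Python sketch below is purely illustrative; the detector, thresholds, and actions are hypothetical and not drawn from any company’s announced policy.

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # "auto_label", "human_review", or "no_action"
    score: float

def detect_ai_probability(content: bytes) -> float:
    """Hypothetical classifier estimating how likely content is AI-generated.
    In practice this could be a trained model, a watermark check, or a
    lookup of signed provenance metadata."""
    return 0.0  # stub for illustration

def moderate(content: bytes, label_at: float = 0.9,
             review_at: float = 0.5) -> ModerationDecision:
    """Route content by detector confidence: high-confidence detections are
    labeled automatically; borderline cases go to a human trust-and-safety
    queue, whose capacity depends on staffing."""
    score = detect_ai_probability(content)
    if score >= label_at:
        return ModerationDecision("auto_label", score)
    if score >= review_at:
        return ModerationDecision("human_review", score)
    return ModerationDecision("no_action", score)
```

Publishing the thresholds and the share of cases routed to human review would be one way for companies to make such enforcement auditable.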

Tracking Announced AI Commitments from Tech Companies

A Tech Accord to Combat Deceptive Use of AI in 2024 Elections 
Munich Security Conference
February 16, 2024

  • Signed by Amazon, Google, IBM, Microsoft, Meta, OpenAI, Stability AI, TikTok, Trend Micro, Truepic, and X
  • Defines Deceptive AI Election Content as “convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”
  • Aims to “set expectations for how signatories will manage the risks arising from Deceptive AI Election Content”
  • Aims to advance seven goals concerning AI:
    • Prevention: Researching, investing in, and deploying “reasonable precautions” to curtail risks of “deliberately Deceptive AI Election Content” being generated
    • Provenance: “Attaching provenance signals to identify the origin of content where appropriate”
    • Detection: “Attempting to detect Deceptive AI Election Content or authenticated content”
    • Responsive Protection: “Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content”
    • Evaluation: “Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content”
    • Public Awareness: “Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content”
    • Resilience: “Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content”
  • Signatories committed to the following steps through 2024:
    • “Developing and implementing technology to mitigate risks related to Deceptive AI Election content”
    • “Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content”
    • “Seeking to detect the distribution of Deceptive AI election content hosted on our online distribution platforms where such content is intended for public distribution and could be mistaken as real”
    • “Seeking to appropriately address Deceptive AI Election Content we detect that is hosted on our online distribution platforms and intended for public distribution, in a manner consistent with principles of free expression and safety”
    • “Fostering cross-industry resilience to Deceptive AI Election Content by sharing best practices and exploring pathways to share best-in-class tools and/or technical signals about Deceptive AI Election Content”
    • “Providing transparency to the public regarding how we address Deceptive AI Election Content — for instance, by publishing the policies that explain how we will address such content”
    • “Continuing to engage with a diverse set of global civil society organizations, academics, and other relevant subject matter experts through established channels or events, in order to inform the companies’ understanding of the global risk landscape as part of the independent development of their technologies, tools, and initiatives described in this accord”
    • “Supporting efforts to foster public awareness and all-of-society resilience regarding Deceptive AI Election Content—for instance by means of education campaigns regarding the risks created for the public and ways citizens can learn about these risks to better protect themselves from being manipulated or deceived by this content”

Planning for the 2024 Elections
Snapchat
January 23, 2024

  • Snapchat published its plan to monitor developments ahead of the upcoming elections, reconvening its election integrity team, which includes misinformation, political advertising, and cybersecurity experts:
    • Designed to Prevent the Spread of Misinformation: “Our founders designed Snapchat to be very different from other social media platforms. Snapchat doesn’t open to a feed of endless, unvetted content, and it doesn’t allow people to live stream. We don’t program our algorithms to favor misinformation, and we don’t recommend Groups. Instead, we moderate content before it can be amplified to a large audience, and we feature news from trusted media partners around the world”
      • “Our Community Guidelines, which apply equally to all Snapchat accounts, have always prohibited the spread of misinformation and purposefully misleading content, like deepfakes — including content that undermines the integrity of elections”
      • “We prohibit spreading false information that causes harm or is malicious, such as denying the existence of tragic events, unsubstantiated medical claims, undermining the integrity of civic processes, or manipulating content for false or misleading purposes (whether through generative AI or through deceptive editing)”
    • Additional Safeguards for Political Advertising: “We have also taken a unique approach to political ads, protecting against election interference and misinformation. We use human review on every political ad and work with an independent, non-partisan fact-checking organization to make sure they meet our standards for transparency and accuracy. Our vetting process includes a thorough check for any misleading use of AI to create deceptive images or content”

Protecting election integrity in 2024
TikTok
January 18, 2024

  • TikTok outlined the steps it will take during the 2024 elections to ensure that it “continues to be a creative, safe, and civil place”:
    • Connecting people to trusted information: “In the coming days, we will launch our US Elections Center, in partnership with nonprofit Democracy Works. The Center will provide our 150M+ US community members with reliable voting information for all 50 states and Washington, DC. We will direct people to the Elections Center through prompts on relevant election content and searches. We’ll continue to add information throughout the year, including election results.”
      • “Throughout 2024, we’ll continue to partner with experts and fact-checking organizations around the world to deliver engaging media literacy campaigns about misinformation, identifying AI-generated content, and more”
    • Moderating content and accounts during elections: “We invest in media literacy as a counter-misinformation strategy as well as technology and people to fight misinformation at scale. This includes specialized misinformation moderators with enhanced tools and training, and teams on the ground who partner with experts to prioritize local context and nuance.”
      • “In the coming months, we’ll introduce dedicated covert influence operations reports to further increase transparency, accountability, and sharing with the industry.”
      • “We don’t allow manipulated content that could be misleading, including AIGC of public figures if it depicts them endorsing a political view. We also require creators to label any realistic AIGC and launched a first-of-its-kind tool to help people do this. As the technology evolves in 2024, we’ll continue to improve our policies and detection while partnering with experts on media literacy content that helps our community navigate AI responsibly.”

How OpenAI is approaching 2024 worldwide elections
OpenAI
January 15, 2024

  • OpenAI announced three “key initiatives” it is investing in ahead of the 2024 elections:
    • Preventing Abuse: “We work to anticipate and prevent relevant abuse — such as misleading “deepfakes”, scaled influence operations, or chatbots impersonating candidates”
      • “We don’t allow people to build applications for political campaigning and lobbying”
      • “We don’t allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government)”
      • “We don’t allow applications that deter people from participation in democratic processes—for example, misrepresenting voting processes and qualifications (e.g., when, where, or who is eligible to vote) or that discourage voting (e.g., claiming a vote is meaningless)”
    • Transparency around AI-generated content: “Better transparency around image provenance — including the ability to detect which tools were used to produce an image — can empower voters to assess an image with trust and confidence in how it was made”
      • “Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials — an approach that encodes details about the content’s provenance using cryptography — for images generated by DALL·E 3”
      • “We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E”
    • Improving access to authoritative voting information: “ChatGPT will direct users to CanIVote.org, the authoritative website on US voting information, when asked certain procedural election related questions — for example, where to vote. Lessons from this work will inform our approach in other countries and regions”

Meta/Facebook and Labeling AI In Political or Social Issue Ads
Meta
January 3, 2024

  • Disclosure policy for ads about social issues, elections, or politics: advertisers will be required to disclose when their ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered by AI or other methods to do any of the following:
    • “Depict a real person as saying or doing something they did not say or do; or
    • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
    • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”
  • Meta may penalize the advertiser if they repeatedly fail to disclose.
  • Meta’s independent fact-checking partners can rate content as “Altered” if it was created or edited in misleading ways (including through the use of AI).

How we’re approaching the 2024 U.S. elections
Google
December 19, 2023

  • Google announced steps it would take to combat “fresh challenges” impacting the misinformation landscape, with an increased focus on AI:
    • “Beginning early next year, in preparation for the 2024 elections and out of an abundance of caution on such an important topic, we’ll restrict the types of election-related queries for which Bard and SGE will return responses.”
    • Helping people identify AI-generated content:
      • Ads disclosures: “We were the first tech company to require election advertisers to prominently disclose when their ads include realistic synthetic content that’s been digitally altered or generated, including by AI tools”
      • Content labels: “Over the coming months, YouTube will require creators to disclose when they’ve created realistic altered or synthetic content, and will display a label that indicates for people when the content they’re watching is synthetic”
      • “‘About this image in Search’ helps people assess the credibility and context of images found online”
      • Digital watermarking: “SynthID, a tool in beta from Google DeepMind, directly embeds a digital watermark into AI-generated images and audio”
    • Surfacing reliable information to voters:
      • Search: “We’ll continue to work with partners like Democracy Works to surface authoritative information from state and local election offices at the top of Search results when people search for topics like how and where to vote. And as with previous U.S. elections, we’re working with The Associated Press to present authoritative election results on Google”
      • News: “In 2022, we launched additional News features to help readers discover authoritative local and regional news from different states about elections around the country”
      • YouTube: “YouTube will work to ensure the right measures are in place to connect people to high-quality election news and information”
      • Maps: “We’ll clearly highlight polling locations and provide easy to use directions. To prevent bad actors from spamming election-related places on Maps, we’ll apply enhanced protections for contributed content on places like government office buildings”
      • Ads: “We’ve long required advertisers who wish to run election ads (federal and state) to go through an identity verification process and have an in-ad disclosure that clearly shows who paid for the ad. These ads also appear in our Political Advertising Transparency Report.”
    • Partnering with organizations to provide campaigns with security:
      • “Our Threat Analysis Group (TAG) and the team at Mandiant Intelligence help identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. For example, on any given day, TAG is tracking more than 270 targeted or government-backed attacker groups from more than 50 countries. We publish our respective findings consistently to keep the public and private sector vigilant and well informed.”

Microsoft announces new steps to help protect elections
Microsoft
November 7, 2023

  • Microsoft announced five new steps to protect electoral processes in the U.S. and other countries where critical elections will take place in 2024.
    • These steps are grounded in the following principles:
      • “Voters have a right to transparent and authoritative information regarding elections”
      • “Candidates should be able to assert when content originates from their campaign and have recourse when their likeness or content is distorted by AI for the purpose of deceiving the public during the course of an election.”
      • “Political campaigns should protect themselves from cyber threats and be able to navigate AI with access to affordable and easily deployed tools, trainings, and support.”
      • “Election authorities should be able to ensure a secure and resilient election process and have access to tools and services that enable this process”
    • The five steps of Microsoft’s Election Protection Commitments are:
      • “Launching Content Credentials as a Service. This new tool enables users to digitally sign and authenticate media using the Coalition for Content Provenance and Authenticity’s (C2PA) digital watermarking credentials, a set of metadata that encode details about the content’s provenance using cryptography. Users can attach Content Credentials to their images or videos to show how, when, and by whom the content was created or edited, including if it was generated by AI”
      • “Help[ing] political campaigns navigate cybersecurity challenges and the new world of AI by deploying a newly formed “Campaign Success Team” within Microsoft Philanthropies’ Tech for Social Impact organization. This team will advise and support campaigns as they navigate the world of AI, combat the spread of cyber influence campaigns, and protect the authenticity of their own content and images”
      • “Creat[ing] and provid[ing] access to a new ‘Election Communications Hub’ to support democratic governments around the world as they build secure and resilient election processes. This hub will provide election authorities with access to Microsoft security and support teams in the days and weeks leading up to their election, allowing them to reach out and get swift support if they run into any major security challenges”
      • Using the company’s voice to “support legislative and legal changes that will add to the protection of campaigns and electoral processes from deepfakes and other harmful uses of new technologies. We’re starting today by endorsing in the United States the bi-partisan bill “Protect Elections from Deceptive AI Act” introduced by Senators Klobuchar, Collins, Hawley, and Coons”
      • “Empower[ing] voters with authoritative election information on Bing. We will do this in partnership with organizations that provide information on authoritative sources, ensuring that queries about election administration will surface reputable sites. Bing will join forces with the National Association of State Election Directors (NASED), leading Spanish news agency EFE, and Reporters Without Borders to proactively promote trusted sources of news around the world.”

How Meta is Planning for Elections in 2024
Meta
November 28, 2023

  • Meta published a statement concerning global election integrity ahead of 2024:
    • “Starting in the new year, advertisers will also have to disclose when they use AI or other digital techniques to create or alter a political or social issue ad in certain cases. This applies if the ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do. It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event”
    • “We continually review and update our election-related policies, and take action if content violates our Community Standards, including our policies on election and voter interference, hate speech, coordinating harm and publicizing crime, and bullying and harassment. We remove this content whether it was created by a person or AI”
    • “We’re also investing in proactive threat detection and have expanded our policies to help address harassment against election officials and poll workers”
IMAGE: (L to R) A poll worker checks in a voter on March 19, 2024 at the Noor Islamic Cultural Center in Columbus, Ohio (Photo by Andrew Spear/Getty Images); visual representation of artificial intelligence (via Getty Images); the logo of US online social media and social networking site ‘X’ (formerly known as Twitter) is displayed centrally on a smartphone screen alongside that of Threads (L) and Instagram (R) (Photo by Matt Cardy/Getty Images).