Midway through a year in which more than 2 billion voters in at least 64 countries are going to the polls, pioneers of artificial intelligence are breathing a sigh of relief and arguing that the worst fears over the potentially corrosive influence of AI on democracies seem to have been overblown. While platforms have removed scores of AI-distorted videos of politicians lying or making fools of themselves, the impact on voters and tallies has seemed minimal.

But in the midst of the first-ever round of AI-influenced elections globally, it’s important to guard against a false sense of security. The last two decades have witnessed drastic and irreversible political changes wrought by the internet and social media, most of which were unforeseen and took years or decades to fully manifest. As the world assesses AI’s effects on democracy, we need to settle in for the long haul, looking well beyond the most obvious and tangible near-term threats to elections.

Going into this avalanche of elections, tech platforms, politicians, and regulators had dire forecasts about how AI would enable foreign interference and supply garden-variety fraudsters with deepfakes, the highly realistic videos that are doctored or depict events that never took place. In February, Microsoft, Meta, OpenAI, and others pledged “reasonable precautions” to label manipulated content and share information about it.

Just two months later, Meta President Nick Clegg was already drawing conclusions. He remarked after balloting in Taiwan, Pakistan, and Bangladesh that “it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt the elections.” In June, Microsoft President Brad Smith similarly declared that while it was too soon to “declare victory,” Russian interference was more focused on the Olympics than the elections. Last month, tech journalist Louis Anslow opined about AI that “the death of democracy and truth is starting to seem greatly exaggerated,” calling it an “awkward anti-climax.”

Some of these tidy conclusions ring familiar. In 2004, the year Mark Zuckerberg launched Facebook, Pew Research concluded in a study that fears that the internet “might hurt healthy democratic deliberation are not borne out by online behavior.” Users were “not insulating themselves in information echo chambers” and the internet was judged to offer “heartening” potential for “stemming” polarization. In 2012, there were cheery reports that Facebook’s “I voted” button had driven a meaningful uptick in voter participation.

Experience with Rosy Assessments

Of course, those early measures of the impact of the internet and social media on democracy proved laughably rosy. We now know that digital transformation began reshaping democracy in ways that were hard to discern until they became all but irreversible. By vacuuming up print advertising, the internet wrought what has been described as a global “extinction event” for local news. Its decline, in turn, prompted a crisis of civic faith in communities around the world, where citizens are often in the dark about the workings of local government and lack access to media outlets that can hold public officials to account or get problems solved.

Not coincidentally, politics in the United States has grown steadily more fragmented, while political violence is spiking there and in France, Nigeria, India, and elsewhere, and distrust in government institutions and the media is soaring. The world also is undergoing what analysts at the Carnegie Endowment for International Peace have dubbed a “democratic recession,” with fewer countries worldwide classified as liberal democracies and more meeting the criteria of authoritarianism.

While social media is hardly the sole source of pressure on democracies, it has accelerated so-called “truth decay,” the diminished emphasis on and faith in fact-based information. These ramifications of social media for democracy took tech CEOs, the media, and political analysts by surprise. In 2017, Zuckerberg essentially confessed to having left Facebook largely defenseless to Russian meddling. The 2018 Cambridge Analytica scandal sent global shockwaves by exposing how malign data-sweeping could enable potent and highly targeted political manipulation. Just this week, Zuckerberg voiced regret for having suppressed stories about Hunter Biden’s laptop on the eve of the 2020 election, implicitly recognizing that the decision to spike what turned out to be truthful reports fed perceptions of the platforms’ bias against Republicans. (Disclosure: I’m a member of Meta’s independent Oversight Board that serves as a check on content moderation on its platforms.)

The public and experts alike are still struggling to understand the political universe that social media has wrought, with online influencers getting coveted speaking slots at political party conventions and viral memes defining political campaigns. As the globe reckons with how AI will shape democracy, it’s crucial to avoid premature self-congratulation and complacency. That deepfakes haven’t yet deep-sixed an election should not be grounds for tech executives to rest easy. Instead, they should double down on imagining, tracking, and analyzing AI’s ramifications for democracy.

Risks of Distrust

One obvious point is that AI, like social media, risks accelerating the erosion of trust in institutions, authorities, and the media. The alienation generated by mass texts and robocalls will compound as more and more of what passes for communication is rendered entirely by machine. The distrust may spin into a vicious cycle whereby automated “grassroots” messages flood politicians’ offices, obscuring where actual constituents stand and further alienating political representatives from those they serve.

AI-based content systems will flex the power of algorithms to predict what we want to see, hear, and believe, satisfying just those appetites. Such tunneling of information can feed suspicion of those with different backgrounds and identities, deplete empathy, and inflate grievance. Researchers say AI also risks reinforcing structural biases — if a bot is trained to target the most engaged voters with election information, for example, it may leave immigrant populations or linguistic minority groups permanently out of the loop.

The proliferation of AI-based content is likely to further erode the weight of credible, fact-based journalism, leading to more newsroom cutbacks. Why would media companies invest in creating a 3,000-word, deeply reported news article if they can reach audiences at a tiny fraction of the cost using AI-generated derivatives of information put out by others?

Yet none of this means that AI spells doom for democracy. One of the biggest propellants of the global democratic recession has been a crisis of delivery: namely the failure of democratically elected governments to deliver economic growth, reduced poverty, better education, and other marks of a thriving society. These shortcomings, whether or not amplified by myths spread on social media, drive frustration with democracy and the embrace of purported autocratic saviors. As AI revolutionizes agriculture, manufacturing, supply chains, education, health care, emergency response, and more, governments can leverage these capabilities to improve delivery and reinforce the benefits of democratic governance.

Key Steps

No one knows for sure what AI holds in store for democracy. But we know enough to take key steps now to ensure it does not ride roughshod over norms and values.

First, regulators should force AI companies to provide transparency, allowing researchers to dig into how evolving capabilities are being used and their effects. Second, governments and companies need to install speed bumps so AI doesn’t proliferate so fast and far that no regulation or rules can catch up. Regulators in Europe have taken the lead in classifying categories of AI and slowing implementation of the most dangerous. Such preventive efforts should be applied to address not just environmental, health, and security concerns but also repercussions for democracy.

Whereas the United States is for now relying on an executive order that depends mostly on voluntary compliance by companies, the European Union — through an AI Act that entered into force earlier this month — recognizes that AI behemoths chasing mammoth profits will not be slowed by soft commitments. It has issued binding regulations backed up by intrusive oversight and enforcement measures, providing a potential blueprint for other jurisdictions. By setting standards for such a large market, the EU’s approach will reverberate globally, as companies configure their operations to comply and find it easier to implement consistent policies even in places where regulation is far behind.

A similar effect may gradually take hold across the United States, as individual states, including Colorado and California, take their own initiative to regulate AI. As regulatory oversight bodies and enforcement agencies get up to speed in implementing both the U.S. measures and the EU AI Act, they should be vigilant for evolving threats to democracy, including those that may be less obvious or direct.

A third key step involves revenue models. It took years for the public and policymakers to grasp how misleading, incendiary, and vitriolic content sent online engagement and revenues skyrocketing. AI business models are only now being invented and refined; if they depend upon eyeballs or ad dollars, history may repeat itself. Revenue structures that favor democracy-eroding content need to be identified and disabled before they become entrenched.

When social media first surfaced, users and even those who weren’t on the platforms were swept up in a roiling tide that washed away crucial underpinnings of democracy, including local news and trusted institutions. Before diving in deeper on AI, it’s important for everyone to think through what it will take to keep democracy above water.

IMAGE: Leadership Conference on Civil and Human Rights President and CEO Maya Wiley (L) and Meta CEO Mark Zuckerberg attend the “AI Insight Forum” on Capitol Hill on September 13, 2023 in Washington, DC. Lawmakers are seeking input from business leaders in the artificial intelligence sector and some of their most ardent opponents, in preparation for writing legislation governing the rapidly evolving technology. (Photo by Chip Somodevilla/Getty Images)