Editor’s Note: This article represents the next installment of our Symposium on AI and Human Rights, co-hosted with the Promise Institute for Human Rights at UCLA School of Law.

In May 2023, an AI-generated image depicting an explosion at the Pentagon went viral. Images of the building encased in dark smoke swept across social media, gaining such traction that the S&P 500 briefly dipped as the public processed the possibility of an attack on the Department of Defense.

Why was the deepfake of the Pentagon explosion so convincing? Rapid advances in generative AI certainly played a part, increasing the quality of the images. But the deepfake also gained traction because some of the images circulating on social media were shared by a verified account for “Bloomberg Feed.” The images, however, were not posted by Bloomberg News at all, but rather by a verified account purchased in imitation of its name.

As we collectively grapple with how to manage the onslaught of misinformation associated with advances in generative AI, separating truth from fiction is increasingly difficult. An array of innovative solutions is being explored to bolster online users’ understanding of content authenticity and provenance, such as digital watermarking initiatives and visible labels on AI-generated content based on metadata tags. But these initiatives come with important privacy trade-offs. As we enter a more complex and dangerous information environment, one of the best signals we’ll have about the trustworthiness of online content is not just what is posted, but the credibility and authenticity of the person posting it.

Authenticity For Sale

One of the main markers of authenticity on social media is the coveted “verified account” status. Historically, this has been an indicator, such as a blue check mark or similar symbol, granted to accounts once a platform confirms that they are who they say they are. But on many social media platforms, the policies underlying account verification have been degraded at precisely the moment their integrity is paramount: the dawn of the era of generative AI.

As has been widely covered, X (formerly known as Twitter) dramatically modified its verification policies under the leadership of Elon Musk (I previously worked as human rights counsel for Twitter before the takeover). Musk made headlines when he decided that, rather than being granted to notable public figures, verification would be open to anyone willing to pay a monthly fee. X recently extended verified status to certain other “influential” users, but the symbol has been so degraded that many no longer even want it.

On Facebook and Instagram, verification is – somewhat confusingly – both available for sale and provided for free to certain notable accounts, with identical status symbols afforded under both regimes. While Meta’s verification scheme is tied to proof of a user’s authentic identity, it is nevertheless monetized at a cost that may render it unaffordable for some who deserve it. 

Incredibly, LinkedIn verification is tied to your consent to provide its partners, such as CLEAR, with your biometric information. In LinkedIn’s case, verification is indeed up for sale, but the price is paid not in dollars but in access to your biometric data.

To be clear: verification is not just a status symbol. On some social media platforms, verification policies affect not only how content is perceived, but also how it is amplified across the platform. X, for example, has been open about the fact that content from users who pay for verification will be amplified over that of other users, with priority in replies, mentions, and searches. At a time when finding the signal in the AI-generated noise is about to be harder than ever, digital authenticity and amplification should not be for sale.

Verification as a Public Good

The core problem is that verification doesn’t just affect the person or account posting on social media: it impacts everyone who depends upon the information they share. Consider posts by journalists in Kyiv sharing air raid alerts in a war zone, the Supreme Court sharing the outcome of a judicial decision, or aid workers circulating information in the wake of an earthquake or other natural disaster. In these situations, access to credible, authoritative information is at least as important as the rights and interests of those posting the information in the first place. 

The social media accounts for each of these individuals and organizations need to be verified for the sake of users, not the account-holders, and the matter shouldn’t be left up to whether the account-holder decides to (or can afford to) pay for the privilege. Think of verification as a “public good” in the economic sense: everyone benefits when key voices carry a marker of authenticity, no one can be excluded from that benefit, and it is hard to measure just how much we all gain from knowing who’s who.

The Consequences of Pay-to-Verify Systems

What are the risks of the current situation? As with the Pentagon deepfake, misguided policies on account verification expose social media users to an increased risk of being misled by verified accounts impersonating respected entities or individuals. Plenty of people have already flagged the risk of impersonation under Musk’s verification regime, but mostly in the context of egregious (and, yes, sometimes hilarious) brand impersonation. The impersonation of human rights defenders and organizations, by contrast, has potentially deadly consequences both for those being impersonated and for those relying on what they say.

Imagine if a Russian-aligned actor impersonates the International Committee of the Red Cross in Ukraine, then shares false information about humanitarian corridors (routes through which refugees can evacuate safely from a war zone). Imagine if a prominent human rights activist in Saudi Arabia — a country where homosexuality is punishable by death — is impersonated by an account purporting to come out as gay. Imagine someone impersonating the World Food Programme in Mali “admitting” to colluding with armed groups – imperiling the entire organization’s staff in one of the world’s deadliest countries for aid workers. 

Or don’t imagine at all, but consider that — amidst mass violence in Sudan — a verified account impersonating a paramilitary group known as the Rapid Support Forces falsely claimed that its leader, Mohamed Hamdan Dagalo (also known as Hemedti), died in combat. The impact of mis- and disinformation is greater when it comes from a verified account, and the harm can be irreparable.

Amplifying Dangerous Voices

How else could this go sideways? Well, consider who would gladly pay for verification. Dangerous individuals and organizations that have previously been denied verification are arguably those that stand to benefit most from a pay-to-play system.

The Taliban, for example, leapt at the opportunity to obtain verification. Incredibly, Afghanistan was one of the places where Musk’s verification changes were rolled out. This suggests that, at least temporarily, X was amplifying the speech of verified Taliban accounts over those of local human rights activists who couldn’t afford to pay for the privilege (roughly 91% of Afghan household income is spent on food).

Dynamics will be similar in authoritarian regimes and unstable democracies around the world. The voices of brave activists risk being drowned out by regime-aligned accounts with a vested interest in spreading disinformation and the resources to amplify their reach through paid verification. In stable democracies, too, particularly during consequential moments around elections and events that precipitate political turmoil, an ill-informed verification process risks fundamentally distorting the information ecosystem in ways that are at once obvious and difficult to predict.

***

The coming wave of AI-generated misinformation arrives amidst a “human rights recession” in the tech industry: a retraction of commitments and investments in teams and functions focused on protecting user trust and safety, as well as those responsible for thinking through the unintended impacts of platform changes on vulnerable and marginalized communities. Civil society, government, and journalists play critical roles, but they cannot address this problem alone. We also need voices inside technology companies – people who understand the intricacies of often-opaque technology products and policies – to advocate for the rights of those who are disproportionately impacted by decisions taken about our digital civic spaces. We especially need internal advocates with expertise in international humanitarian and human rights law frameworks and an understanding of conflict dynamics to consider how changes to platform products and policies might expose the most at-risk communities on earth to harm.

Preventing the spread of AI-generated misinformation is inextricably linked to verification, and to the ability to identify credible sources of information. While important in any context, this is particularly essential for those in the Global Majority who depend upon social media for critical information – in war zones, during natural disasters and elections, and on a range of issues fundamental to their safety and human rights.

As Musk has said repeatedly, we get what we pay for. But, in an era of AI-generated misinformation, let’s be very clear about the price we stand to pay. 

IMAGE: A photo illustration depicts numerous deepfaked videos of TV personalities that had been used to sell spurious medical products online. (Photo by STEFANI REYNOLDS/AFP via Getty Images)