Editor’s Note: This article is the next installment of our Symposium on AI and Human Rights, co-hosted with the Promise Institute for Human Rights at UCLA School of Law.

From “AI will save us” to the AI apocalypse, recent narratives surrounding AI safety and regulation have prompted the question of who can, and should, have a seat at the table. The interest in generative AI technologies and large language models (LLMs) has shifted the debate from real-world harm to so-called “existential risk,” mirroring the tech industry’s narrative about AI’s “existential threat” to humanity. In May, a Science paper underscored the extreme risks of AI and the shortcomings of emerging governance initiatives. A 2023 open letter called on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Ironically, many signatories of the letter are the architects of these AI technologies and continue to develop and invest in them.

But this debate on AI risk neglects the perspectives and experiences of those who will be most impacted by AI systems, especially marginalized groups, including racialized persons, women, non-binary persons, LGBTQIA+ persons, migrants and refugees, disabled persons, children, the elderly, and those of lower socioeconomic status, among others. When we talk about AI risk, we must also consider the broader societal, environmental, and human rights harms of AI, including the centralization of power in the hands of a few corporations.

In 1990, Chellis Glendenning published her “Notes toward a Neo-Luddite Manifesto,” in which she applauded the Luddites for taking laissez-faire capitalism to task for enabling the “increasing amalgamation of power, resources, and wealth, rationalized by its emphasis on ‘progress’.” We argue that a tech-agnostic and neo-Luddite approach is paramount for challenging the power accumulated by the architects of AI.

Reclaiming Narratives Around Neglected Perspectives

As human rights advocates, we saw AI systems as a potentially harmful emerging technology beginning in the mid-2010s with, for example, the expansion of algorithm-driven risk assessment tools in criminal justice and of facial recognition. Many of us have focused our work on investigating, exposing, and preventing the harms of AI — including by working toward full prohibitions on some tools, such as AI-driven surveillance techniques like facial recognition, which disproportionately affect marginalized groups. We’ve also repeatedly pushed back against AI hype and “techno-solutionism” — the belief that technology can solve any social, political, and economic problem.

Fast forward to today: we look at narratives around AI safety and regulation with the goal of reclaiming them in ways that imagine a future in which social justice and human rights are prioritized over technical objectives. Together with civil society and affected communities, we see the need to craft futures that consider which spaces, now captured by overblown AI discourse, must be detached from dominant AI narratives altogether.

For example, can we have climate justice given the enormous amounts of energy that LLMs consume? Can public interest-driven technology companies survive capture by a handful of mega-corporations in the United States, Western Europe, and China that monopolize computing power, AI chips, and infrastructure?

There’s nothing fundamentally inevitable about the industrial relations of power underpinning AI today. This is also why scholars, activists, and practitioners, such as Joana Varon, Sasha Costanza-Chock, and Timnit Gebru, have called for a move towards a federated AI commons ecosystem, which is:

“characterized by community and public control of consensual data; decentralized, local, and federated development of small, task-specific AI models; worker cooperatives for appropriately compensated and dignified data labeling and content moderation work, and ongoing attention to minimizing the ecological footprint and the social-economic-environmental harms of AI systems.”

Political imagination is required to arrive at alternative humane futures; centering human rights is a part of safeguarding our ability to define such futures on our terms. 

AI’s Erosion of Human Rights

Until recently, AI governance efforts have centered on the harms that civil society and experts have documented over the past few years, with demands for increased transparency and accountability as well as calls for bans or moratoria on algorithmic biometric surveillance. Examples of such real-world harms range from AI tools in social welfare systems that have threatened people’s right to essential public services, to facial recognition technology deployed in public spaces, with resulting restrictions on civic space and violations of human rights. Today, Israel is reportedly using this technology and AI-enabled systems for target identification in Gaza, contributing to the accelerated pace of killing.

Recent legislative initiatives (e.g., the EU AI Act, the Council of Europe Convention on AI, the U.S. Executive Order on AI, and the G7 Hiroshima AI Process) either lack teeth or contain over-broad exemptions for the very uses that generate the most egregious harms. These carve-outs are driven in part by national security concerns and a sense of inevitability about widespread AI use, what some scholars have referred to as a cyclical bout of “automation fever.” The implication is that, rather than intervening in and interrupting the trajectories these systems pave for us, we must simply find ways of making our coexistence with them more palatable.

The cumulative buildup of harms, steadily and opaquely eroding our human rights, has been a long time in the making. Our daily acceptance of certain deployments and uses of AI, such as those listed above, has created a permissive operating environment and normalized the core logics underpinning how AI is designed. We should resist this trend for several reasons. First, surveillance is a necessary function of many AI models, meaning some level of privacy violation is built in. Second, the outputs of AI systems, including generative AI, reflect the very skewed, colonial, patriarchal, misogynist, and disinformation-laden hierarchies of knowledge that they are trained on in the first place. Finally, the outputs of AI systems are often treated as predictions, when they are at best problematic “guesstimates” based on often unreliable and inappropriate logics and datasets (“garbage in, garbage out,” as the saying goes in computer science).

Towards a Human Rights-Based Approach

A human rights-based approach to AI governance would provide the most widely accepted, applicable, and comprehensive set of tools to address these global harms. Algorithm-driven systems do not necessarily warrant new rights or a whole new approach to governance — particularly in cases where the only change is the unprecedented speed and scale at which data-driven technologies are deployed, amplifying existing human rights risks.

There’s no need to reinvent the wheel when regulating the design, development, and use of AI. Policymakers should instead apply existing international human rights standards and respect democratic processes in the context of AI. In March 2024, the UN General Assembly adopted the non-binding Resolution A/78/L.49, calling upon states to “prevent harm to individuals caused by artificial intelligence systems and to refrain from or cease the use of artificial intelligence applications that are impossible to operate in compliance with international human rights law.”

Substantive human rights, codified in the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, are especially relevant for assessing the positive and adverse impacts of AI systems. These include the rights to privacy, dignity, non-discrimination, freedom of expression and information, freedom of assembly and association (including the right to protest), and economic and social rights, among many others. Several of the nine core international human rights instruments address the specific needs and rights of marginalized groups, such as women, racialized persons, migrants and refugees, children, and disabled persons. They should be used as a blueprint for centering these groups in AI governance.

Moreover, we see procedural rights as the bedrock of effective AI governance. These are non-negotiable first principles. For instance, any restriction of human rights in the development and use of AI must be grounded in a legal basis, pursue a legitimate aim, and be necessary and proportionate. Consistent with the UN Guiding Principles on Business and Human Rights, AI developers and deployers must furthermore conduct human rights due diligence (including human rights impact assessments). This responsibility applies throughout the lifecycle of an AI system, from the design phase to post-deployment monitoring and potential discontinuation.

Mandatory transparency should no longer be up for discussion — it is fundamental for enabling effective access to remedy and accountability. What remains missing is an outline of the contours of such transparency, crafted with input from civil society and affected communities, to ensure it is meaningful in practice. An adequate AI governance framework must furthermore include liability and remedy provisions enforced through oversight bodies, judicial mechanisms, and dispute resolution processes. Finally, engaging external stakeholders — especially civil society and marginalized groups — throughout the AI lifecycle must be mandatory. Meaningful engagement requires capacity building, access to information, and adequate resourcing. 

***

“AI safety” can only be a truly valuable goal if it prioritizes the safety of all groups. Right now, the concept serves as a distraction from the fact that AI does not impact everyone equally and will have the largest effect on already marginalized groups.

We must collectively understand and interrogate how AI safety narratives can uphold, obfuscate, and reinforce violence under the guise of efficiency, convenience, and security. Taking a page out of Glendenning’s notes, we should also draw a red line around the realms in which AI development and deployment are simply unwarranted and unwelcome.

Our work must crucially center on imagining how things could be different. What if the enthusiasm and resources spent on AI were redirected to health and social programs? How might communities’ lives improve if the funds used for automated policing were invested in justice and reparations? What if the water consumed by data centers were returned to Indigenous communities? What if we had the right to refuse rampant datafication, and meaningful choice about which aspects of everyday life we engage in digitally?

Civil society’s work often forces us to react to immediate issues, leaving little room to envision and build alternative futures not driven by technology. We urgently need more space to dream and create these visions.

IMAGE: A side-by-side comparison of a human contrasted against a stylized representation of artificial intelligence. (Photo by Ecole Polytechnique via Flickr, CC BY-SA 2.0)