AI is advancing at an almost incomprehensible speed, driving decision-making with broad impacts on society and the potential to dramatically affect human rights. Indeed, AI has the potential to affect nearly every recognized human right, including the rights to freedom of expression, thought, assembly, association, and movement; the right to privacy and data protection; the rights to health, education, work, and an adequate standard of living; and the rights to non-discrimination and equality. AI may also give rise to the need to recognize new forms of rights, such as the right to a human decision.
Human rights abuses that can occur through the deployment of AI systems include mass surveillance and the perpetuation of bias in the criminal justice system, healthcare, education, the job market, access to housing, and access to banking, exacerbating discrimination against already marginalized groups. AI is also harming democracies by facilitating the spread of disinformation, the creation of deepfakes and synthetic media that sow chaos and confusion, and the removal of content documenting human rights abuses. At the same time, AI has the potential to benefit human rights, from facilitating advances in healthcare to tracking supply chain compliance.
Both governments and corporations have a duty to respect human rights. The international human rights regime is an ecosystem of established laws, frameworks, and institutions at the international, regional, and domestic levels within which individuals can seek respect for their human rights as well as remedies for human rights violations. Although government and industry leaders often affirm the centrality of human rights in the development and deployment of “responsible” AI systems, all too often this takes the form of general principles or statements that are either difficult to implement in practice or fail to consider the full range of potential use cases. As advances in AI accelerate, human rights need to be integrated into every level of AI governance regimes.
In February 2024, the Promise Institute for Human Rights at UCLA School of Law convened a symposium on Human Rights and Artificial Intelligence, bringing together leading experts to examine some of the critical questions arising from the rapid expansion of AI and the lagging governance models. The purpose of this symposium – a collaboration between the Promise Institute and Just Security – is to share some of the insights captured by our speakers with a broader audience, and to elevate some of the most pressing questions about the relationship between AI and human rights.
New articles in the series will run each week, tying together the following themes:
Generative AI and Human Rights
- Raquel Vazquez Llorente and Yvonne McDermott Rees, “Truth, Trust, and AI: Justice and Accountability for International Crimes in the Era of Digital Deception.” Drawing on their deep expertise in synthetic media – artificially produced or manipulated text, images, audio, or video – and its impact, Vazquez Llorente and McDermott Rees highlight how the use of deepfakes has the potential to undermine trust in the online information ecosystem. They point to concerns about the “liar’s dividend,” in which the mere existence of deepfake content both makes it easier to question the veracity of all content and acts as a mechanism to further entrench beliefs and narratives. They explore what impact AI-generated content may have on justice and accountability for human rights violations and suggest ways in which we can prepare for a hybrid AI-human media ecosystem.
- Shannon Raj Singh, “What Happens When We Get What We Pay for: Generative AI and the Sale of Digital Authenticity.” Singh writes about how, alongside the rise of deepfakes, the degeneration of verified accounts – a visual indicator historically used to show that an account belongs to a trusted source such as a legitimate media outlet or public figure – is producing a broader crisis of legitimacy in our information environment. She stresses that social media account verification should be considered a public good because it allows us to know which information to trust, and that under the current pay-to-play verification system social media users are not well equipped for the “coming wave of AI-generated misinformation.”
- Natasha Amlani, “AI Exploitation and Child Sexual Abuse: The Need for Safety by Design.” Continuing our exploration of the impact of AI-generated materials on human rights, Amlani’s piece challenges us to think about how deepfake child sexual abuse imagery affects child safety and strains the child safety reporting system. To mitigate some of these harms, she implores tech companies to adopt safety by design when launching new products and features that have the potential to impact children.
The Impact of AI on Marginalized Communities
- Rebecca Hamilton, “The Missing AI Conversation We Need to Have: Environmental Impacts of Generative AI.” Hamilton reveals the hard truths about the environmental impact of generative AI as it consumes vast quantities of energy and water. She notes that the communities most likely to be impacted are those already most marginalized, particularly in the Global South, and that the narrative of AI development needs to be rewritten to ensure that these high environmental costs are understood by the global community.
- S. Priya Morley, “AI at the Border: Racialized Impact and Implications.” Morley examines how AI is being used as the latest tool of U.S. border externalization policies that prevent migrants from reaching U.S. territory and seeking asylum, as well as a tool for continued surveillance at and within borders. She argues that AI is exacerbating and compounding the racial discrimination already driving these policies, having a particularly harmful impact on Black migrants.
AI Governance, Rights, and International Law
- Michael Karanicolas, “Governments’ Use of AI is Expanding: We Should Hope for the Best but Safeguard Against the Worst.” Karanicolas examines how AI expansion across U.S. government agencies has the potential to chip away at fundamental rights. He underscores the need to ensure appropriate oversight of AI tools and suggests several models for how that oversight could be structured. In particular, he recommends the creation of a specialized, independent, multi-stakeholder body that can push back against poor decision making, ensure transparency, and increase public trust in AI systems.
- Sarah Shirazyan and Miranda Sissons, “How Can AI Empower People to Exercise Rights and Freedoms Guaranteed under International Human Rights Law?” Writing from the perspective of team members at Meta, Shirazyan and Sissons discuss some of the ways in which AI can advance the rights and freedoms guaranteed under international human rights law. In particular, they explore how AI can strengthen freedom of opinion and expression by improving access to information, skills, and knowledge and by empowering expression; equality and non-discrimination by increasing accessibility and language inclusivity; and freedom from physical and psychological harm through consistent AI-driven content moderation and by protecting human moderators from the most harmful content.
- Marlena Wisniak and Matt Mahmoudi, “Beyond AI Safety Narratives: How to Craft Tech-Agnostic and Neo-Luddite Futures.” Human rights advocates Wisniak and Mahmoudi argue that our future must be centered on social justice and the realization of rights, rather than on the AI-driven pursuit of techno-solutionism – the idea that technology can solve any problem. They underscore that approaches to AI governance must be grounded in the existing international human rights law framework, which provides substantive and procedural rights that can protect individuals from some of the worst potential impacts of AI.
Taken together, the contributions to this symposium provide a rich picture of how AI can be used both to uphold and to violate human rights. Widespread AI use is inevitable – policymakers must move quickly to ensure that fundamental rights and freedoms are protected around the world.