Editor’s Note: This article is the final installment of our Symposium on AI and Human Rights, co-hosted with the Promise Institute for Human Rights at UCLA School of Law. The authors work at Meta, but their views do not represent company policy. 

The rapid proliferation of AI and generative AI tools has sparked a vibrant public debate over the past year about the near- and long-term risks associated with their adoption and use. Though robust, the current debate has focused primarily on the risks, rather than the benefits, of these technologies. We should not only acknowledge the risks of generative AI – a subset of artificial intelligence that generates novel text, audio, and video content – but also recognize the crucial potential of these technologies to advance human rights.

AI has enormous potential to enhance every human right and freedom. Devoting more effort to understanding the human rights benefits of these technologies will not only improve the public conversation but also help harness their potential. For example, AI can improve disease surveillance and predict disease outbreaks, allowing people to better enjoy their right to physical and mental health. The use of AI for clinical decision-making and faster MRI scans can lead to improved health outcomes and access to quality healthcare. AI can enhance public services by making them more efficient and accessible, and can optimize government resource allocation and budgeting, thereby advancing citizens’ right to participate in government and to access public services without discrimination. Humanitarian organizations can harness the power of big data to enhance society’s response to disasters and crises globally, including by identifying areas of greatest need. This in turn can enable affected communities to exercise a host of rights, including the right to adequate housing.

This article focuses on three fundamental rights and freedoms: freedom of expression and access to information; freedom from physical and psychological harm; and equality and non-discrimination. These rights and freedoms are enshrined in the Universal Declaration of Human Rights (UDHR) and are among those that the Office of the United Nations High Commissioner for Human Rights has identified as being at risk from generative AI.

Freedom of Expression and Access to Information

Under Article 19 of the UDHR, “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

AI and large language models (LLMs) — a specific category of generative AI models that can understand and generate human-like text — enhance people’s ability to access information, an integral part of the freedom of expression. These tools allow people to exercise their freedom of expression rights in novel ways, including:

  • Instant access to information and exchange of ideas: Through generative AI tools, people can instantly access a wide range of information on various topics, synthesize complex information, exchange ideas, and explore different perspectives by engaging in conversations with AI agents. Unlike the static search results of the traditional internet, LLMs offer a conversational and interactive experience. Users can ask personalized questions, get direct answers, and explore complex topics in a more natural way, enabling more precise and intuitive information retrieval.
  • Democratizing skills and knowledge: Generative AI tools can serve as a powerful learning resource, providing explanations and insights that help individuals understand and advocate for human rights, democracy, and social justice, especially in countries where governments control access to information. By leveraging LLMs, small and grassroots organizations can now access capabilities that previously required significant resources and large teams, such as graphic design and coding. This enables community-based legal and support organizations to create engaging visual content, reach wider audiences, convey stories effectively, and promote social change. For example, This Climate Does Not Exist is a project driven by a group of scientists that harnesses AI to create images of personalized climate impacts to raise awareness about environmental devastation and overcome barriers to action. More broadly, democratizing access to technical skill sets through open source LLMs creates a more equitable distribution of economic opportunities, fostering innovation and a more inclusive and stable economic landscape. Social and economic stability is often a precursor to the establishment and sustenance of rule of law and rights-based systems.
  • Empowering satire and artistic expression: International human rights law provides broad protections for artistic expression, including satire and other uses of art to challenge people and institutions. Throughout history, satire and parody have served as powerful forms of defiance against authoritarian regimes around the world. Just as the internet redefined satire by providing a platform for creators to produce and disseminate satirical content, amplifying their voices and reach, generative AI tools for image production can further spark conversations about inequality, injustice, and other power dynamics. For example, artists have used AI-generated art to parody politicians attempting to push anti-LGBTQ+ bills.

Freedom from Physical and Psychological Harm

Article 3 of the UDHR affirms everyone’s right to life, liberty, and security of person. Similarly, under Article 16 of the African Charter on Human and Peoples’ Rights, “Every individual shall have the right to enjoy the best attainable state of physical and mental health.”

Safer online environments can support and advance these rights. Generative AI and LLMs can significantly contribute to content moderation, mitigating risks to individuals’ well-being by enhancing child safety, thwarting violent incitement, and addressing behaviors and content used to groom potential victims or lure individuals into human trafficking. These technologies are already showing promise in identifying and mitigating online bullying, harassment, hate speech, and incitement to violence, safeguarding individuals’ right to security and well-being. This potential is evident in two key examples.

  • Identifying and addressing potentially harmful content: Identifying and addressing potentially harmful content is a critical challenge for online platforms. AI remains an essential tool in this effort, enabling platforms to scale their policy decisions and improve their response to evolving threats. AI technologies have already been useful in identifying near-duplicates of previously removed content and streamlining efforts to address false or misleading content online. Similarly, AI systems like Few-Shot Learner can adapt quickly to new and evolving types of harmful content, supporting over 100 languages. Building on these advancements, Meta has started testing LLMs by training them on its Community Standards to help determine whether a piece of content violates those policies (a simplified sketch of this prompt-based approach appears after this list). These initial tests suggest the LLMs can perform better than existing machine learning models, or at least enhance tools like Few-Shot Learner, and we’re optimistic generative AI can help us enforce our content policies in the future. These developments have the potential to significantly improve the accuracy and efficiency of content moderation, freeing up capacity to focus on more complex cases.
  • Consistent application of platform policies: Most online platforms rely on a combination of human moderators and automated systems to identify potentially harmful content, determine whether platform policies were violated, and take down (or otherwise restrict) posts as necessary. Human moderation decisions can vary based on personal observations and interpretations, leading to inconsistencies in judgment calls. Automating parts of the content moderation process with LLMs may help reduce human bias and emotion, improving the reliability and accuracy of policy enforcement and reducing the likelihood of erroneous decisions. As LLMs mature and support more languages, they may provide a pathway to more equitable deployment of content moderation across languages. Trust and safety experts Dave Willner and Samidh Chakrabarti have persuasively outlined the potential for LLMs to provide more consistent and effective content moderation than human teams. They argue that, compared to human teams, LLMs are likely to be easier to set up, simpler to supervise, and more capable of maintaining consistency. Willner and Chakrabarti also assert that to achieve the full potential of LLMs, content policies must be crafted specifically for LLMs, with minimal subjectivity, clear definitions of key concepts, and granular categorizations. Ultimately, effective content moderation is vital for fostering safe and inclusive online environments, particularly for marginalized communities that may be disproportionately affected by moderation mistakes.
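To make this concrete, the following is a minimal, hypothetical sketch of prompt-based policy classification with an LLM. The policy text, prompt wording, and the call_llm placeholder are illustrative assumptions for this article, not Meta’s actual Community Standards or production systems; in practice, any such classifier would be fine-tuned and extensively evaluated before influencing enforcement decisions.

```python
# Illustrative sketch only: a prompt-based policy classifier.
# `call_llm` is a placeholder for any instruction-tuned LLM endpoint
# (hosted API or local open-weights model); the policy text below is
# invented for this example and is not an actual platform policy.

POLICY = (
    "Content must not threaten violence against a person or group, "
    "or praise or encourage such threats."
)

PROMPT_TEMPLATE = (
    "You are a content policy classifier.\n"
    "Policy: {policy}\n\n"
    'Post: "{post}"\n\n'
    "Reply with exactly one word on the first line, VIOLATING or NON-VIOLATING, "
    "followed by a one-sentence rationale on the second line."
)


def call_llm(prompt: str) -> str:
    """Placeholder: route the prompt to whatever LLM is available."""
    raise NotImplementedError("Connect this to a model endpoint of your choice.")


def classify(post: str) -> dict:
    """Ask the model for a label and a rationale, then parse the reply."""
    reply = call_llm(PROMPT_TEMPLATE.format(policy=POLICY, post=post))
    lines = reply.strip().splitlines() or [""]
    label = "NON-VIOLATING" if lines[0].upper().startswith("NON") else "VIOLATING"
    rationale = lines[1].strip() if len(lines) > 1 else ""
    # In practice, low-confidence or borderline cases would be escalated to human review.
    return {"label": label, "rationale": rationale}
```

In a real deployment, a classifier along these lines would be paired with confidence thresholds, appeals, and human review rather than acting on its own.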

Equality and Non-discrimination

Under Articles 1 and 2 of the UDHR, all human beings are “born free and equal in dignity and rights,” without distinction of any kind, such as “race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.” Once again, AI has tremendous potential to promote these fundamental rights, including via:

  • Improving accessibility: AI-powered captioning, image recognition, and translation tools improve accessibility. Image recognition tools can help people who are visually impaired better navigate both the internet and the real world, thereby promoting equality and non-discrimination. For example, Ray-Ban Meta smart glasses, with advanced camera technology and built-in AI, can provide real-time image processing and object recognition, converting visual information into speech. Users can ask questions about their surroundings, receive auditory descriptions of their environment, have text read aloud, or get directions. This technology has the potential to greatly improve accessibility and independence for individuals with visual impairments. Similarly, Microsoft’s Seeing AI mobile app serves as a visual assistance aid by generating highly detailed descriptions of photos, recognizing familiar faces, and allowing users to ask natural language questions about images or documents. Humanoid robots, like QTRobot, use facial expressions, gestures, and games to teach children with autism about communication, emotions, and social skills. Ultimately, generative AI may support people with disabilities by rapidly improving assistive technologies and robotics to provide tailored educational and healthcare approaches.
  • Expanding language inclusivity: Generative AI tools such as language translation models help bridge language gaps and make information accessible to vastly more people worldwide. This is potentially transformative. Currently, only 12 languages are strongly represented on the internet, with English dominating about 55% of online content. This leaves most of the world’s 7,000 languages underrepresented or absent from the online world. The development of more advanced language support tools can promote equality by ensuring individuals can access resources and opportunities regardless of their linguistic and cultural background. For instance, Meta’s No Language Left Behind (NLLB) initiative provides open-source AI models capable of delivering high-quality translations between 200 languages — including lesser-spoken languages like Asturian and Luganda (a simple sketch of using such a model appears after this list). Similarly, foundational multimodal models for speech translation, like SeamlessM4T, can help translate and transcribe across speech and text. Google Cloud’s Translation API can help businesses accelerate translation use cases with AI. AI and LLMs can break down language barriers, allowing people to communicate with anyone, anywhere, regardless of their language preferences. By leveraging this technology, international human rights organizations can translate critical resources, such as advocacy materials, into diverse languages, helping them expand their reach and empower marginalized communities to access vital information and assert their rights.
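As a concrete illustration of the translation capability described above, the sketch below uses the Hugging Face transformers library with an openly released NLLB checkpoint. The checkpoint name, FLORES-200 language codes, and example sentence are assumptions made for illustration; translation quality for lower-resource languages should be checked with native speakers before any real-world use.

```python
# Sketch: machine translation with an open NLLB model via Hugging Face transformers.
# Assumes the facebook/nllb-200-distilled-600M checkpoint and FLORES-200 language
# codes ("eng_Latn" for English, "lug_Latn" for Luganda); adjust as needed.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="lug_Latn",
    max_length=200,
)

text = "Everyone has the right to freedom of opinion and expression."
result = translator(text)
print(result[0]["translation_text"])  # the model's Luganda rendering of the sentence
```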

Striking the Right Balance and Risk Mitigation Strategies   

AI has the potential to promote freedom of expression, equality, and safer online environments. In order to fully realize these benefits, appropriate safeguards and risk mitigation strategies must be put in place. Collaboration among diverse stakeholders – including industry, the human rights community, and policymakers – is essential for ensuring AI tools are designed and deployed in ways that promote and protect fundamental rights, while mitigating potential risks.

Without a responsible approach, this technology carries notable and well-known risks. For example, without careful consideration, AI models may learn existing problematic biases (e.g., political, gender, racial, or religious) and stereotypes from the data they are trained on, reproducing them in generated outputs. Researchers and users have observed examples of political bias, such as when an LLM provides information about a politician from one side of the political spectrum but fails to do so for a competing politician from the other side. This may compromise individuals’ right to non-discrimination and lead to unequal outcomes for various social groups. Models may hallucinate, generating misleading or false information. Threat actors may use the technology to amplify disinformation campaigns. Models can inadvertently disclose private information or be used by bad actors to create synthetic non-consensual intimate images, grossly violating people’s right to privacy.

How can these risks be addressed in practice? Mitigating AI hallucination and bias requires training models on diverse and representative datasets (including high-quality non-English data); strategic pretraining to mitigate political bias; bias audits and monitoring; de-biasing techniques; standardized AI safety benchmarks and trustworthiness measurements; and human oversight and review. The increasing use of synthetic data also holds great potential to mitigate privacy risks by allowing organizations to work with data that does not reveal personal information. Mitigations against risks to political participation rights (e.g., content that attempts to interfere with voting) include direct and indirect disclosure tools, such as content labels, watermarking, and content provenance and authenticity signals. These also include norms and expectations around transparency, disclosure, and distribution by creators, publishers, and distribution channels, such as those defined in the Partnership on AI’s Responsible Practices for Synthetic Media.
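As a toy illustration of what “bias audits and monitoring” can look like at the simplest level, the sketch below computes how often a hypothetical model decision (e.g., content flagged or a request refused) is triggered for items associated with different groups, and flags large gaps for human review. The data, group labels, and threshold are invented for illustration; real audits rely on carefully constructed evaluation sets, established fairness metrics, and statistical rigor.

```python
# Toy bias audit: compare per-group rates of a model decision and surface
# large disparities for review. Records, groups, and threshold are illustrative.
from collections import defaultdict

records = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": False},
]


def per_group_rates(rows):
    """Fraction of items flagged within each group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        flagged[row["group"]] += int(row["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}


def audit(rows, threshold=0.2):
    """Flag for review if the gap between the most- and least-affected groups exceeds the threshold."""
    rates = per_group_rates(rows)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "needs_review": gap > threshold}


print(audit(records))
# e.g., {'rates': {'A': 0.33, 'B': 0.67}, 'gap': 0.33, 'needs_review': True}
```

A gap flagged by such a check would not itself prove discrimination, but it would prompt closer qualitative review of the underlying policy and training data.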

Cultural, social, and historical differences significantly shape how AI systems are experienced by, and affect, different communities. Understanding local context is therefore crucial to identifying and mitigating potential human rights risks from AI. To effectively address these risks, it is important to engage with diverse stakeholders and incorporate their perspectives into AI development and policy-making. This can involve collaborating with human rights experts, advocacy groups, and other external stakeholders to gather well-informed feedback.

In practice, this means seeking input from a broad range of stakeholders whenever new policies or products are developed, including those related to AI. By gathering diverse perspectives from experts and advocacy groups across the political spectrum and regions, AI labs and technology companies can better understand the potential impacts of their technologies and ensure they align with human rights principles. Our own work in human rights and stakeholder engagement at Meta has underscored the importance of gathering a diverse range of well-informed perspectives. For instance, our recent approach to labeling AI-generated images on Facebook, Instagram, and Threads was informed by external engagement with hundreds of stakeholders worldwide, helping us understand the benefits and drawbacks of transparency with AI-generated content.

AI is a transformative technology with the potential to greatly enhance human rights and freedoms. However, the initial phase of public discussion around generative AI and LLMs has focused on analyzing the technology’s societal risks, including whether and to what degree these tools affect the information ecosystem and pose novel harms. As policymakers race to regulate this technology in response, it is important not to overlook the many human rights benefits that AI will bring with appropriate risk mitigation strategies in place.

IMAGE: Artistic rendering of AI. (Photo by mikemacmarketing via Flickr, CC BY 2.0)