“Technology favors tyranny,” wrote historian and philosopher Yuval Noah Harari. This sentiment echoes growing concerns that rapid advances in Artificial Intelligence (AI) will enable new forms of digital authoritarianism and erode democratic norms. From biased algorithms to mass surveillance, AI-enabled technologies could exacerbate inequalities, heighten data protection concerns, and introduce new risks to individual safety, national security, and possibly even the fate of humanity itself.
Underlying these problems is a fundamental concern about “AI power” – the potential for AI to shape the prospects, options, and attitudes of human beings. In an increasingly connected world, AI-based tools allow ever fewer individuals to shape the choices of many, at ever larger scales, with potentially catastrophic results. Governments, for example, use AI to surveil dissidents, track undocumented immigrants, influence elections, and allocate scarce resources. Corporations, too, rely on AI to determine the goods, services, and content available to consumers. These power shifts are already underway and poised to intensify in the next decade.
In the race to acquire AI power, some will inevitably be left behind. Only a small group of countries and corporations controls access to the infrastructure, computing hardware, and data needed to build and train advanced AI models. Those same actors dominate the debate on AI governance, further entrenching the secrecy, complexity, and opacity of AI systems.
The concentration of AI power raises important and difficult questions: Who benefits from AI power and what rights do they have? How does AI power create or alter human rights, duties, and obligations? Who has the proper authority to exercise AI power and under what conditions? What institutional, policy, and legal guardrails are needed to ensure this power is exercised in legitimate, just, and accountable ways?
In the wake of recent policy proposals on AI – including in the United States, in the United Kingdom, and at the G7 – it is more crucial than ever to debate these questions. By bringing together leading technologists, political scientists, legal scholars, and ethicists, this Symposium offers recommendations to mitigate the costs of these technologies and secure their just rewards.
The Symposium includes the following articles, with more published each week.
- Simon Chesterman, “The Tragedy of AI Governance.”
- Gavin Wilde, “The Path to War is Paved with Obscure Intentions: Signaling and Perception in the Era of AI.”
- John Zerilli, “Process Rights and the Automation of Public Services through AI: The Case of the Liberal State.”
- Arthur Holland Michel, “Is AI the Right Sword for Democracy?”
- Talita de Souza Dias and Rashmin Sagoo, “AI Governance in the Age of Uncertainty: International Law as a Starting Point.”
- Kayla Blomquist and Keegan McBride, “It’s Not Just Technology: What it Means to be a Global Leader in AI.”