(Editor’s Note: This article is the sixth installment of the Symposium on AI Governance: Power, Justice, and the Limits of the Law).
Countries are competing to become global leaders in Artificial Intelligence (AI), creating new geopolitical tensions in the process. Discussions about these AI-driven transformations are usually framed as a new great power competition between the United States and China, but they are not the only nations aiming to become "AI superpowers."
The European Union is often described as an AI "regulatory superpower." The United States and China are battling to become AI "technological superpowers." Recently, yet another contender has entered the arena: the United Kingdom is positioning itself as a global AI "convening power," as demonstrated by the AI Safety Summit last month, which focused on the frontier AI technologies that some believe pose the greatest risk to humanity.
Many states proclaim a desire to become global leaders in AI, but what exactly this means remains nebulous; what is clear is that no single factor will define such leadership. Instead, demonstrating true leadership in AI will require excellence across multiple domains: technological innovation, domestic implementation, regulation, and moral legitimacy.
Technological Innovation
Most measures of AI leadership focus narrowly on technological innovation and on capabilities traditionally aligned with hard power, such as military or industrial strength. Here, the United States is the clear leader, though China is close behind, with a growing pool of talent and academic research as well as state-backed investment.
Though the current discourse on technological dominance in AI focuses on state-of-the-art "foundation models," there are numerous other forms AI can take that are often overlooked. These systems – such as facial recognition algorithms deployed at border crossings – are fairly standard from a technical perspective, but they may have the greatest impact on society.
While building technological capabilities is important, advanced technology is not the only – or even the most important – metric of AI leadership.
Domestic Implementation of AI
To create value, AI must be used; possessing advanced AI capabilities is not enough. AI leadership therefore necessitates widespread and comprehensive integration of AI-based systems throughout a state’s economy and across industries.
Successful integration of AI into businesses will boost productivity and efficiency, driving economic growth and competitiveness. Governments that adopt and implement AI-based systems will be able to provide public services in innovative, accessible, and cost-effective ways. Across industries and sectors, the ability to integrate and work with AI will yield clear and tangible benefits, resulting in strategic advantages over geopolitical rivals.
There are also substantial risks, especially for democratic societies, that may accompany the adoption of AI applications at such a scale. These include new opportunities for surveillance, threats to privacy, risks of bias and discrimination, and the proliferation of misinformation alongside a degradation of trust.
To maximize the benefits of AI, societies will need to navigate a complex landscape in which increasing levels of AI adoption can negatively impact their most closely held values. This difficult balancing act will come to define each country's potential for AI leadership.
Regulatory Environment
Developing a strong domestic regulatory regime and leading the creation of new international agreements on the regulation of AI will be another key step on a state's path toward global AI leadership. Most debates on the regulation of AI highlight the need to strike a balance between supporting innovation and curtailing potential risks. Yet what these risks are, and what they may become in the future, remain subject to debate and highly dependent on local context, with some countries at greater risk than others.
Given the importance of this opportunity, states are beginning to position themselves as the new AI rule-makers. This is happening both through domestic regulatory initiatives that will have transnational knock-on effects, such as the recently issued Executive Order in the United States or the PRC's new Generative AI "Measures," and through international fora and initiatives such as the AI Safety Summit, the G7 Hiroshima Process, and the Global Partnership on AI (GPAI).
The European Union and China are currently the global AI regulatory leaders, but this can always change. The former is highly experienced in passing complex digital legislation and driving its transnational adoption, and it is in the final stages of passing the most robust piece of AI regulation to date – the EU AI Act. The latter has also passed key legislation on AI, with specific and detailed attention paid to recommendation algorithms and deepfakes.
There is still ample opportunity to set the "rules of the game" for AI. The evolving regulatory environment, both at home and on the international stage, will be a key arena in which countries compete to establish national prominence and leadership.
Moral Legitimacy
Finally, to lead in AI, states must convince their citizens – and the world – that their approach to developing and deploying AI is moral and legitimate. Moral legitimacy is a key component of leadership in AI, particularly given the potential for AI-based technologies to undermine democracies and threaten fundamental individual rights.
At a time when many are beginning to question the sustainability of the current world order, getting this right is essential. The states that succeed in promoting their AI governance models as ethical and legitimate will have a significant advantage in defining the rules of this contested space.
This is precisely what China is doing today, promoting its model of the world as the moral alternative to the currently dominant Western worldview. Attempting to position itself as a legitimate global leader, China is seeking international recognition for its AI leadership; to date, most international initiatives have excluded China by design. In the face of international skepticism about Beijing's sincerity in global engagements, China has still made significant progress with AI.
This progress is seen in China's increasing technical prowess, its domestic advances in regulating and implementing AI, and growing recognition from traditionally cautious governments. Furthermore, the Chinese government has launched the Global AI Governance Initiative, initiated early and comprehensive AI regulations, participated in the AI Safety Summit, and is having success marketing its deployment of existing AI capabilities for "smart cities" as a standard of good governance.
While it is possible for the West to find common ground with China on AI, growing mistrust between China and the West severely limits any opportunity for collaboration. As the West actively works to counter Chinese developments, the salience of the Chinese approach is growing, particularly in countries where Beijing’s Digital Silk Road initiatives have been most successful. In fact, from 5G to smart cities, China’s relatively affordable AI-related technology and infrastructure are already far more dispersed throughout the world than many currently realize.
Bringing It All Together
Popular narratives surrounding global AI leadership too often portray a simplistic bilateral technical arms race between China and the United States. The truth is more nuanced. By hyper-focusing on technological dominance, these narratives give insufficient attention to equally important components of AI leadership – securing global moral approval, for example. This is dangerous and already observable today, with China attempting to outmaneuver the United States and its Western allies in many important emerging economies.
Beijing has pioneered pragmatic, sector-specific AI governance frameworks and long-term strategic planning, enabling the swift translation of capabilities into public-sector applications. Yet this tight state-industry fusion risks unprecedented state overreach, and requirements to censor AI outputs are already limiting domestically developed solutions. Meanwhile, the United States leads in cutting-edge foundational research and boasts nascent governmental efforts at cross-domain risk management. However, lagging regulation and susceptibility to private-sector lobbying obstruct protections and equitable implementation. Europe, on the other hand, is having success developing new AI regulations for the world but is clearly behind when it comes to AI innovation and development.
Given that AI's multifaceted impacts will pose specific challenges for open and democratic societies, it is essential for the West to collectively counter and overcome the rising challenge from China. This requires not only developing new cutting-edge AI-based systems, but also investing in domestic AI implementation, building supportive regulatory frameworks, and demonstrating through practice the positive potential of AI for the world.