On Jan. 13, the U.K. government announced its long-awaited “AI Opportunities Action Plan,” which promises to ramp up “AI adoption across the U.K. to boost economic growth, provide jobs for the future and improve people’s everyday lives.” The plan, drafted by tech entrepreneur and venture capitalist Matt Clifford, features 50 recommendations that outline what might be required for the United Kingdom to supercharge its AI capabilities: building out an expanded data and computing infrastructure, capturing and nurturing AI talent, “unlocking data assets” for both the private and public sectors, and removing barriers to AI adoption. It also recommends creating a “U.K. Sovereign AI” unit to serve as a linchpin for public-private partnerships and as an instrument to support new and existing private sector frontier AI companies with direct investments and other measures that would help such startups and their CEOs thrive in the U.K. The plan recommends that this new government unit “packag[e] and provid[e] responsible access to the most valuable UK owned data sets and relevant research” and “facilitat[e] deep collaborations with the national security community.” The unit must be able to “remove barriers and make deals” and receive sufficient funding to “act quickly and decisively” in a fast-moving environment.
The plan for this new U.K. Sovereign AI unit echoes the U.S. landscape, notably the U.S. Defense Innovation Unit (DIU), which serves as a “U.S. taxpayer-funded venture capital firm […] for financing small startups developing products for military application.” De facto, the DIU acts as a conduit for injecting a dose of Silicon Valley spirit into Pentagon proceedings by advocating for the embrace of more risk with the promise of greater rewards. The proposed U.K. Sovereign AI unit seems to be imagined along similar lines. The U.K. AI Action Plan quite explicitly encourages building up a greater tolerance for “scientific and technical risk.” This is the language and ethos of venture capital investing, but with government funding: “move fast and break things” on the path to AI dominance.
While the document explicitly notes that regulation and oversight matter, and indeed states that “regulation, safety and assurance have the power to drive innovation and economic growth too,” these elements of the plan lack detail and repeatedly run into tension with the priority given to innovation. For example, the action plan would require “all regulators to publish annually how they have enabled innovation and growth” of AI, establishing a clear hierarchy in which innovation trumps regulation. The term used in the document is “pro-innovation regulation.” The challenges to responsible governance of AI are clear here. As a senior lawyer at Pinsent Masons noted, “If a regulator were to resolve an issue in favour of protecting rights rather than promoting innovation ‘at the scale of the government’s ambition’, then the action plan suggests they would face the possibility of being sidelined and overridden.” In other words, regulators are expected not merely to avoid impeding AI development but actively to promote it, and under the plan they will be held to account for doing so.
Here again, the action plan reflects current trends toward deregulation, experimentation, iteration, and post-hoc fixes, as well as toward prioritizing a speculative future benefit over safeguarding against foreseeable risks to human rights now. But a government is not a business. The parameters that (should) matter for governments are not the same as those that guide business principles, or indeed the technology industry at large. To be sure, a government has a responsibility to facilitate economic prosperity, but it must do so while also ensuring that its people do not come to undue and unnecessary harm. This includes conducting robust impact assessments before rolling out new technologies. It requires a measured approach based on needs, risks, and benefits, not an unbridled determination to “mainline AI into the U.K.’s veins.” Responsible governance requires prudence and caution. The question “what if AI is not the future?” should be front and center in crafting the U.K. AI Opportunities Action Plan, as should careful attentiveness to second- and third-order challenges, including the energy and water demands of data centers, privacy concerns around sensitive data, and the question of accountability when things go horribly wrong. The ethical stakes in AI governance are high.
With this plan, the U.K. government seems to want to have its cake and eat it too: to appear duly concerned about responsible AI (understood largely in narrow AI safety terms) while also being seen as in step with the growing global mandate for accelerated innovation (very narrowly defined as AI). As I have highlighted in the context of responsible AI governance in the military domain, there are intrinsic tensions between the aim of governing this technology responsibly and the interests and ethos of the private sector that produces it. Put differently, if the realities of AI are incommensurable with the ideals of responsible governance, then explicit attention must be paid to the limits of AI: where, when, and why not to use it.
To most non-industry observers, there is something unsettling about watching the government put all its eggs in the AI basket, despite ample and credible warnings that AI may soon plateau as a driver of progress and financial wealth. One might have hoped for a more extensive consultation, including a wider set of stakeholders, before the U.K. government announced that it would adopt all 50 recommendations, which are decidedly skewed toward industry and primarily reflect the perspectives of the technology-entrepreneur landscape.
Indeed, the language with which the plan was announced (and defended on various media platforms) by the U.K. Secretary of State for Science, Innovation, and Technology, Peter Kyle, is recognizably that of the technology industry. Whether it is the expressed concern that “the U.K. risks falling behind” and “there is no time to waste,” or the dismissive lament that “for too long we have allowed blockers to control public discourse and get in the way of growth in this sector” (emphasis added), the words uncomfortably echo the dramatic gripes about limits and regulation uttered by the more notorious U.S. technology oligarchs over the last decade.
But perhaps most disappointing was the launch speech by U.K. Prime Minister Sir Keir Starmer himself who, toward the end of his remarks, addressed the voices urging caution in a patronizing tone familiar from the tech industry, exhorting those in public service to be much bolder and press ahead unperturbed:
It’s entirely human: a new technology can provoke a reaction, a sort of fear, an inhibition, a caution, if you will. And because of the fears of a [small risk], too often, you miss a massive opportunity… So we’ve got to challenge that mindset, because actually, the far bigger risk is, that if we don’t go for it, we’re left behind by those who do. And that’s what I mean by totally rewiring government — be emboldened to take risks, as our brilliant entrepreneurs do, restless and relentless, because the prize within our grasp is the path to national renewal. And AI is the way.
Here, Starmer does three things. First, he frames caution as a hindrance, an unhelpful mindset. Second, he elevates maverick entrepreneurs and their risk-embracing ways to a position of moral superiority. Third, he adopts the quasi-religious tone of “national renewal,” promising a cleansing national rebirth through AI.
The language Starmer uses is straight out of the recent U.S. defence startup playbook. Illustrative here is a recent document by Shyam Sankar, chief technology officer of the defence company Palantir, titled “The Defense Reformation”: government processes are outmoded, too slow, and no longer fit for purpose; caution is a hindrance, and the market should be left to regulate what works and what does not; risks, creativity, and improvisation should be embraced; individual mavericks should be given leeway; “the only requirement is winning”; painful reform is required. The United States’ industrial base will, eventually, be resurrected, and prosperity and security will ensue. If this sounds somewhat exaggerated, it is because the key talking points of the AI advocacy discourse, in the defence sector and elsewhere, are often starkly overdrawn, hyperbolic, and speculative. It would have been reassuring to hear more about how the U.K. government will address the already existing and clearly foreseeable risks of AI.
I am not suggesting that a venture capital ethos underpins the entire action plan, but I do wish to point out that the language used to promote the plan might betray a certain mindset of its own — one that overtly buys into technology industry interests, perspectives, and priorities — and that this should concern the wider public, especially as the plan seems to address all aspects of government, including, presumably, defence.
The enthusiasm with which the plan aims to “rewire” government leaves policymakers and academics concerned with responsibility and rights in the realm of military AI somewhat ill at ease. The U.K. AI Opportunities Action Plan was released only a few days after the Defence Committee report on military AI, an expressly industry-focused report that insists the U.K. Ministry of Defence must more fervently embrace AI in defence and “move fast to avoid falling behind.” Like the AI Opportunities Action Plan, the Defence Committee report advocates for a cultural shift that accommodates more start-up-type AI companies and in which “decision-makers feel more empowered to pursue innovative solutions, to take calculated risks, and to acknowledge and learn from failures.” Especially with a technology like AI, the lines between civilian and military domains of application are increasingly blurred. Notably, the action plan’s author, Matt Clifford, holds investments in Helsing GmbH, one of Europe’s most highly valued military AI start-ups.
In December 2023, the House of Lords AI in Weapon Systems Committee issued a detailed report on AI in defence, based on witness input from a wide and diverse range of stakeholders. The report was appropriately titled “Proceed with Caution.” It seems that with the AI Opportunities Action Plan, the government has thrown caution to the wind in favour of an uncertain, speculative benefit. But where livelihoods are at stake, as they more often than not are in matters of government, caution should be a first principle, not an inconvenient afterthought.