(Editor’s Note: This article is the first in a series that will dive deeper into the foundational barriers to the broad integration of AI in the IC – budget, acquisition, risk, oversight, and culture. This first article provides an overview of the issues.)
Congress wants to pour hundreds of billions (yes, with a B) of dollars into the federal government to increase the nation’s competitiveness in emerging technology and, in particular, to accelerate the development of artificial intelligence (AI) technologies that are vital to protecting our national security. The bipartisan support shown for the U.S. Innovation and Competition Act (USICA) – the bill that provides these funds – is a noteworthy and important step in ensuring the United States is resilient and competitive in the 21st century. And that kind of money is nothing to sneeze at. But can the federal government manage to spend it?
Thanks to China’s aggressive, whole-of-nation approach to emerging technology and the ubiquity of AI technologies that adversaries big and small are now poised to exploit, there is a sudden urgency around AI and national security. In addition to the USICA, the National Security Commission on AI has produced sixteen chapters of recommendations over the last three years (along with several quarterly, interim, and special topic reports), and several prominent think tanks have produced their own reports on AI and national security. While open questions about ethics and proper implementation remain, there is no question that the United States must address them and quickly take advantage of AI if it is to remain a leader on the world stage.
The USICA, a $200 billion proposal, dramatically expands federal government support for technological growth and innovation, and strengthens U.S. national competitiveness. It proposes new incentive programs and increased research and development funding in areas like AI and microelectronics, as well as the creation of new offices and increased public investment, lending, and trade abroad to support key technology focus areas and to counter China’s influence. The USICA is an important measure to protect and promote U.S. research and innovation and to drive national strategic advantage on the global stage.
However, having spent the last 20 years in the U.S. government, 15 of them in the Intelligence Community (IC), I believe that without a visible, concerted effort to revisit current budget, acquisition, risk, and oversight frameworks – led by the Director of National Intelligence (DNI) and IC leadership – the IC will not be able to effectively identify, develop, and incorporate in real-time the technological advances needed to keep its competitive edge, regardless of how much USICA money comes its way.
AI systems have the potential to transform how the IC makes sense of the world, rapidly and at scale. To discover secrets and provide policymakers with exquisite intelligence and insights at mission speed, the IC must be able to quickly and accurately sort through vast amounts of data to find patterns, uncover connections, understand relevance, and draw conclusions in real-time. Without the advantages that evolving AI will continue to bring, the IC will quickly be outmatched by the nation’s formidable adversaries.
But, as we have seen before, money is necessary but not sufficient. In the immediate aftermath of the September 11, 2001 attacks, Congress threw a great deal of money at the counterterrorism problem, much of it aimed at helping the IC “connect the dots.” This goal included a steady focus on sharing information across agency lines with those who needed it so that the IC would never again fail to discover or understand information it already had in its holdings. However, the IC quickly realized that money alone could not solve that problem. Among other things, the IC had to tackle foundational issues involving cultural resistance and inconsistent, complex authorities. The IC has come a long way on information sharing 20 years after 9/11, but it still has more work to do.
Similarly, when it comes to the IC being able to take advantage of significant advances in AI, the jury is still out. To take full advantage of what AI has to offer, the IC will have to work closely with the private sector, where much of the emerging technology is being developed. Unfortunately, when it comes to that kind of partnership, the IC’s track record is mediocre, at best. Time and again we have seen that the national security community – including in particular the Department of Defense (DOD) and the IC – has serious, basic hurdles to clear if it is to partner smartly with the private sector to harness innovation and emerging technology like AI. Existing budget and acquisition processes are laborious, complex, and slow; the IC neither sufficiently understands nor accepts risks associated with AI technologies; and congressional oversight processes compound these issues. Moreover, within the national security community the cultural barrier to trying new things persists, despite pockets of jaw-dropping innovation and creativity.
DOD has started tackling some – but not all – of these issues. It has recently piloted new approaches to software procurement, received important legislative flexibilities to deal with acquisition bottlenecks, and driven the need for innovation and emerging technologies from the most senior levels of the Department. This is an encouraging start.
Unfortunately, the IC lags significantly behind DOD, even though it is part of the same national security community. The IC and DOD must and do work hand-in-hand when it comes to national security; that partnership is key to ensuring that the U.S. government operates in a unified, informed manner when executing critical national security missions. And it is not uncommon that if DOD needs greater flexibility or new authority, the IC could also use some version of the same. This is not only beneficial for the IC’s distinct purposes, but also enormously helpful in ensuring the IC and DOD can work seamlessly together.
But the IC has not developed the same pilots, received the same legislative tools, or enjoyed consistent senior-level support for the necessary changes that will enable AI in the IC. The IC must learn from DOD’s advances, quickly modify and adapt DOD’s successes, and work closely with DOD going forward to more holistically create and scale foundational improvements.
Fundamental Issues – Falling Through the Cracks
The IC’s budget and acquisition processes rise to the top of the issues the IC must address before it can take full advantage of AI. These processes were created for good reason – they help ensure fairness, clarity, and proper use of taxpayer dollars. But their inflexibility and complexity reflect a Cold War view that the threats to our nation are stable and predictable, and that the government is primarily responsible for the development of cutting-edge technology. Neither of these beliefs remains true, and so the outdated acquisition and budget frameworks within which the IC still operates have become arduous and incompatible with what this moment demands.
Specifically, the IC’s budget processes are neither flexible nor fast enough to adapt to evolving requirements for AI; they are mired in a three-year budget cycle that demands early certainty in cost, schedule, and requirements. The IC’s acquisition processes are also complex, rigid, and slow, confounding private sector partners and IC professionals alike, and deterring smaller organizations from even attempting to partner with the IC. These two interrelated processes prioritize strict compliance with rules and regulations, and predictable cost and schedule, over performance and mission effectiveness – a problematic approach if your goal is a high-performing and effective national security community. The IC must introduce more adaptability and flexibility into these activities, not only to tolerate the fluctuations that come with the development of AI, but to encourage and embrace its evolution.
In addition, emerging technologies like AI are not always initially successful and bring a substantial risk of failure. However, the federal government is not a Silicon Valley start-up; it does not look kindly on failure, no matter the potential future payoff. The government is understandably hesitant to take risk, especially when it comes to national security activities and taxpayer dollars. But it rarely distinguishes between big failures, which may have lasting, devastating, and even life-threatening impact, and smaller failures, which may be mere stumbling blocks with acceptable levels of financial impact that result in helpful course corrections. Failure of any kind is treated as unacceptable, and those held accountable can pay dearly in reputation, upward mobility, and even personal liability.
To speed and scale the integration of AI into the IC, the DNI must promulgate an IC-wide AI risk assessment framework that outlines acceptable and unacceptable failure, risk parameters, and levels of decision-making authority, among other things. Such a framework would provide a strategic understanding of, and backing for, various types of risky actions and potential failures, and individual IC organizations could then tailor it to their unique missions.
And both the IC and Congress must radically re-think oversight processes. Congressional oversight is critical and non-negotiable. However, oversight today is focused on certainty, clarity, and stability of IC actions, characteristics that are fundamentally incompatible with the nature of emerging AI technologies. AI is, by design, always evolving, learning, adjusting, and finding unpredictable solutions to intractable problems; this is a feature, not a bug. So there is a fundamental misalignment between the development and use of AI technologies and current oversight processes that leads to unsatisfying interactions for both the executive and legislative branches, and often results in discord, distraction, and delay. Congressional intelligence oversight processes must become more adaptive and accepting of change and uncertainty, even as they must continue to ensure appropriate spending of taxpayer dollars.
It is past time for a paradigm shift away from the IC’s traditional, linear processes to more iterative and dynamic approaches that will speed and transform the way the IC purchases, integrates, and manages the use of AI. These four foundational areas – budget, acquisition, risk, and oversight – are critical to the IC’s ability to develop, acquire, scale, integrate, and use AI rapidly in response to today’s national security challenges. If they are not addressed, the IC workforce will be loath to pursue risky cutting-edge technologies like AI. But one more crucial area also needs attention.
The Elephant in the Room
It is no less true in the IC than it is elsewhere that culture eats strategy for breakfast. No matter how strong and convincing your strategy, the human component will determine the success of your activities. In the IC, there are pockets of creative, smart, and motivated innovators with the patience and fortitude to brute-force new technologies like AI into the system because they recognize that new ideas and technology will transform the world of intelligence. However, the underlying culture of the IC does not readily support innovation or those seeking to innovate – whether it is new processes, new activities, new widgets, or new ideas, there is often a tenacious resistance to change that crushes all but the hardiest innovators.
This is not because people are malicious or intentionally want to stymie progress. Rather, it is easier, more comfortable, and safer to stick with what has been done in the past – even if it was not wholly successful. The repetition and routine of familiar processes and activities provide a sense of security and confidence that we are on a sound path with little risk of failure. But this kind of approach also assumes our environment is static: that what we needed before is what we need now, and that the approach of the past is acceptable for the future.
We do not live in a static environment, of course, as is evidenced by the Senate’s swift passage of the USICA. The world around us is changing exponentially, and we now know that what has worked well in the past does not work well for many current and future requirements. The time to act is now. Widespread access to emerging technology like AI has become one of today’s crises – to meet it, the IC needs to shift its culture to one that embraces change and innovation. The IC must seize this moment to adapt its approach and processes in significant ways – in terms of speed, agility, and willingness to take risks.
And Congress must broaden its support for AI beyond the wallet – it must embrace and enable more flexibility and agility in the IC’s budget, acquisition, risk, and oversight processes in order to realize the vision the USICA promises.
Because it is much like the old question of whether a tree falling in the forest with no one to hear it makes a sound: if Congress appropriates vast sums of money, but no one can spend it, will it make a dent?