On Tuesday, members of the House and Senate introduced new versions of the USA Freedom Act that would prohibit bulk collection of records under Section 215 of the Patriot Act, the FISA pen register authority, and national security letter statutes. The legislation, if passed, would result in significant changes to the National Security Agency’s bulk phone records program, raising questions about the impact such prohibitions could have on the Intelligence Community (IC). This makes it a good time to revisit analyses of the utility of bulk collection programs. The National Research Council (NRC) recently published one such analysis, “Bulk Collection of Signals Intelligence: Technical Options.”

The NRC’s report threatens to serve as the foundation for future thought about how the IC should respond to the age of big data, which is why its analysis is both important and problematic. Its baseline conclusion is that “[t]here is no software technique that will fully substitute for bulk collection where it is relied on to answer queries about the past after new targets become known.”

Outside of the narrowest technical context, this conclusion is fundamentally wrong. In practice, targeted approaches will more than substitute for bulk collection. 

The NRC Report. The NRC report is the result of a process started by Presidential Policy Directive 28, which asked the Office of the Director of National Intelligence (ODNI) to “assess the feasibility of creating software that would allow the IC to conduct targeted information acquisition [of signals intelligence] rather than bulk collection.” The ODNI requested that the National Academies form a committee to study the question. The NRC report is the product of that committee’s work.

The NRC’s conclusion is based on comparing prospective collection methodologies against intelligence time machines. An intelligence time machine is a concept first articulated to me by network security expert Eric Rescorla. The idea is that there may be some point in the future when an intelligence analyst will wish he had collected some piece of intelligence in the past. But he can’t know today exactly what intelligence he may want in the future, so any selective collection method always leaves open the possibility that he may fail to collect what he will need. If instead he collects everything, he has effectively built himself a sort of time machine: in the future, he can go back and reprocess the data to find the intelligence he didn’t know he would need at the time it was collected.
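
To make the concept concrete, here is a minimal Python sketch. Everything in it is hypothetical (the record format, the selectors, the function names); it exists only to illustrate why a bulk archive can answer questions about targets discovered after collection, while a targeted store cannot:

```python
from dataclasses import dataclass

@dataclass
class Record:
    selector: str  # e.g., an email address or phone number
    content: str

# Hypothetical traffic; every name here is invented for illustration.
stream = [
    Record("alice@example.com", "hello"),
    Record("bob@example.com", "hello back"),
]

def targeted_collect(stream, targets):
    """Keep only traffic matching selectors known at collection time."""
    return [r for r in stream if r.selector in targets]

def bulk_collect(stream):
    """Keep everything: the 'time machine'."""
    return list(stream)

def retrospective_query(archive, new_target):
    """Look back for a target identified only after collection."""
    return [r for r in archive if r.selector == new_target]

# A target discovered tomorrow can be found in the bulk archive...
hits = retrospective_query(bulk_collect(stream), "bob@example.com")
print(len(hits))  # 1

# ...but not in a targeted store built before the target was known.
missed = retrospective_query(
    targeted_collect(stream, {"alice@example.com"}), "bob@example.com")
print(len(missed))  # 0
```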

Here is how the NRC committee puts it:

A key value of bulk collection is its record of past SIGINT that may be relevant to subsequent investigations. If past events become interesting in the present, because intelligence-gathering priorities change to include detection of new kinds of threats or because of new events such as the discovery that an individual is a terrorist, historical events and the context they provide will be available for analysis only if they were previously collected.

This is the basis of the committee’s conclusion that no software technique will fully substitute for bulk collection. The committee qualifies this finding with sub-conclusions that address alternatives to bulk collection, but those alternatives are only a “partial substitute” because, in theory, they can’t provide as much value as an intelligence time machine.

Put in this context, the problem should be clear. It doesn’t take a presidentially mandated committee of distinguished technical experts to tell you that a time machine could be useful to have around; intelligence officials and surveillance reform advocates could probably all agree on that point. The committee’s conclusion, that a software technique allowing you to look into the past is theoretically better than one that doesn’t, is both true and trivial. That triviality makes it difficult to see how the report meaningfully contributes to debates about bulk collection. Worse, a conclusion built on so theoretical a premise breaks down when applied to the realities of intelligence analysis.

The Superiority of Targeted Collection. There is good reason to believe that the ability to retrospectively mine bulk intelligence traffic isn’t as valuable as the NRC expects.

The two broadest NSA programs to have received public scrutiny are its bulk phone records program (Section 215) and its Internet content collection program (Section 702). Several rigorous reviews of the Section 215 program (for example here, here, and here) have concluded that it is not an effective counterterrorism tool. At the same time, a consensus has emerged that NSA’s Section 702 program, a targeted program (albeit one that incidentally sweeps up the communications of some non-targets), is vital to national security.

These conclusions are the opposite of what one would expect from the NRC’s analysis. The NRC was content to limit the scope of its review to “technical aspects of signals intelligence,” without regard to the practicalities of intelligence work or the details of any particular bulk collection program. It is therefore able to argue that the conclusions from other reviews of NSA’s programs “were policy and legal judgments that are not in conflict with the committee’s conclusion that there is no software technique that will fully substitute for bulk collection.” Nonetheless, it is hard to read those other reviews and not conclude that there is something missing from the NRC’s report.

One can’t generalize about bulk collection from an analysis of just two actual intelligence programs. But experience fighting al-Qaeda over the last decade has shown that there are diminishing returns to data collection. In many of the most significant counterterrorism failures of that period, the problem was having too much information, not too little. Intelligence time machines don’t actually seem to work that well as intelligence tools. Bulk collection programs are inferior substitutes for targeted ones.

There are two fundamental reasons why this is the case.

First, intelligence analysis isn’t about looking back. It is about looking forward. This is the basic distinction that separates intelligence analysis from criminal investigations. Criminal investigations seek information about a crime that has taken place. Indeed, intelligence time machines are most valuable when our intelligence community has already failed, after a terrorist attack has occurred, when we are seeking to reconstruct the events that led to that attack.

Intelligence analysis involves anticipating threats, identifying the information needed to understand those threats, and then doing what is necessary to get that information. Intelligence professionals often talk about the intelligence life cycle: intelligence requirements drive collection operations, operations generate actual intelligence that is disseminated and analyzed, and analysts use that information to define new intelligence requirements. The intelligence life cycle entails an entirely different approach to intelligence collection than the one described by the NRC committee. The intelligence time machine tries to short-circuit the cycle by seeking to satisfy intelligence requirements before those requirements have actually been defined. In so doing, it breaks the iterative process that allows for actual knowledge accumulation within the IC.
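
For readers who think better in code, here is a rough sketch of that loop. The functions are illustrative stubs, not a model of any real system; the point is only that each round of analysis produces the requirements that drive the next round of collection, a feedback the intelligence time machine never participates in:

```python
# Illustrative stubs; no real collection or analysis happens here.
def collect(requirements):
    """Targeted collection driven by the current requirements."""
    return [f"raw intel on {r}" for r in requirements]

def analyze(raw):
    """Analysis turns disseminated raw traffic into findings."""
    return [f"finding from {item}" for item in raw]

def refine(findings):
    """Analysts use findings to define the next round's requirements."""
    return [f"follow-up on {f}" for f in findings]

def intelligence_life_cycle(initial_requirements, rounds=3):
    """Requirements -> collection -> analysis -> new requirements.
    A 'time machine' skips this loop entirely: it collects before
    any requirement exists, so nothing feeds back into the next round."""
    requirements = list(initial_requirements)
    knowledge = []
    for _ in range(rounds):
        findings = analyze(collect(requirements))
        knowledge.extend(findings)
        requirements = refine(findings)
    return knowledge

print(intelligence_life_cycle(["suspected plot X"]))
```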

Second, intelligence is all about the details, and bulk collection programs inherently sacrifice precision and detail for breadth. This is true in two respects. The content of communications is always the most valuable for intelligence purposes because it contains the details needed to power sophisticated analysis; bulk metadata programs will never cut it because metadata omits those details. Bulk content collection, conversely, has its own problems. Because the content of communications is typically unstructured, it is difficult to synthesize and to surface the most important details to intelligence analysts. Those details stay hidden in the haystack, never to be found unless the analyst already knows exactly what they are looking for. And again, analysts only know exactly what they are looking for once the intelligence community has already failed.
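
A toy illustration of the metadata/content distinction may help; every field and value below is invented:

```python
# Hypothetical records; all fields and values are illustrative only.
metadata_record = {
    "from": "zazi@example.com",        # who contacted whom...
    "to": "contact@example.pk",
    "timestamp": "2009-09-06T14:02:00Z",
    "size_bytes": 1184,
    # ...but nothing about what was said.
}

content_record = {
    **metadata_record,
    # The analytically decisive detail lives only in the content.
    "body": "the marriage is ready flour and oil",
}
```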

Good intelligence work involves the accumulation of discrete pieces of information that one knows to be credible and significant. It is a process of building a pool of knowledge, and it is what drives most intelligence successes. It does not fundamentally involve whittling away huge quantities of data, the vast majority of which we know beforehand to be useless. To the extent that analytic work does require whittling down vast amounts of useless data, that is a drawback: it makes the job harder and increases the likelihood that important pieces of intelligence will be missed.

Consider the case of Najibullah Zazi, who was convicted of plotting an attack on the New York subway. Zazi was identified, and the plot was thwarted, after he sent an email to a contact in Pakistan stating, “the marriage is ready flour and oil.” The email was in code. The email address for the Pakistani associate was already on the NSA Section 702 target deck and associated with a known senior al-Qaeda member. This is what tipped off NSA, which then passed information to the FBI.

Now imagine that NSA had implemented its Section 702 authorities as a bulk collection program. It could have, for example, designed a system to collect every email sent from someone in the United States to someone in Pakistan. That implementation would have guaranteed that NSA collected the email Zazi sent to his associate in Pakistan, but the odds of any analyst actually finding that email would have dropped significantly. The analyst would have had to construct a search query knowing beforehand that marriage, flour, and oil might be relevant terms, and even then the task of narrowing the pool of relevant emails down to the one that mattered would be daunting. For this reason, it seems likely that a bulk collection program would have resulted in a successful terrorist attack instead of a thwarted one. Only after that successful attack on the New York subway would the intelligence time machine allow authorities to quickly search their Section 702 traffic and find Zazi’s email.
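
To get a feel for the arithmetic, consider a toy simulation. All of the addresses, message bodies, and volumes below are invented; the sketch only illustrates how a keyword query over a bulk corpus buries the one email that matters, while a selector match against an existing target deck surfaces it immediately:

```python
import random

random.seed(0)

# Hypothetical bulk corpus: every email sent from the US to Pakistan.
# All addresses, bodies, and volumes are invented for illustration.
TARGET_DECK = {"aq-contact@example.pk"}  # selector already under coverage

bodies = ["family marriage plans", "grocery list: flour and oil",
          "travel itinerary", "meeting notes"]
corpus = [(f"user{i}@example.com", f"recipient{i}@example.pk",
           random.choice(bodies)) for i in range(100_000)]
corpus.append(("zazi@example.com", "aq-contact@example.pk",
               "the marriage is ready flour and oil"))

# Bulk approach: the analyst must guess the code words in advance, and
# the query still returns tens of thousands of candidates to triage.
keyword_hits = [m for m in corpus
                if any(w in m[2] for w in ("marriage", "flour", "oil"))]
print(len(keyword_hits))   # roughly 50,000 emails, one of which matters

# Targeted approach: one selector match against the target deck.
selector_hits = [m for m in corpus if m[1] in TARGET_DECK]
print(len(selector_hits))  # 1
```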

Conclusions. The NRC committee essentially concluded that intelligence time machines make sense in theory, but it reached that conclusion by ignoring the details of how intelligence analysis actually works. As in the business of intelligence itself, the details here should matter, and the reality is that intelligence time machines fail in practice. Bulk collection programs are less effective than targeted ones, a conclusion that runs counter to the NRC’s analysis. Worse, bulk collection efforts make targeted programs less valuable by burying good intelligence within larger volumes of useless information. Reliance on intelligence time machines may thus hurt the IC more than it helps, ensuring that analysts have the information they need to disrupt the next terrorist attack but find that information only after the attack has occurred.