With high-profile hacks in the headlines and government officials trying to reopen a long-settled debate about encryption, information security has become a mainstream issue. But we feel that one element of digital security hasn’t received enough critical attention: the role of government in acquiring and exploiting vulnerabilities and hacking for law enforcement and intelligence purposes. That’s why the Electronic Frontier Foundation (EFF) recently published some thoughts on a positive agenda for reforming how the government obtains, creates, and uses vulnerabilities in our systems for a variety of purposes, from overseas espionage and cyberwarfare to domestic law enforcement investigations.
Some influential commentators like frequent Lawfare contributor Dave Aitel have questioned whether we at EFF should be advocating for these changes, because pursuing any controls on how the government uses exploits would be “getting ahead of the technology.” But anyone who follows our work should know we don’t call for new laws lightly.
To be clear: We are emphatically not calling for regulation of security research or exploit sales. Indeed, it’s hard to imagine how any such regulation would pass constitutional scrutiny. We are calling for a conversation around how the government uses that technology. We’re fans of transparency; we think technology policy should be subject to broad public debate heavily informed by the views of technical experts. The agenda EFF outlined calls for exactly that.
There’s reason to doubt anyone who claims that it’s too soon to get this process started.
Consider the status quo: The FBI and other agencies have been hacking suspects for at least 15 years without real, public, and enforceable limits. Courts have applied an incredible variety of ad hoc rules around law enforcement’s exploitation of vulnerabilities, with some going so far as to claim that no process at all is required. Similarly, the government’s (semi-)formal policy for acquisition and retention of vulnerabilities—the Vulnerabilities Equities Process (VEP)—was apparently motivated in part by public scrutiny of Stuxnet (widely thought to have been developed at least in part by the US government) and the government’s long history of exploiting vulnerabilities in its mission to disrupt Iran’s nuclear program. Of course, the VEP sat dormant and unused for years until after the Heartbleed disclosure. Even today, the public has seen the policy, in redacted form, only thanks to FOIA litigation by EFF.
The status quo is unacceptable.
If the Snowden revelations taught us anything, it’s that the government is in little danger of letting law hamstring its opportunistic use of technology. Nor is the executive branch shy about asking Congress for more leeway when hard-pressed. That’s how we got the Patriot Act and the FISA Amendments Act, not to mention the impending changes to Federal Rule of Criminal Procedure 41 and the endless encryption “debate.” The notable and instructive exception is the USA Freedom Act, the first statute substantively limiting the NSA’s power in decades, born out of public consternation over NSA mass surveillance.
So let’s look at some of the arguments for not pursuing limits on the government’s use of particular technologies here.
On vulnerabilities, the question is whether the US should have any sort of comprehensive, legally mandated policy requiring disclosure in some cases where the government finds, acquires, creates, or uses vulnerabilities affecting the computer networks we all rely on. That is, should we take a position on whether it is beneficial for the government to disclose vulnerabilities to those in the security industry responsible for keeping us safe?
In one sense, this is a strange question to be asking, since the government says it already has a considered position, as described by White House Cybersecurity Coordinator Michael Daniel: “[I]n the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Other knowledgeable insiders—from former National Security Council Cybersecurity Directors Ari Schwartz and Rob Knake to President Obama’s hand-picked Review Group on Intelligence and Communications Technologies—have also endorsed clear, public rules favoring disclosure.
But Aitel says all those officials are wrong. He argues that we as outsiders have no evidence that disclosure increases security. To the contrary, Aitel says it’s a “fundamental misstatement” and a “falsehood” that vulnerabilities exploited by the government might overlap with vulnerabilities used by bad actors. “In reality,” he writes, “the vulnerabilities used by the US government are almost never discovered or used by anyone else.”
If Aitel has some data to back up his “reality,” he doesn’t share it. And indeed, in the past, Aitel himself has written that “bugs are often related, and the knowledge that a bug exists can lead [attackers] to find different bugs in the same code or similar bugs in other products.” This suggests that coordinated disclosure by the government to affected vendors wouldn’t just patch the particular vulnerabilities being exploited, but rather would help them shore up the security of our systems in new, important, and possibly unexpected ways. We already know, in non-intelligence contexts, that “bug collision,” while perhaps not common, is certainly a reality. We see no reason, and commentators like Aitel have pointed to none, that exploits developed or purchased by the government wouldn’t be subject to the same kinds of collision.
In addition, others with knowledge of the equities process, like Knake and Schwartz, are very much concerned about the risk of these vulnerabilities falling into the hands of groups “working against the national security interest of the United States.” Rather than sit back and wait for that eventuality—which Aitel dismisses without showing his work—we agree with Daniel, Knake, Schwartz, and many others that the VEP needs to put defense ahead of offense.
Democratic oversight won’t happen in the shadows
Above all, we can’t have the debate all sides claim to want without a shared set of data. And if outside experts are precluded from participation because they don’t have a TS/SCI clearance, then democratic oversight of the intelligence community doesn’t stand much chance.
On its face, the claim that vulnerabilities used by the US are in no danger of being used by others seems particularly weak when combined with the industry’s opposition to “exclusives,” clauses accompanying exploit purchase agreements giving the US exclusive rights to their use. In a piece last month, Aitel’s Lawfare colleague Susan Hennessey laid out her opposition to any such requirements. But we know, for instance, that the NSA buys vulnerabilities from the prolific French broker/dealer Vupen. Without any promises of exclusivity from sellers like Vupen, it’s implausible for Aitel to claim that exploits the US purchases will “almost never” fall into others’ hands.
Suggesting that no one else will happen onto exploits used by the US government seems overconfident at best, given that vulnerability collisions are well-documented in the wild. And if disclosing vulnerabilities will truly burn “techniques” and expose “sensitive intelligence operations,” that seems like a good argument for formally weighing the equities on both sides on an individualized basis, as we advocate.
In short, we’re open to data suggesting we’re wrong about the substance of the policy, but we’re not going to let Dave Aitel tell us to ‘slow our roll.’ (No disrespect, Dave.)
Our policy proposal draws on familiar levers—public reports and congressional oversight. Even those who say that the government’s vulnerability disclosure works fine as is, like Hennessey, have to acknowledge that there’s too much secrecy. EFF shouldn’t have had to sue to see the VEP in the first place, and we shouldn’t still be in the dark about certain details of the process. As recently as last year, the DOJ claimed under oath that merely admitting that the US has “offensive” cyber capabilities would endanger national security. Raising the same argument about simply providing insight into that process is just as unpersuasive to us. If the government truly does weigh the equities and disclose the vast majority of vulnerabilities, we should have some way of seeing its criteria and verifying the outcome, even if the actual deliberations over particular bugs remain classified.
Meanwhile, the arguments against putting limits on government use of exploits and malware—what we referred to as a “Title III for hacking”—bear even less scrutiny.
The FBI’s use of malware raises serious constitutional and legal questions, and the warrant issued in the widely publicized Playpen case arguably violates both the Fourth Amendment and Rule 41. Further problems arose at the trial stage in one Playpen prosecution when the government refused to disclose all evidence material to the defense, because it “derivatively classified” the exploits used by the FBI. The government would apparently prefer dismissal of prosecutions to disclosure, under court-supervised seal, of exploits that would reveal intelligence sources and methods, even indirectly. Thus, even where exploits are widely used for law enforcement, the government’s policy appears to be driven by the Defense Department, not the Justice Department. That ordering of priorities is incompatible with prosecuting serious crimes like child pornography. Hence, those who ask us to slow down should recognize that the alternative to a Title III for hacking is actually a series of court rulings putting a stop to the government’s use of such exploits.
Adapting Title III to hacking is also a case where public debate should inform the legislative process. We’re not worried about law enforcement and the intelligence community advocating for their vision of how technology should be used. But given calls to slow down, we are very concerned that there also be input from the public, especially the technology experts charged with defending our systems—not just exploit developers with Top Secret clearances.