The Department of Justice announced recently that the FBI had unilaterally removed malicious web shells from hundreds of private systems. These shells were the remnants of a major security problem that emerged earlier, in March, in Microsoft Exchange Server software. Hackers linked to the Chinese government exploited at least four zero-day vulnerabilities in Microsoft’s code that allowed remote access to sensitive data. The web shells were left behind to facilitate later exploitation of the infected systems. The White House and Microsoft urged the machine owners to patch the underlying vulnerabilities and to remove the web shells, but not everyone did.
On Friday, April 9, the FBI secretly asked a federal magistrate judge in Texas to issue a warrant allowing the Bureau, without prior notice, to access, copy, and remove the web shells from “hundreds of vulnerable computers in the United States running on-premises versions of Microsoft Exchange Server software used to provide enterprise-level e-mail service.” The following Tuesday, April 13, DOJ issued a press release announcing that the operation had been completed. The FBI’s attempt to fix these systems appears to have been successful, although no accurate, detailed accounting of the results of this hack-to-patch campaign is available. Much of the punditry has been favorable: The action was “bold and innovative” and a “practical response to a serious problem.” And the positive aspects of this sort of government intervention are obvious: “Hacks to patch” can close vulnerabilities, reduce cyber risk, and provide expert assistance to organizations that might lack the capability to protect their own systems.
Here, we make the counterargument – namely, that the FBI’s hack-to-patch approach is a harmful practice at the enterprise level and that a dangerous precedent has been set. There are many negative technical, security, and policy consequences to the hack-to-patch approach. Moreover, we believe the proffered justifications for this particular government intervention are slight, which leads us to fear more ambitious hack-to-patch operations in the future. From an information security perspective, this is a troubling prospect.
Hack-to-Patch has Serious Information Security Flaws at the Enterprise Level
To many, the idea that the FBI would undertake a patching operation is surprising. At the individual-user level, however, we were warned that efforts like this were in the offing. Specifically, the FBI sought a “botnet warrant” under a provision of the Rules of Criminal Procedure that became effective at the end of 2016. Although the new provision does not actually mention botnets, the Justice Department itself came up with the sobriquet, and the only example it offered to justify the provision’s adoption was the need to allow a single judge to oversee a botnet investigation that would likely be nationwide in scope. DOJ also made clear that it might use its new “seizure” authority to take “needed action to liberate computers infected with malware.”
Whatever the efficacy of “liberating” the machines of “thousands of Americans” enduring a botnet attack, the FBI’s recent Exchange operation was at an entirely different level. The infected Exchange Server software is an “enterprise level” product used by businesses to run their own on-premises email service. Therefore, the FBI’s first hack-to-patch foray into the enterprise ecosystem needs to be judged by the information security practices applicable to businesses, not individuals. While it is tempting for highly skilled security teams to try, benevolently, to fix other organizations’ security issues, the effort produces negative information security consequences. There are two principal areas of concern. First, patching by exploit contravenes basic enterprise security practices. Second, the risk of collateral damage at the enterprise level is too great.
The Problems with Secret Patching
Although we do not know how the FBI accessed the machines covered by the warrant – the information remains redacted in the unsealed application – we can assume that it exploited some vulnerability to do so. Regardless of the FBI’s motivations, this is a hack, and one that will only confuse modern enterprise cybersecurity defenses. For instance, it is common today for enterprise cyber defenses to deploy machine learning tools that collect data about ongoing activity. These systems are designed to match observed behavior against learned models to try to determine what is occurring. Readers might be familiar with advanced computer vision tools that can make sense of photos (e.g., recognize pictures of cats). Security-focused machine learning tools do much the same thing with evidence of malicious breaches.
When external entities engage in vulnerability exploitation that is largely consistent with malicious hacking behavior, they confuse these learning tools. That is, the tools will likely classify any observed law enforcement hack-to-patch activity as a malicious attack. Yes, the models might be tuned to account for this, but that introduces a new complication to an already complex defensive task. For this reason, any organization running a bug bounty or crowdsourced security-testing program already understands that its security teams must be given advance notice of any live testing to avoid training problems with machine learning tools. By the same token, law enforcement should not endanger the integrity of deployed machine learning methods through direct vulnerability exploitation.
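To make the concern concrete, here is a minimal sketch, assuming a hypothetical anomaly detector trained on ordinary Exchange-server telemetry, of how such a model would treat a benevolent exploitation session. The feature set, numbers, and thresholds are illustrative assumptions, not a description of any deployed product.

```python
# Sketch: an anomaly detector trained only on "normal" server telemetry scores a
# benevolent hack-to-patch session the same way it scores a malicious intrusion.
# All features and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [requests/min, distinct URIs touched,
# kilobytes written to disk, child processes spawned by the web worker]
normal_sessions = np.column_stack([
    rng.normal(40, 8, 1000),     # steady mail-client request rates
    rng.normal(12, 3, 1000),     # a handful of well-known endpoints
    rng.normal(5, 2, 1000),      # small log/attachment writes
    rng.normal(0.1, 0.05, 1000), # the web worker almost never spawns children
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A session that posts to an unusual path, writes files, and runs commands,
# whether the actor is a criminal or an agent removing a web shell.
exploit_like_session = np.array([[300, 45, 80, 3]])

print(model.predict(exploit_like_session))            # -1 => flagged as anomalous
print(model.decision_function(exploit_like_session))  # strongly negative score
```

The point is not this particular model but the outcome: the detector has no basis for distinguishing a court-authorized delete command from a criminal one, so every such session either pollutes the training signal or triggers an alert.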
Hack-to-patch can also confuse the target’s human security teams. The live exploitation of vulnerabilities by benevolent groups is the basis for modern penetration testing services. The theory is that by unleashing a team of so-called white hat hackers against a target, the likelihood increases that exploitable holes will be detected before malicious hackers find them. This process has been successful enough that it has expanded into related services such as breach and attack simulation and bug bounties, where third parties are paid, subject to strict guidelines, to identify system vulnerabilities and report them to the target.
Practitioners have been careful, however, to define and enforce clear boundaries for these testing services. They must never be destructive, for example, and – as the National Institute of Standards and Technology suggests – they must be performed with the security team’s knowledge that such testing is ongoing. Certainly, the specifics of a given test will not be known in advance, but security teams must understand that evidence of a probe might be coming from a benevolent test source. If advance notice is not provided, expensive and time-consuming incident response activities might be initiated. Consider, for example, that the FBI’s hack-to-patch work might easily have tripped some threshold indicator that in turn caused the security team to mount a response. This would not be a “false” alarm, as the company’s security perimeter was in fact breached, but galvanizing an entity’s security team in order to patch a system is an unacceptable cost of the operation.
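By way of illustration, many defenders encode advance notice of sanctioned testing directly into their alert triage. The sketch below is a simplified, hypothetical example (the source networks, test window, and alert fields are all assumptions) of why an announced test can be routed to verification instead of a full incident, a benefit an unannounced government hack can never receive.

```python
# Sketch of triage logic that only works when testers give advance notice.
# Networks, window, and alert fields are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from ipaddress import ip_address, ip_network

# Announced ahead of time by the red team or bug-bounty program.
ANNOUNCED_TEST_SOURCES = [ip_network("198.51.100.0/24")]
TEST_WINDOW = (datetime(2021, 4, 9, tzinfo=timezone.utc),
               datetime(2021, 4, 12, tzinfo=timezone.utc))

@dataclass
class Alert:
    source_ip: str
    observed_at: datetime
    description: str

def triage(alert: Alert) -> str:
    in_window = TEST_WINDOW[0] <= alert.observed_at <= TEST_WINDOW[1]
    from_tester = any(ip_address(alert.source_ip) in net
                      for net in ANNOUNCED_TEST_SOURCES)
    if in_window and from_tester:
        return "log-and-verify"   # sanctioned test: no full incident response
    return "open-incident"        # unannounced exploitation: page the team

unannounced_alert = Alert("203.0.113.7",
                          datetime(2021, 4, 10, tzinfo=timezone.utc),
                          "web shell accessed, command issued")
print(triage(unannounced_alert))  # "open-incident"
```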
The Risk of Collateral Damage
Perhaps the most compelling reason law enforcement should not be performing hack-to-patch activities is the high potential for unintended collateral damage. Whenever live testing is performed on production systems, practitioners know that most of the work goes into ensuring that no outages, degradations, performance problems, or data leaks occur. Such assurance can never be absolute, but testers are obliged to make it their goal. This collaboration between the benevolent hacker and the system owners is, of course, missing in a secret hack like the FBI’s web shell effort.
The collateral damage can be significant. Consider, for example, that when the Blaster worm of 2003 ravaged systems all over the Internet, a second worm, later called Nachi, was developed with the intention of finding infected systems and patching them with the fix available from Microsoft’s update site. This early attempt at hack-to-patch ultimately caused more harm than good: the ping (ICMP) traffic Nachi used to find infected systems flooded networks so badly that it did more damage than Blaster itself. Even a more passive approach, in which law enforcement scans for the vulnerability and later notifies the owners, carries the same kind of collateral risk seen with the Nachi worm.
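A back-of-the-envelope calculation, using assumed figures rather than measurements from 2003, shows why unthrottled discovery scanning is itself a hazard:

```python
# Back-of-the-envelope sketch of why unthrottled discovery scanning hurts.
# All numbers are illustrative assumptions, not measurements from the Nachi era.
infected_hosts  = 50_000   # machines each running the scanning code
probes_per_sec  = 200      # ICMP echo requests sent by each host
bytes_per_probe = 92       # assumed size of one echo request on the wire

aggregate_bps = infected_hosts * probes_per_sec * bytes_per_probe * 8
print(f"Aggregate probe traffic: {aggregate_bps / 1e9:.1f} Gbit/s")
# Roughly 7.4 Gbit/s of pure scanning overhead, before any replies or
# retransmissions: more than enough to saturate enterprise WAN links.
```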
In its warrant request, the FBI assured the magistrate judge that the operation posed no risk. In an “internal FBI test process,” the delete command successfully removed the web shell and “did not impact other files or services of the computer.” And an “outside expert” was given a “briefing” to “ensure the code would not adversely affect the victim computers and Microsoft Exchange Server software running on such computers.” We are skeptical that this sort of process is sufficient to avoid collateral risk, particularly if, in future operations, the number of machines the FBI attempts to patch grows beyond the few hundred at issue here, or if the patching mission becomes more ambitious.
We are not unmindful of the necessity arguments that may be made to justify the FBI’s efforts: Absent hack-to-patch, as the FBI told the magistrate, these machines would likely remain vulnerable to exploitation “because the web shells are difficult to find due to their unique file names and paths or because these victims lack the technical ability to remove them on their own.” The FBI also argued that stealth was required because prior notice “would likely seriously jeopardize the ongoing investigation.” Still, we are skeptical on both counts.
Taking extraordinary steps to hack these private machines to rid them of their web shells did not render them secure. To the contrary, we know that these machines remain vulnerable despite the FBI’s hack because the DOJ clearly stated that the operation “did not patch any Microsoft Exchange Server zero-day vulnerabilities or search for or remove any additional malware or hacking tools that hacking groups may have placed on victim networks by exploiting the web shells.” Given our stance on hack-to-patch, we of course applaud the FBI’s restraint, but if the end game was the security of these computers, this was a small victory.
Similarly, the “endanger the investigation” explanation for not even attempting to alert the machine owners prior to the exercise is weak. By the time the FBI sought permission for the operation, there was little secrecy surrounding the Exchange hack. The machines still burdened with web shells were apparently identifiable by a “public scan,” according to the FBI’s affidavit. Still, the FBI worried that prior notice would get back to the bad guys, who might “make changes to the web shells before FBI personnel can act . . . which would enable persistent access, further exploitation of the victims, and defeat the efforts of FBI personnel to identify victims and delete web shells.” These problems would arise, the FBI said, if notice were given to the “public at large” or to “individual users” of compromised Exchange servers.
Whatever “danger to the investigation” might come from giving notice to large groups of people with no information security relationship to the compromised machines, this is a disingenuous argument for not giving notice to the machine owners. The applicable notice rule requires the FBI to provide a copy of the warrant to “the person whose property was searched or who possessed the information that was seized or copied,” not to the public at large or to enterprise email users. The FBI made no effort to show that giving advance notice to the machine owners would endanger the “investigation.”
The FBI could have alerted the owners and asked them to take the necessary steps to remove the web shells, and perhaps gone one better by suggesting that the owners use Microsoft’s “one-click” mitigation tool to address the underlying Exchange vulnerabilities. Indeed, in one of the few previously disclosed uses of a “botnet warrant,” the FBI used its authority to deploy machines appearing to be infected with the peer-to-peer Joanap malware in order to identify “numerous unprotected computers that hosted the malware underlying the botnet.” While the FBI kept its mapping operation secret for nearly six months, its subsequent “effort to eradicate” the botnet involved notifying the owners of still-infected machines so that they could take appropriate steps.
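For a sense of what an owner-notification alternative could look like, here is a minimal sketch of the advisory step; the host list, contacts, mail relay, and wording are hypothetical assumptions, not details drawn from the Joanap or Exchange operations.

```python
# Sketch of a notify-the-owner workflow: given a list of still-affected servers
# and a contact for each, send an advisory rather than patching by hack.
# Hosts, contacts, relay, and message text are hypothetical.
import smtplib
from email.message import EmailMessage

AFFECTED = [
    # (hostname, registered technical contact), e.g., assembled from scan
    # results plus WHOIS or customer records. Assumed inputs.
    ("mail.example-victim.com", "security@example-victim.com"),
]

ADVISORY = (
    "Our scans indicate your on-premises Microsoft Exchange server still hosts "
    "a web shell left by the March 2021 intrusions. Please remove it and apply "
    "Microsoft's mitigation tool or the relevant security updates."
)

def notify(relay_host: str = "localhost") -> None:
    with smtplib.SMTP(relay_host) as smtp:
        for hostname, contact in AFFECTED:
            msg = EmailMessage()
            msg["From"] = "advisories@agency.example"  # hypothetical sender
            msg["To"] = contact
            msg["Subject"] = f"Compromised Exchange server: {hostname}"
            msg.set_content(ADVISORY)
            smtp.send_message(msg)

if __name__ == "__main__":
    notify()
```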
Summary
We believe that hack-to-patch initiatives, such as the FBI’s operation against Microsoft Exchange Server web shells, should be considered harmful, not beneficial. While this hack-to-patch effort was of modest scope, we worry that the government’s ambitions in this area will only grow. Continuing this type of activity for future vulnerabilities will exacerbate the technical, security, and policy problems we have noted. Fighting capable adversaries is already tough. Adding benign hacking to the mix will make it tougher. In the end, law enforcement should recognize that its role is not system administration but the maintenance of public safety.