Yesterday, Bloomberg News reported that hackers, likely from Russia, caused a 2008 explosion on the Baku-Tbilisi-Ceyhan (BTC) oil pipeline in Turkey. According to Bloomberg, the BTC pipeline attack “Opened [a] New Cyberwar Era,” two years before the Stuxnet worm damaged Iranian nuclear centrifuges. The report is significant because it moves back the timeline for alleged state-sponsored cyber attacks that caused destruction in the physical world. (I use “attack” throughout this post in the colloquial sense, without reference to whether an “attack” is an “armed attack” for purposes of international law.)
But the pipeline explosion report also highlights another important issue. It took six years for the explosion to be publicly revealed as a cyber attack, and confusion about whether an incident is an accident or a cyber attack may be a common problem going forward. Although a lot of attention focuses on cybersecurity attribution as a question of who carried out an intrusion, the BTC explosion exemplifies an analytically prior attribution question: what caused an incident, a cyber attack or a simple malfunction?
Confusion about and delay in resolving the “what” attribution question provide an extra layer of insulation between states and international responsibility for their actions. Both the attack and the attacker must be identified before international law can be applied to the attacking and victim states. International lawyers can analyze the application of law to facts, but only if they first know the facts.
Sometimes the “what” attribution question is easy, and sometimes it is not. The (thankfully-still-relatively-short) list of destructive cyber actions provides examples of both.
On the easy side, some cyber incidents are readily identified. In August 2012 incidents believed to be the work of Iran, the Shamoon virus wiped data from more than 30,000 computers at Saudi Aramco and a Qatari company, Ras Gas, rendering them inoperable. The attack on Aramco replaced data “with an image of a burning American flag.” Similarly, the recent hack of Sony Pictures involved images displayed on company computers of “a neon red skull and a message proclaiming that the company had been hacked by ‘#GOP,’ said to stand for ‘Guardians of Peace,’” as well as the public release of forthcoming movies and internal company data. In these instances, the fact of a cyber intrusion was clear, even if the attribution to a particular source was not immediately evident. (Debate continues over whether North Korea was responsible for the Sony hack.)
On the other hand, the BTC attack and the Stuxnet worm used against Iran show that recognizing a cyber attack can be more difficult—sometimes by design. In 2012, David Sanger reported on the confusion that early iterations of Stuxnet triggered in Iranian nuclear facilities:
The first attacks were small, and when the centrifuges began spinning out of control in 2008, the Iranians were mystified about the cause, according to intercepts that the United States later picked up. “The thinking was that the Iranians would blame bad parts, or bad engineering, or just incompetence,” one of the architects of the early attack said.
The Iranians were confused partly because no two attacks were exactly alike. Moreover, the code would lurk inside the plant for weeks, recording normal operations; when it attacked, it sent signals to the Natanz control room indicating that everything downstairs was operating normally. “This may have been the most brilliant part of the code,” one American official said.
Later, word circulated through the International Atomic Energy Agency, the Vienna-based nuclear watchdog, that the Iranians had grown so distrustful of their own instruments that they had assigned people to sit in the plant and radio back what they saw.
“The intent was that the failures should make them feel they were stupid, which is what happened,” the participant in the attacks said. When a few centrifuges failed, the Iranians would close down whole “stands” that linked 164 machines, looking for signs of sabotage in all of them. “They overreacted,” one official said. “We soon discovered they fired people.”
Imagery recovered by nuclear inspectors from cameras at Natanz — which the nuclear agency uses to keep track of what happens between visits — showed the results. There was some evidence of wreckage, but it was clear that the Iranians had also carted away centrifuges that had previously appeared to be working well.
The Bloomberg report suggests that there was similar confusion—at least for an unspecified period—about the cause of the BTC explosion: the Turkish government “blamed a malfunction,” and BP, the majority owner of the pipeline, noted in its annual report that the pipeline was shut down because of a fire. Only after further investigations into why cameras and other automated monitoring systems did not alert pipeline staff that something was amiss did the cyber attack component of the explosion become clear, apparently because of a slip-up by the perpetrators. As Bloomberg explains:
Although as many as 60 hours of surveillance video were erased by the hackers, a single infrared camera not connected to the same network captured images of two men with laptop computers walking near the pipeline days before the explosion, according to one of the people, who has reviewed the video. The men wore black military-style uniforms without insignias, similar to the garb worn by special forces troops.
Bloomberg then notes, “[i]nvestigators compared the time-stamp on the infrared image of the two people with laptops to data logs that showed the computer system had been probed by an outsider. It was an exact match, according to the people familiar with the investigation.”
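The forensic step Bloomberg describes is, at its core, a time-stamp correlation: line up the moment the infrared camera captured the two men with the moments the network logs show an outside probe. As a purely illustrative sketch (the log format, field names, timestamps, and tolerance window below are my assumptions, not details from the investigation), the matching logic might look something like this:

```python
from datetime import datetime, timedelta

# Hypothetical illustration only: the timestamps, log entries, and
# one-minute tolerance are placeholders, not data from the BTC case.

# Time-stamp recovered from the infrared camera frame.
camera_frame_time = datetime(2008, 8, 1, 22, 15, 0)

# Assumed excerpt of network logs: (timestamp, event description).
network_log = [
    (datetime(2008, 8, 1, 21, 40, 0), "failed login on camera management server"),
    (datetime(2008, 8, 1, 22, 15, 0), "external probe of pipeline control network"),
    (datetime(2008, 8, 2, 1, 5, 0), "alarm traffic suppressed"),
]

def matching_events(frame_time, log, tolerance=timedelta(minutes=1)):
    """Return log entries whose time-stamps fall within `tolerance` of the camera frame."""
    return [(t, event) for t, event in log if abs(t - frame_time) <= tolerance]

for t, event in matching_events(camera_frame_time, network_log):
    print(f"{t.isoformat()}  {event}")
# A match ties the physical presence captured on camera to the network intrusion,
# which is what Bloomberg's sources describe as "an exact match."
```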
The difficulty of the “what” attribution question—determining whether an incident is a simple malfunction or a cyber attack—creates several potentially problematic consequences.
- Increasing fear. Any future pipeline explosion will be considered a potential attack, as will any future blackout, dam malfunction, or sudden stock market dive. And they should be.
- Creating exploitable confusion. The ambiguity about whether an incident is a malfunction or a cyber attack may allow states to get away with aggressive actions that they could not undertake through conventional means without provoking a response. This may allow states to avoid conventional conflicts in some circumstances, but in others, it may merely delay conflict or retribution until the attack is revealed, while preventing any assessment of whether states’ actions were lawful.
- Encouraging cyber attacks. If states perceive that cyber actions will be recognized only after a delay or not at all and that (in part because of the delayed recognition) the consequences for the attacking state are minimal, they may be more likely to undertake aggressive actions in the first place or in retaliation for attacks they have sustained.
At this point, mitigating these consequences is a technical issue, not a legal one. Mitigation requires faster recognition of cyber attacks as cyber attacks—and conversely, faster identification of malfunctions as malfunctions. With the proliferation of private cybersecurity firms with substantial forensic capabilities, in addition to the capacities of government investigators, the six years it took to publicize the cause of the BTC pipeline explosion may be the high-water mark for delays in identifying and publicizing cyber attacks going forward.