The world’s most important facilities—think massive hydroelectric dams and nuclear power plants—are vulnerable to devastating cyberattacks. And it may be just a matter of time before someone gets hurt.
But nobody panic.
That’s the overwhelming takeaway from new research set to be unveiled at the Black Hat cybersecurity conference in Las Vegas next week. The researchers have already gained the attention of major industries, but it remains unclear whether those industries will fix the problems before it’s too late.
The vulnerabilities can lead to events reminiscent of the 2010 Stuxnet attack on Iranian nuclear facilities or the 2014 cyberattack on a German steel mill. These attacks were the first in which purely digital weapons caused physical damage to their targets. Stuxnet shut down a wide swath of Iran’s nuclear facilities, while the 2014 attack caused “massive” damage at the German plant when operators were unable to shut down a blast furnace.
“Anything that the facility is capable of in its natural operating system, you’re [an attacker] capable of doing—and doing damage with if you control the network,” Robert Lee, a security researcher and active-duty U.S. Air Force Cyber Warfare Operations Officer, told the Kernel.
“With a power station, you can have major repercussions. With a hydroelectric dam, if you don’t monitor processes in a normal situation, it’ll spin out of control. Everything you have can be manipulated.”
Funded by security firm IOActive, the research came out of a collaboration between Lee, risk researcher Éireann Leverett, and IOActive security consultant Colin Cassidy, who sought to find and study vulnerabilities in industrial Ethernet switches (IES). The group has been working with at least four industrial switch vendors—Siemens, General Electric, Opengear, and Garrettcom—to disclose and fix, as best they can, the exploits in question. Except for Opengear, the vendors did not respond to requests for comment.
While these companies are working to fix the problem, the actual process of patching the switches can take several years and piles of money to accomplish, leaving large numbers of industrial facilities open to attacks on their network today. A major part of the researchers’ work has been developing mitigation techniques to defend against IES attackers even before a patch is implemented.
The vulnerabilities of industrial switches covered in the new research include the widespread use of default passwords, hard-coded encryption keys, and a lack of proper authentication for firmware updates. These three fundamental security failures in combination make it easier for attackers to gain access to industrial devices and networks, change what they please, and take control.
In some cases, remote attackers are able to download all the diagnostics and configurations to which only administrators should have access. Forged session IDs, cross-site scripting, and cross-site request forgery allow a hacker to do things like surreptitiously create a brand-new administrator account on the switch as a backdoor to control the network.
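The forged-session-ID problem comes down to predictability. As a rough sketch, assuming nothing about any particular vendor’s firmware, a session token derived from predictable state (such as a simple counter) can be recomputed by an attacker who never logs in, while one drawn from a cryptographically secure random source cannot:

```python
# Illustrative contrast between a forgeable session token and a sound one.
# Both schemes here are hypothetical, not taken from any vendor's firmware.
import hashlib
import secrets

def weak_token(counter):
    """Predictable: anyone who can guess the counter can recompute it."""
    return hashlib.md5(str(counter).encode()).hexdigest()

def strong_token():
    """Unpredictable: 128 bits from the operating system's CSPRNG."""
    return secrets.token_hex(16)

# An attacker who notices that sessions are numbered sequentially can
# forge a valid token for someone else's session without logging in:
server_side = weak_token(1042)
attacker_guess = weak_token(1042)
assert server_side == attacker_guess
```

With a token like the weak one above, “forging” a session is just arithmetic; the switch has no way to tell the guess from the real thing.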
Backdoors also exist in the form of hidden accounts originally created for maintenance, which can provide cover for attackers. In particularly insecure facilities, antiquated and unencrypted Internet connections that give engineers remote access to their networks also give an attacker anywhere in the world a path into those networks.
“All these vulnerabilities are pervasive and endemic,” Leverett explained. “Most vendors haven’t done the basics.”
The vendors themselves explain the problems as a result of outdated thinking and technology.
“A lot of it comes down to the mindset when this [equipment] was installed four or eight years ago,” Robert Waldie, an Opengear engineer who continues to liaise with the researchers, said in a phone interview. “There was a lot less emphasis on hardening and ongoing auditing on this kind of equipment. They were just part of the background infrastructure. But now there’s a realization that these are significant entry points into critical infrastructure. Anecdotally, I think there’s a lot of vulnerable equipment out there.”
Moreover, the vulnerabilities disclosed are far from an exhaustive list.
“Everything they look for, they find,” Lee said, describing the research efforts of his colleagues. “The specific vulnerabilities get attention; but the fact is, whenever Éireann and Colin look for something, they find it. It doesn’t take them as long as it should to find these things. A lot of these are low-hanging fruit that have huge impacts.”
Incredibly, it can take up to three years to fully fix any given problem. The process is slow and costly.
First, the researchers notify the federal government or another third party of the vulnerability, and then the vendors themselves. It can take switch vendors like General Electric or Siemens anywhere from three to eight months to issue patches that sometimes only fix a portion of the problem. Even once a patch exists, research shows that industrial facilities often don’t actually implement it for up to 18 months afterward.
Implementing a patch on critical hardware like an IES involves jumping through numerous hoops with management and then bringing the entire network down, which can cost thousands or millions of dollars every hour, according to the researchers. For that reason, many of these facilities patch no more than once a year, and the gap is often much longer.
“It’s been a mixed jar in terms of speed,” Waldie said. “A lot of industrial systems are very slow-moving, and paradoxically, the data center is quite a slow-moving place in terms of adoption for new software and things like that, especially for embedded-type systems. What we’ve found is the best approach is to disclose as quickly as possible for severe issues and to include mitigation instructions.”
But even before a patch, there’s plenty to do to defend against attacks.
“In some cases, mitigations are in place if it’s a configuration item that’s not correctly set,” Cassidy told the Kernel. “The most secure deployment [of switches] isn’t always the default configuration, and vendors are switching.”
Changing default configuration choices means giving up some backward compatibility, a difficult choice made easier as research reveals how potent a cyberattack can be.
Industry system administrators must “look at all the user accounts on switches and disable the ones not being used,” Cassidy explained. They must also “swap out all the hard-coded encryption keys [and] change default passwords.”
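The checklist Cassidy outlines lends itself to automation. Here is a minimal sketch of such an audit run against a parsed switch configuration; the config shape, field names, and known-bad key fingerprint are all hypothetical placeholders, since real audits would read each vendor’s own config export format:

```python
# Hypothetical hardening audit in the spirit of Cassidy's checklist:
# disable unused accounts, replace hard-coded keys, change default
# passwords. The config structure and fingerprint are illustrative.
KNOWN_HARDCODED_KEY_FINGERPRINTS = {"ab:cd:ef:01"}  # placeholder value

def hardening_findings(config):
    """Return a list of remediation items for one switch configuration."""
    findings = []
    for account in config["accounts"]:
        if not account.get("in_use", False):
            findings.append(f"disable unused account '{account['name']}'")
        if account.get("password_is_default", False):
            findings.append(f"change default password on '{account['name']}'")
    # Vendor-shipped keys are shared across every unit of a product line,
    # so a key matching a known fingerprint must be replaced.
    if config.get("ssl_key_fingerprint") in KNOWN_HARDCODED_KEY_FINGERPRINTS:
        findings.append("replace vendor hard-coded encryption key")
    return findings
```

Run periodically across a fleet of switches, even a crude script like this surfaces exactly the low-hanging fruit the researchers describe.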
Network monitoring is a huge blind spot, according to the researchers. Standard intrusion detection systems keyed to the facility’s user accounts and maintenance can alert a defender, for instance, whenever an unauthorized account is uploading hacked firmware to the switch in order to take control.
“Updating firmware without authentication usually originates from a computer that shouldn’t be accessing the switch anyway in the middle of the day, plus [there’s] a massive flow of data that shouldn’t be there either,” Lee said. “Monitoring can stop that.”
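The kind of monitoring Lee describes can be sketched as a simple allow-list rule over network flow records: large transfers toward the switch from hosts outside the approved engineering set get flagged. The addresses, the byte threshold, and the flow-record shape below are all illustrative assumptions, not any particular facility’s setup:

```python
# Illustrative anomaly rule in the spirit of Lee's example: flag large
# flows toward a switch's management interface from hosts that aren't on
# the engineering allow-list. All values here are hypothetical.
AUTHORIZED_HOSTS = {"10.0.5.10", "10.0.5.11"}   # engineering workstations
FIRMWARE_BYTES_THRESHOLD = 1_000_000            # firmware images are large

def suspicious_flows(flows):
    """Return flows that look like unauthorized firmware uploads.

    Each flow is a dict with 'src', 'dst_port', and 'bytes' keys, as
    might be exported by a NetFlow/IPFIX collector.
    """
    alerts = []
    for flow in flows:
        if (flow["src"] not in AUTHORIZED_HOSTS
                and flow["bytes"] >= FIRMWARE_BYTES_THRESHOLD):
            alerts.append(flow)
    return alerts
```

An industrial network’s traffic patterns are far more regular than an office network’s, which is why even a blunt rule like this can catch the “massive flow of data that shouldn’t be there.”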
But to really fix the problem, the researchers insist, a culture change is required.
“People look for magic bullets—and there are discussions to be had, like [ones about] smarter network monitoring—but with so many things, we already know how to fix them, we already have the settings,” Lee said. “What we don’t have is awareness. There is a massive lack of security awareness in the industrial control systems community.”
The presentation on these vulnerabilities is expected to grab so much of the spotlight at Black Hat that a full press conference is being planned around it.
The catch is that while most hackers have to worry about gaining administrative privileges on their targets, almost anyone on an industrial control system network already has those privileges. Often, merely gaining access puts an attacker in a position to do damage, the researchers say. That’s why the researchers described insecure Internet connections on switches, for instance, as so “unfortunate.”
These devices can be found everywhere: in electrical facilities, food processing plants, manufacturing plants, and onboard ships, transportation facilities, and more. Even the researchers were surprised by how ubiquitous these devices are. They described walking into facilities and, on multiple occasions, discovering vulnerable network switches they didn’t expect to find.
“The downside is that, because of some very strict policies, they generally sense that their security is better than it actually is and they don’t have to worry about these issues,” Lee said. “That’s simply not true.”
While the facilities themselves are big on security measures that directly protect them, the researchers often found paths from the enterprise side of the business into the industrial networks.
“If we look at the type of adversary that would use these vulnerabilities, these are nation-states,” Lee said. “The motive isn’t there for crimeware and hacktivists. If you’re looking to control the entirety of a network for massive espionage or prepare for actual damage; if you’re looking at who targets nuclear centers, you’re generally one of the big players: Russia, China, Iran. Those targets are very interesting to you, to have access to an adversary’s center of gravity.
“The getting-in is some other research, but what they do while they’re in there are these vulnerabilities,” Lee added.
The game isn’t over once an attacker is inside a target’s network. It takes a lot of effort to remotely and stealthily map out the intricacies of industrial networks, which are often uniquely designed. The first step to all of that is gaining network access through the switch.
The fact that it’s so tricky to operate in an industrial network means defenders who know their own networks have an advantage over attackers, one that ought to be multiplied with smart monitoring of data, the researchers said.
When Stuxnet was used to attack Iran, for example, one tactic was to disable the industrial-monitoring systems so that when sabotage occurred, the computers couldn’t alert anyone. From the inside, everything looked normal. The researchers repeatedly pointed out that gaining control of a network can yield that kind of result.
“Look at what happened in the San Bruno pipeline explosion,” Lee said, referring to a 2010 industrial disaster that killed eight people in California.
“When you look at the normal operating conditions in control systems like a pipeline, you can reach situations that will cause the loss of human life. Without even trying to do anything malicious to the process itself, an attacker can be working while alarms are coming back saying that we’re about to reach max pressure, and those packets can be dropped, that traffic denied. Not only can it lead to the loss of human life and physical damage, it can lead to immense difficulties in sending emergency response to the right location because the monitor’s data is compromised.”
In the midst of announcing critical vulnerabilities, the researchers say they are wary of the hysteria that too often takes hold in media coverage of cybersecurity news of this magnitude. While the consequences of these vulnerabilities are severe, they’re not without limits. A cascading power grid failure, a nightmare regularly featured in cyberwar news, is outside the realm of possibility here.
Lee has devoted significant work to debunking hysteria and myths around cyberwar. Earlier this year, when news of Iran’s supposed cyberwar against the United States hit the New York Times, Lee made a detailed argument about why those claims were greatly exaggerated.
“We need less fear in the [security] community,” Lee said. “I’m concerned about people saying, ‘This is how Iran takes down the power grid.’ That’s just not the case.”
A version of this story was originally published on the Daily Dot July 29, 2015.
Photo via J Brew/Flickr (CC BY 2.0) | Remix by Jason Reed