Chernobyl’s lessons for critical-infrastructure cybersecurity
Credit to Author: Seth Rosenblatt | Date: Fri, 21 Jun 2019 15:30:00 +0000
This story originally ran on The Parallax on April 26, 2019.
CHERNOBYL EXCLUSION ZONE, Ukraine—The stray dog looking directly at me was hard to resist. Her ears perked up, her fur appeared clean—free of mange, at any rate—and she held a large stick firmly between her jaws.
She looked like a good dog. She wanted to be petted or, at least, have someone wrestle the stick from her—perhaps throw it so she could retrieve it. But because we met near Cafe Desyatka, in the woods of the Chernobyl Exclusion Zone, a 1,004-square-mile region slightly smaller than Yosemite National Park that surrounds the Chernobyl Nuclear Power Plant, looking but not touching was probably the safer choice. I was less worried about how irradiated she might be than about whether she had rabies.
According to a 2017 estimate by the United States–based nonprofit Clean Futures Fund, this dog was one of hundreds of strays living in the Exclusion Zone, near the power plant, and in the abandoned cities of Chernobyl and Pripyat, which once housed the nuclear facility’s employees. The organization offers aid to communities decimated by industrial accidents, and that includes caring for the Chernobyl dogs, many of whom die young due to malnourishment, disease, predators, harsh weather, a lack of shelter, and Chernobyl’s notorious environmental contamination.
Soviet soldiers forced the owners of the dogs’ ancestors to abandon them in the evacuation that followed the Chernobyl disaster, which began 33 years ago today. Chernobyl, the first nuclear-power plant in Soviet Ukraine, had been operational for nine years. At 1:23 a.m. on April 26, 1986, a series of facility tests initiated the previous day culminated in a catastrophic overpressurization of the reactor’s fuel channels.
During the tests, which were designed to determine how long the turbines would spin after a loss of electricity, an operator decided to carry on with testing procedures amid signs that the reactor was malfunctioning. Two explosions, driven by superheated steam, followed. And smoke from fires spread radioactive material—400 times the amount released at Hiroshima—far beyond Chernobyl.
Two Chernobyl workers were killed immediately by the explosions. Three more died the following week, and by the end of July, radiation exposure had killed 28 people, including six firefighters, according to the World Nuclear Association’s assessment of the Chernobyl disaster, last updated in April 2018. At least 106 others received radiation doses high enough to cause acute radiation sickness.
This was not the first disaster at Chernobyl: A partial meltdown of Chernobyl Reactor 1’s core occurred in 1982, but the event and its impact were concealed from the public.
The official Soviet response was to blame the disaster on facility workers who bungled the tests. This conclusion was supported by the International Atomic Energy Agency in 1992. But Chernobyl’s deputy chief engineer, Anatoly Stepanovich Dyatlov, alleged in a 1992 interview that flaws in the reactor design were the root cause.
Dyatlov’s judgment is supported by a 2002 report for the National Academy of Sciences of Belarus, which investigated the reactor design and how it led to the disaster. It is also supported by a 2009 analysis by the World Nuclear Association:
“The accident at Chernobyl was the product of a lack of safety culture. The reactor design was poor, from the point of view of safety, and unforgiving for the operators, both of which provoked a dangerous operating state. The operators were not informed of this and were not aware that the test performed could have brought the reactor into an explosive condition. In addition, they did not comply with operational procedures. The combination of these factors provoked a nuclear accident of maximum severity in which the reactor was totally destroyed within a few seconds.”
Multiple human factors led to the disaster at Chernobyl, from the Soviet decision to use the flawed reactor system when building the facility in the 1970s to the operators’ actions that fateful hour in 1986. And experts believe that human errors, including a reliance on overly connected or outdated systems to monitor and run critical infrastructure, could be exploited to pose similar threats to today’s nuclear-power facilities.
It remains an open question whether the industry’s collective responses are enough to stave off even the most clever of nation-state cyberattacks. What’s clear is that cyberattacks against all critical-infrastructure operations are on the rise, and nuclear-power facilities are not exempt, according to a 2016 report by the nuclear-security nonprofit Nuclear Threat Initiative.
Heavy regulation of the nuclear-power industry, which extends to cybersecurity protocols, makes nuclear-power facilities more secure than other types of power facilities, cybersecurity experts say. The regulations stem, in part, from lessons learned at Chernobyl. But cyberattacks against US nuclear-power facilities in the 2000s helped push regulators to take a more active role.
Early lessons from the school of cyber-knocks
In 2003, the computer worm SQL Slammer infected more than 75,000 computers around the globe in 10 minutes, including those at the Davis-Besse nuclear-power plant in Ohio. Slammer prevented employees from accessing the software needed to monitor system safety for about five hours.
Thanks to a computer backup system, combined with the reactor being offline for unrelated reasons, the Slammer attack didn’t result in any damage. Instead, says Michael Toecker, the founder of cybersecurity consultancy Context Industrial Security, “the industry took it as a wake-up call. … Everybody in nuclear turned their heads and said, ‘What just happened?’”
There were at least two other cybersecurity incidents at US nuclear-power plants that decade—in 2006, at the Browns Ferry Nuclear Power Plant in Alabama, and in 2008, at the Hatch Nuclear Power Plant in Georgia—before the US Nuclear Regulatory Commission, in 2009, mandated that all nuclear-power facilities have a cybersecurity plan.
A year later, the NRC introduced Regulatory Guide 5.71, a series of “best practices” it developed with input from the Department of Homeland Security, the Institute of Electrical and Electronics Engineers, the International Society of Automation, the National Institute of Standards and Technology, and the Nuclear Energy Institute. Each plant’s plan must hit eight milestones, which include creating a cybersecurity assessment team; identifying different cybersecurity levels and figuring out which devices need to be on each level; and protecting devices from portable media like thumb drives.
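The “levels” idea in that second milestone is easiest to picture in code. The toy sketch below is not RG 5.71’s actual specification; the level numbers, device names, and flow rule are assumptions made for illustration, meant only to capture the general pattern of data moving from more protected zones toward less protected ones, never back.

```python
# Illustrative sketch only: a toy model of assigning devices to defensive
# "security levels" and checking which direction data may flow. The levels,
# device names, and rule here are hypothetical, not RG 5.71's actual text.
from dataclasses import dataclass

@dataclass(frozen=True)
class Device:
    name: str
    level: int  # higher number = more protected (e.g., safety systems)

# Hypothetical asset inventory mapping devices to defensive levels.
INVENTORY = [
    Device("reactor-protection-plc", 4),
    Device("plant-data-historian", 3),
    Device("engineering-workstation", 2),
    Device("corporate-mail-server", 0),
]

def flow_allowed(src: Device, dst: Device) -> bool:
    """Permit data to move only from a more protected level to a less
    protected one (the outward-only pattern data diodes enforce in hardware)."""
    return src.level > dst.level

if __name__ == "__main__":
    plc, mail = INVENTORY[0], INVENTORY[-1]
    print(flow_allowed(plc, mail))   # True: monitoring data may flow outward
    print(flow_allowed(mail, plc))   # False: nothing flows back inward
```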
Threats such as Slammer “are probably equivalent” in scope to those at non-nuclear facilities, says F. Mitch McCrory, an internationally recognized cybersecurity expert who currently manages the Risk and Reliability Analysis Department in Sandia National Laboratories’ Energy, Earth, and Complex Systems Center. All power plants must be concerned with cyberattacks such as insider threats, software and hardware supply chain attacks, and hackers remotely accessing the network. But considering a disaster like Chernobyl, he says, nuclear-facility operators must “do cybersecurity better than other places.”
Layered security to stop hackers
In his 1990 book on nuclear energy, Bernard Leonard Cohen, a professor emeritus of physics at the University of Pittsburgh, wrote that “post-accident analyses indicate that if there had been a US-style containment in the Chernobyl plant, none of the radioactivity would have escaped, and there would have been no injuries or deaths.”
That was before critical-infrastructure hacks became common. Stricter regulations on US nuclear-power facilities implemented since then include a requirement that the networks connecting nuclear-power machinery be separate from the business side of the facility. Nuclear facilities are also required to use one-way data diodes to control the flow of information from monitoring devices to central computers controlling the plant.
“Digital instrumentation diodes prevent communication from the business network into the control system network. It mostly removes the threat vector of trying to hack into the system through the business network,” McCrory says, because there’s no way to send data back to its source—which is possible over traditionally used copper wiring. “And control system networks have digital-asset monitoring, to scan all devices for malware.”
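A data diode enforces that one-way flow physically, so software can only approximate it, but the pattern is simple to illustrate. In this hypothetical Python sketch, the control-network side publishes readings over a send-only UDP socket and never binds a port to listen; the host, port, and payload format are invented for the example.

```python
# A minimal software analogue of the one-way flow a hardware data diode
# enforces: the control-network side only transmits, over a fire-and-forget
# UDP socket, and never listens for replies. Host, port, and payload
# format are hypothetical.
import json
import socket
import time

DIODE_EGRESS = ("192.0.2.10", 5140)  # hypothetical historian behind the diode

def publish_reading(sensor_id: str, value: float) -> None:
    """Send one monitoring reading outward. Because the socket is used
    only for sendto(), this code path offers nothing for the business
    side to push data back through."""
    payload = json.dumps(
        {"sensor": sensor_id, "value": value, "ts": time.time()}
    ).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, DIODE_EGRESS)

if __name__ == "__main__":
    publish_reading("coolant-temp-1", 287.5)
```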
There are procedural protections in place as well, Toecker says. Hardware and software updates must be tested on remote systems. Any software code written by a nuclear engineer for a nuclear facility must be reviewed by other engineers who have no relationship to the author. Software from an outside vendor must go through a source verification process, which can involve a hash check or software signature check, depending on the kind of software. Update installations must be documented. And depending on the type of update or the status of the facility, other precautionary procedures might come into play.
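The hash check Toecker mentions is conceptually simple. This minimal sketch, with a hypothetical file name and published digest, compares a vendor package’s SHA-256 fingerprint against the value the vendor distributed out of band, and refuses to proceed on a mismatch.

```python
# A minimal sketch of the hash-check step in source verification: compare
# a vendor package's SHA-256 digest against the value the vendor published
# out of band. The file name and expected digest below are hypothetical.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large firmware images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder value standing in for the vendor's published digest.
EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if __name__ == "__main__":
    actual = sha256_of("vendor_update.bin")
    if actual != EXPECTED:
        raise SystemExit(f"hash mismatch: got {actual}; refusing to install")
    print("digest matches published value; proceed to review and install")
```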
“We started doing [that] in 2012,” Toecker says. “The rest of the power industry is only getting to that now.”
Cybersecurity holes still remain
Even with every imaginable precaution in place, no computerized system is ever hack-proof, and there is still plenty for the nuclear-power industry to improve on, says Bryan Owen, cybersecurity manager at enterprise data management software maker OSIsoft, whose software is used by nuclear facilities across the world to manage streams of monitoring data.
At the top of his list is exporting US standards to other countries, not all of which have such strict regulations in place. In the United States, “regulators, standards organizations, government departments, customers, partners—they all rolled up their sleeves for years to develop the nuclear-cybersecurity regulations,” he says. “OSIsoft has almost 50 percent market share globally, and I’ve visited facilities all over the world. But I haven’t seen this replicated.”
Owen would also like to see data made available on “near misses,” incidents in which something goes wrong but stops short of a significant event. “This has proven to be a constructive approach in aviation safety,” he says, but remains “elusive” for critical infrastructure.
Nuclear-power facilities across the world also must contend with issues related to aging: Equipment can easily last “12 to 15 years,” Toecker says, but older equipment connected to networks can be harder to protect, as vendors discontinue product lines or go out of business.
In his 2016 report on cybersecurity and nuclear-power facilities, engineering and automation specialist Béla Lipták lays out the challenges in getting facility operators to protect customers.
“We also know that for financial reasons, and because of management convenience, the whole nuclear industry is drifting toward installing completely digital controls to allow the remote operation of some plant functions,” he writes. To protect consumers from hackers disrupting plant operations, he says the NRC must mandate “totally separating the corporate business networks from the plant networks, and to realize that digital firewalls do not guarantee this separation.”
That last bit of guidance is not clearly stated in US regulations of nuclear-power plants. And although data diodes may isolate the sensors that measure the plant’s functions from the plant’s operations network, industrial control system cybersecurity expert Joe Weiss says regulations should specify that sensors have on-board cybersecurity protections to stop data manipulation at the source. In most cases, he says, they don’t.
“If you can’t trust what you measure, you’re in pretty sad shape,” Weiss says. “If the sensor has remote communication capability, whether wired or wireless, it can be vulnerable. We made our systems accessible without truly understanding what that meant.”
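Weiss doesn’t prescribe a specific mechanism, but one common primitive for authenticating measurements at the source is a message authentication code. This sketch assumes a sensor and plant computer share a secret key, and appends an HMAC-SHA256 tag to each sample so the receiver can reject tampered readings; the key, sensor ID, and wire format are all hypothetical.

```python
# Sketch of source-authenticated sensor readings using HMAC-SHA256.
# The shared key, sensor ID, and wire format are hypothetical; a real
# deployment would also need key management and replay protection.
import hashlib
import hmac
import struct

KEY = b"hypothetical-shared-sensor-key"

def sign_reading(sensor_id: bytes, value: float, ts: float) -> bytes:
    """Sensor side: pack the sample and append a 32-byte HMAC tag."""
    body = sensor_id + struct.pack("!dd", value, ts)
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return body + tag

def verify_reading(message: bytes, id_len: int) -> tuple[float, float]:
    """Receiver side: reject any sample whose tag does not verify."""
    body, tag = message[:-32], message[-32:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("reading failed authentication; possible tampering")
    value, ts = struct.unpack("!dd", body[id_len:])
    return value, ts

if __name__ == "__main__":
    msg = sign_reading(b"coolant-temp-1", 287.5, 1_556_236_980.0)
    print(verify_reading(msg, id_len=len(b"coolant-temp-1")))
```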
Sensor vulnerability strikes at the core of Weiss’ concerns about nuclear-power plant safety, especially because overwhelmed sensors made the 1979 Three Mile Island incident much harder to contain.
“If what you’re measuring is not correct, your actions based on that are not going to be correct. If you’re a doctor, and you can’t trust your temperature or blood pressure reading, how can you make a good diagnosis?” he asks. “When you talk about sensors, that’s the heart of safety.”
High cost of failure
The Chernobyl plant, today covered in a 350-foot-high, 840-foot-wide metal-and-concrete sarcophagus, stands as a sober monument to the importance of prioritizing community security and safety.
For all of the damage the Chernobyl disaster wrought, the Exclusion Zone is rife with warning signs, visible to anybody who visits, of what can happen when politics or profit trumps safety protocol: the abandoned buildings of Pripyat, the former Chernobyl employee city; the once-hidden Duga missile-detecting radar array, now exposed to view; and hundreds of sadly untouchable stray dogs.
In the years that followed the Chernobyl disaster, the NRC concluded, across a series of at least three reports, that there were few safety improvements, if any, for the United States to make at its reactors. After participating in a select scientific panel analyzing what happened, the Nobel laureate and American physicist Hans Bethe told Richard Rhodes for his book on Chernobyl, “the Chernobyl disaster tells us about the deficiencies of the Soviet political and administrative system rather than about problems with nuclear power.”
Weiss remains concerned about the security of nuclear plants in the United States and abroad because the stakes are so high.
“This isn’t a data breach,” he argues. “This is blowing things up.”