As the government deals with what might be the worst cybersecurity breach ever, one federal expert says agency cyber planning misses an important practice. National Institute of Standards and Technology Fellow Ron Ross joined the Federal Drive with Tom Temin to discuss what should happen next.
Tom Temin: Ron, good to have you back and tell us exactly why you don’t think this is all that surprising.
Ron Ross: Good morning, Tom. It’s great to be with you even though it’s under less than ideal circumstances today. To me, [I’m] not surprised because I think from a computer science perspective, a computer engineering perspective, we know that you can do almost anything with a computer today. I mean, one of the things that has been astounding is the technology advancement that has occurred over the last couple of decades, and especially the last decade. I was remembering a story – our grandson was born in May, and we haven’t had the opportunity to see him yet because of the COVID situation. But we do these video calls every week, and they’re fantastic. It’s another example of the tremendous technology that we all share today. And I think in some sense, we’re kind of addicted to it, and it’s given us this great blind spot. Because at the end of the day, it’s all about complex systems. And you can do anything you want with a computer, with the software and firmware that drive those computers. It’s a force for good, and it can also be a force for evil. So when I see all these attacks that occur, and they keep on happening over and over again, I’m really not surprised. And I think this is really what prompted the two articles that I wrote.
Tom Temin: And one of the articles cites the idea that nowhere has systems security engineering – your term – been part, or at least enough of a part, of the thinking on behalf of agencies, and people and vendors designing systems. And you also note that recent high-level reports, such as those from the Cyberspace Solarium Commission appointed by Congress, have not discussed systems security engineering, which you feel is a major missing ingredient. Tell us more about that?
Ron Ross: Well, it’s a difficult situation, because you know, I’m always a glass-half-full kind of guy. And I’m not one that looks and says “I told you so,” because this is a shared responsibility. I think we’re all in this together. And when I was reading the reports, I think there are different definitions of systems security engineering, and of course, my view of SSE – systems security engineering – is really focused on one of our NIST special publications: SP 800-160, Vol. 1. It characterizes the lifecycle and everything that goes into what we call systems security engineering today. And to me, what it really boils down to is that security engineers look at a system from a very different perspective. I’ve used this term, “above-the-waterline” security and “below-the-waterline,” where above the waterline represents all those things that we do in enterprises that we can see and we can touch and we can impact. Below the waterline, there’s a whole world down there of complexity: complex systems, firmware, software, hardware components – all coming together into that capability that we call a system. And I think a lot of times, we don’t spend enough time below the waterline looking at what systems security engineers look at. They look at every component that’s in a system – not just what’s in the component and how it operates, but how those components interact with each other, and the information flows amongst the components that characterize that system. And moreover, there’s actually a great picture in our NIST special pub 800-160, on page 13, that shows what a system looks like. And really, I think we’re going to have to redefine our notion of a system. This is more of a system-of-systems problem – the ecosystem – which brings in the supply chain.
So we’re really not just looking at your system and your enterprise, we’re looking at everything that comes into your enterprise with respect to the different components and commercial products. And then you start to peel that onion back and apply those same systems security engineering principles to the developers, the producers of the technology. So they have to protect their systems and networks, but they also have to protect the development process, which goes into producing the software and the firmware and the hardware and the things they’re going to be sending out to all of us. And so that notion of trust has to be there. And that trust really has to come from greater transparency. We have to trust that they’re using the right processes, that they have the right security functionality going into those products, and that they have security engineers who are also worried about the same things that we’re worried about.
Tom Temin: We’re speaking with Ron Ross, fellow at the National Institute of Standards and Technology. And if you look at the mechanism here for the latest cyber attack – the biggest one to date, I guess – agencies were installing an upgrade, or an update or a patch, from a trusted supplier, in pursuit of making sure that the rest of their systems were properly transparent and that they could be patched. They were doing all the right things, it would seem, from a technical or operational standpoint. So how could systems security engineering have come into play to prevent this type of attack or this mechanism?
Ron Ross: Well, without commenting on the specifics – because I think a lot of times in these attacks it’s kind of like the fog of war; we get a lot of information initially, and then the information may change over time – without commenting on the specifics, I think that we can generalize the notion of what happened in this attack and other attacks that are similar. They’re complex attacks. They’re not a single vector of attack. There are multiple vectors. And there are some key things here: if an adversary can get into your system by compromising the low-hanging fruit, they’re going to do that to get an initial foothold on the beachhead, so to speak. And then once they’re inside, they’re going to try to do other things that they’re allowed to do by – let’s say they compromise your credential. And let’s say you’re not using two-factor authentication, or you are using two-factor and there’s still a way to compromise your credential. Once you’re inside the house, then you can use privilege escalation, and then what they call lateral movement across that system, and even go system to system outside the enterprise. So I think it all gets back to the same notions of systems security engineering. A systems security engineer will look at the what-ifs – they’ll do the what-if analysis, they’ll look at each individual component. And they will say, what kinds of threats could happen in this case? And looking at those potential what-ifs, they can project out the extent of the damage that could happen. Now, there’s no systems security engineering process that’s going to produce a perfect system, a totally secure system. It all goes back to the mission focus of the enterprise and what the specific organization is trying to accomplish.
And then the systems security engineers will work with the systems engineers and the senior leaders in the organization, the people who are running the enterprise, to figure out what things we can do to actually ensure the functionality of this system is protected, to the degree that we need to have it protected. And they make trade-space decisions. There are trade-offs that go into all these decisions. But the important point is that you’re kind of wargaming ahead of time. You’re looking at the what-ifs, and then you’re not always going to be surprised. Now, that doesn’t mean you’ll never be surprised. But by going through that analysis and eliminating many of these things that could be exploited, you do take a lot of things off the table. And you still have to do the basic cyber hygiene – some of the easy things that get compromised lead to the bigger things. And that clearly is a problem.
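[Editor’s note: the “what-if” analysis Ross describes – tracing how a compromise of one component could spread through the components it is allowed to reach – can be sketched in code. This is a minimal, hypothetical illustration; the component names, the update-server scenario, and the `blast_radius` function are invented for the example and do not come from the interview or from NIST SP 800-160.]

```python
# Toy "what-if" pass over a system-of-systems: if one component is
# compromised, what else can the adversary reach by lateral movement
# through the privileges that component already holds?
from dataclasses import dataclass


@dataclass
class Component:
    name: str
    privileges: set  # names of other components this one may touch


def blast_radius(components, compromised_name):
    """Return the set of components reachable from an initial compromise."""
    by_name = {c.name: c for c in components}
    reachable, frontier = set(), [compromised_name]
    while frontier:
        current = frontier.pop()
        if current in reachable:
            continue
        reachable.add(current)
        # Lateral movement: anything this component may talk to is now at risk.
        frontier.extend(p for p in by_name[current].privileges if p in by_name)
    return reachable


# Illustrative supply-chain-style topology (invented names).
components = [
    Component("update-server", {"build-system", "client-agent"}),
    Component("build-system", {"source-repo"}),
    Component("client-agent", set()),
    Component("source-repo", set()),
]
print(sorted(blast_radius(components, "update-server")))
# → ['build-system', 'client-agent', 'source-repo', 'update-server']
```

The point of the sketch is Ross’s trade-space argument: pruning one edge in the privilege graph (say, the update server’s path to the build system) measurably shrinks the damage a single compromise can do.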
Tom Temin: And then the other article that you wrote recently on this is entitled “Cybersecurity Threats Live in the Cracks.” And by the cracks, you meant the spaces and communication protocols between the many, many components that make up the system of systems that agencies deal with. Is it all too complex in the first place to find everything in all the cracks?
Ron Ross: There’s no doubt that complexity is a huge problem. This is one of the things I talk about: strategic vision for security versus tactical. We’re pretty good at tactics, sometimes, but the strategic view – that’s what I call the blind spot with regard to systems security engineering. So this, again, gets to the point that we used to have a philosophy where we would try to build our systems strong enough to prevent the adversaries from getting in at all. And we knew after a while that wasn’t working all that well, especially with nation-state-level adversaries who have lots of resources and smart people, and they’re coming at you 24/7. So the next step is, well, figure out: if they do breach your system, what can they do once they’re inside? That’s the second and third dimension of cybersecurity, which I’ve talked about in other publications. The first dimension being penetration resistance – stop them if you can. The second dimension is to limit the damage they can do once they’re inside. And that can be done through virtualization techniques, and also through what are called zero trust concepts and zero trust architectures. You make it very difficult for the adversary to move laterally through that system, and you increase the work factor. So they may be able to do a little damage, but they’re not going to be able to do massive damage. And that’s an architectural concept that goes back to the original thing we were discussing, which is looking at the security problem from more of an ecosystem, or system-of-systems, point of view: looking at the enterprise architecture and all of the components and systems that are in play, all of the connections amongst those systems, how information flows between the components and the systems, and then having security engineers look at the things that could cause potential problems.
So at the end of the day, you look at some of the best practices and security design principles: the notion of least privilege, least functionality, segmentation, the zero trust I mentioned. And you can think of this kind of like a house: you can have really strong locks on the front door of your house, but if you leave the door open anytime during the day and a bad guy comes in and hides in the closet, you can lock the doors at night and the bad guy’s still in the house. And so you can either have all of your valuables exposed inside the house, or you can put a vault in every room. So even though the bad guy’s in the house, they’re going to have to go through every one of those vaults and try to get in, and that’s very difficult. Of course, that protection is undone if the keys to the vaults are hanging up, or the combinations are posted somewhere in the house. And again, information is what the adversaries thrive on. They use it to their advantage. They understand our systems sometimes better than we do, especially the sophisticated adversaries.
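[Editor’s note: the “vault in every room” analogy maps onto per-segment authorization in a zero-trust design, where every request is checked against the target segment’s own policy and being “inside the house” grants nothing by itself. The sketch below is a hypothetical illustration; the segment names, roles, and policy table are invented and are not drawn from the interview or any NIST publication.]

```python
# Zero-trust-style segmentation: each segment (the "vault") carries its
# own policy, and every access is checked against it. Network location
# confers no implicit trust.
SEGMENT_POLICY = {
    # segment          -> roles allowed inside (illustrative assumptions)
    "hr-records":      {"hr-admin"},
    "finance-ledger":  {"finance-admin"},
    "build-pipeline":  {"release-engineer"},
}


def authorize(role: str, segment: str) -> bool:
    """Grant access only if the segment's own policy names this role;
    unknown segments default to deny."""
    return role in SEGMENT_POLICY.get(segment, set())


# An intruder who stole an HR credential cannot move laterally into
# the finance segment: each "vault" must be opened separately.
print(authorize("hr-admin", "hr-records"))      # → True
print(authorize("hr-admin", "finance-ledger"))  # → False
```

The design choice mirrors Ross’s work-factor point: compromising one credential opens one vault, so the adversary’s effort scales with the number of segments rather than collapsing after a single breach.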
Tom Temin: Ron Ross is senior fellow at the National Institute of Standards and Technology. I guess this one is yet to be figured out and acted on but there’s some good advice there. Thanks so much for joining me.
Ron Ross: Thanks Tom, appreciate it very much.
Tom Temin: We’ll post this interview at FederalNewsNetwork.com/FederalDrive. Hear the Federal Drive on demand. Subscribe at Apple Podcasts or wherever you get your shows.