Once upon a time there was a little girl named Goldilocks. She went for a walk in the forest where she found a peculiar house. She knocked. No one answered so she walked right in. <SPOILER ALERT> She died.
You know the story so I cut to the end. Sorry if I ruined it for you.
Goldilocks and the Three Bears is the ultimate tale of making bad assumptions from test data. Goldi finds a house and assumes there are people in there, and further assumes they’d be willing to help her. Then she assumes that because there’s no answer it means she should go in.
She then assumes that the people who left their (warm) food still on the table won’t be back soon. Then she assumes it’s okay not to try to fix the chair she breaks, or even leave a note. Then she assumes she can take a nap. And did I mention she never assumed friggin’ bears lived there? You know, with teeth and claws and aggressive instincts and stuff.
Yup, that story is just one bad assumption layer cake.
You know where else assumptions happen but only sometimes end with bears mauling a little girl? Cybersecurity, that’s where. It’s a common problem, and it even happens to the experienced security analyst. Assumptions are the vulnerability you didn’t see coming.
As the most important cybersecurity authority you are presently reading, I can enlighten you a bit on this matter. Assumptions are the things that combine your experiences and gut instinct to lead you to make random decisions. Random because neither your memories of your experiences nor your intrinsic feelings about something are very good decision makers.
You’d have a better chance of making an informed decision by flipping a coin and calling which edge it would land on. Therefore, when you use assumptions for security, or, to use the scientific term, postulative security, you’re guessing with severe bias. And doing that can leave you with a hole in your security large enough to literally drive a truck through.
And when I say “you” of course I don’t mean you, dear reader. It obviously doesn’t happen to you, fine person of superhuman memory and Socrates-like logic. We’re talking about other people, clearly.
So now that’s out of the way, you’re just dying to see examples of other people’s flaws in postulative security. Keep in mind that some of the examples in this list will seem incredibly dull, but given the scientific principle that a certain percentage of everything on Earth is boring, that just means we’re practicing good science!
Assumption: Making something harder to do improves security.
Reality: Whether it’s stronger encryption, better CAPTCHAs, putting up a barbed-wire fence, or putting bars on the ground-floor windows, making something harder to do has always felt like the way to improve security. But unless you make something impossible in the “how we know physics” sense, you’re really just reducing the number of people who can make a successful attack. The thinking here is a really old one: we hope that those who can get around highly sophisticated controls are too well paid anyway to want to be criminals. Which makes sense if you think that highly educated people are too morally straight to be criminals or that criminals are only in it for the money. Neither is even close enough to reality to casually nod to it as they pass each other riding choppers in opposite directions. At best, making something harder reduces risk, which is basically the same as being invisible as long as nobody is looking at you.
Assumption: Keeping up with security maintenance (like patching, log reading, alert responding, and other cyberhygiene-type stuff) improves security.
Reality: Keeping up with dental maintenance improves the life of your teeth, but it doesn’t protect them from a crowbar. You see, that’s the difference between security and everything else that rhymes with security: security actually protects stuff from attacks. If it doesn’t protect from a targeted attack, then it’s not security. Oh, I see your eyebrow rise up as if it’s caught on a fish hook. Then give me just one example. You were going to say backups, right? Not security. That’s maintenance. It extends the life of your data, that’s all. It’s not even availability, because people can’t use the stuff on those tapes or off the cloud storage repository. Availability requires that people who need it can use it. If you store food in your fallout bunker and the Big One drops only to melt the door to the frame, yes, you have the food in the bunker, but is it available?
Assumption: Vulnerability scanning improves security.
Reality: Vulnerability scanning is like taking your 20-year-old car to the local garage instead of the dealer for an inspection. And yes, your three-month-old network is that 20-year-old car in Internet technology years. The garage is not going to be up on all the recalls and updates for all the cars, nor is it going to have a system in place to understand the cars it doesn’t normally service. It’s not going to know about all the idiosyncrasies of your make and model of car because they probably don’t see 50 of them a day like the dealer would. It’s also not going to be able to tell you whether another car hitting you will lead to instant death. It can only tell you if your car meets the accepted baseline for cars on the road. And that’s pretty good. It’s not FAA-inspecting-airplanes good. It’s not even Disney-engineers-inspecting-parade-floats good. But it’s good. It just doesn’t assure security good.
Assumption: Defense in Depth improves security.
Reality: There’s this idea that if you do something with security on your network, then doing more somethings will make it more secure. And if you do even more security, then the security will be even more. And so on. Layering security on top of security makes good gut sense, because what’s warmer than one blanket? Two blankets! And warmer than that? More blankets! Except security is more complicated than thermal conductivity. At least that follows a law of physics. The whole point of security is that some people don’t follow laws. The thing is, while layering more security feels like it should work, it’s a complicated process that will actually hinder operations if layered wrongly, and each new layer introduces its own vulnerabilities into the network, ultimately increasing the network’s total attack surface, even if each layer does a stellar job at its little security task. So yes, more security can and will hurt your business and your security. Kind of misses the point of security then, doesn’t it?
I’m sure you’ve had a good laugh at all those other people who held those assumptions. But don’t. Postulative security is endemic in the cybersecurity industry. And it’s really the worst vulnerability.