The Human Side of System Security

One of the most important things to remember about technological systems is that they are governed by two sources of rules: the info-physical ways the systems can interact with each other, and the ill-defined network of humans interacting with those systems.

This is a truth people who break systems come to learn: while some system breaches involve zero-day exploits or novel undocumented interactions, far more involve guessing where some key politician went to high school, or convincing some front-line customer service rep with too much security clearance for their training that if you don’t get into your son’s email right now, you just know the kidnappers will murder him, here’s his account name, can you please help?

As a software engineer, I’m trained to mostly discard the latter and focus on the former—build uncrackable systems with minimal attack surface and no undefined behaviors. That’s sensible training; my school was teaching me to be a wizard of ones and zeroes, and if you’re to solve security problems via code, you need to focus on that. But in the larger world, businesses and governments can rely on other things to guard and control their systems, including the law.

If you’ve ever had your credit card company call you up to confirm whether you actually bought a set of towels from a Target in Baton Rouge yesterday at the same time you were buying a Diet Coke at a gas station in Breezewood, you know how structurally insecure the financial system is. All it takes to authorize a credit card transaction is twenty digits: the sixteen that identify the card and the four for the expiration date. With a modest amount of mental training, a person can learn to squirrel away those numbers after looking at the card for a couple of seconds. In the past, this attack was back-stopped by transactions being face-to-face, but in the era of online transactions, that backstop is no longer practical without grinding commerce to an absolute halt. But fear not, dear consumer… The credit card companies have guarded against this attack via the additional security of three more numbers! Now an attacker must flash-memorize twenty-three digits before turning around and using your credit card to acquire household wares! No human could ever succeed at such a monumental task of memorization!
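
Just to drive home how little secret material is involved: the only mathematical structure in those sixteen card digits is the Luhn checksum, which is public knowledge and exists to catch typos, not thieves. A minimal sketch (illustrative only; no card network runs anything this simple end-to-end):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digits pass the Luhn checksum used by payment cards."""
    digits = [int(ch) for ch in card_number if ch.isdigit()]
    total = 0
    # Walk right to left, doubling every second digit and folding results > 9 back down.
    for position, digit in enumerate(reversed(digits)):
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

# A well-known Luhn-valid test number, not a real account:
print(luhn_valid("4111 1111 1111 1111"))  # True
```

There’s no key and no cryptography in the number itself; if you can memorize the digits, you have everything the checksum protects.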

So why doesn’t a system this vulnerable simply collapse into economic chaos, with people only willing to accept face-to-face trades of guzzolene for dry 9mm ammo?

Because the whole system is audited to death. Every transaction has at least two parties, both with a vested interest (almost all the time) in not being cheated. That vested interest incentivizes meticulous record-keeping, allowing bad transactions to be reversed (or the robbed parties made financially whole). When you tell your credit card company you’ve never even been to Baton Rouge, they back out the transaction, in some cases eating the lost money themselves, in some cases forcing the merchant who accepted the fraudulent payment to eat the cost (although merchants who take that credit card brand at all generally know that’s a risk and price it into the cost of doing business). Then they rotate your credit card, issuing a new, equally insecure model with an unused set of twenty-three digits. The attacker wins the battle, but the money lost to their thievery is more than made up for by the money you, the consumer, will spend in the next week using the card legitimately, and life goes on.
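
The mechanism behind that reversal is worth sketching, because it’s the opposite of how programmers instinctively “fix” bad data. Nobody deletes the fraudulent transaction; they append a compensating entry that cancels it, so the record keeps both the crime and the correction. A toy version (purely illustrative; the class and field names here are made up):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entry:
    txn_id: str
    amount_cents: int  # positive for a charge, negative for a reversal
    memo: str

@dataclass
class Ledger:
    entries: List[Entry] = field(default_factory=list)

    def charge(self, txn_id: str, amount_cents: int, memo: str = "") -> None:
        self.entries.append(Entry(txn_id, amount_cents, memo))

    def reverse(self, txn_id: str) -> None:
        # Never edit history: append a compensating entry so the audit trail
        # shows both the fraudulent charge and the chargeback.
        original = next(e for e in self.entries
                        if e.txn_id == txn_id and e.amount_cents > 0)
        self.entries.append(Entry(txn_id, -original.amount_cents, "chargeback"))

    def balance_cents(self) -> int:
        return sum(e.amount_cents for e in self.entries)

ledger = Ledger()
ledger.charge("txn-001", 199, "Diet Coke, Breezewood")
ledger.charge("txn-002", 8000, "towels, Baton Rouge")  # the disputed one
ledger.reverse("txn-002")
print(ledger.balance_cents())  # 199: the fraud is backed out, the history is not
```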

As long as few people are willing to do the brain-training to memorize credit card numbers and Visa doesn’t run out of 23-digit numbers, this arrangement is pretty stable. If you turn your head and squint, you can almost call this arrangement win-win (the credit card company keeps a customer and keeps getting money, and the thief gets some towels they needed so desperately they were willing to do some solid brain-training exercises and risk jail for it). Basically nobody would agree with that view. But I digress.

The naïve model of computer security looks like this. I’d call this the “tech-only model.”

A Venn Diagram, labeled "All possible system behaviors." Outermost blue circle: "Harmless (but only because you got lucky)". Yellow circle, completely in blue circle: "Behaviors you have considered, tested, and secured." Magenta circle, completely enclosed in blue, completely outside yellow: "Bad guys use these to set your house on fire."

The tech-only security model

If tech is the only tool you have to secure your system, your options are to grow the yellow circle or shrink the blue one. Growing yellow costs resources but is possible. Shrinking blue is also possible, and when you hear people say this or that tool doesn’t have a feature “for security reasons,” that’s what the designers are trying to do: the most secure solution is one that doesn’t exist.

But in the real world, the security of your system is better described as a risk polygon.

Triangle with vertices labeled "Worthlessness", "Reversibility", "Attack Cost." Embedded in the triangle is a smaller yellow triangle with vertices part-way towards the three outer triangle vertices (high reversibility, moderate worthlessness, low attack cost). Inner triangle is labeled "Security."

Real-world system security

The aspects are:

Worthlessness
This is one part “Security through obscurity,” which is quite real but quite fragile. It’s also the inverse of the value of a breach: either the direct value of the data an attacker accesses, or the value the attacker can gain secondarily from privileged access. Even breaching an empty system is useful if that system happens to have a big CPU or a graphics card attached and the attacker can run a Bitcoin miner on it.
Reversibility
If the system is breached, how cheap is it to put things back as if the attacker never got in? This is what credit cards rely on: even when fraud occurs, the cost of any single incident is usually quite limited. It’s only when an attack happens at scale (like the Target credit card harvesting attack) or hits big-ticket accounts that the card companies have to take extraordinary measures.
Attack Cost
This rolls up both the tech-only model (which maps directly to attack cost) and any human-world cost-enhancers, such as the law. The US mail system is an excellent example of a system with low worthlessness and only moderate reversibility (destroyed mail is destroyed, and a mail thief has a huge incentive not to put tampered mail back into the system), but the attack cost is greatly enhanced by the simple fact that mail tampering is a federal crime, aggressively investigated and prosecuted. Merely defacing a piece of mail is a felony carrying up to three years in prison and a quarter-million-dollar fine per instance. And the US Postal Inspection Service is, if I understand correctly, the Bryan Mills of law enforcement: if you’re looking for ransom, they don’t have any money, but what they do have is a very particular set of skills. Skills they have acquired over very long careers. If you steal your neighbor’s mail, they will look for you, they will find you, and… Suffice it to say there are a couple of reasons people teach themselves to memorize credit cards way more often than they pop open their neighbors’ mailboxes and steal the cards directly.

There’s almost certainly a name for the difference between this “real-world triangle of security” and the purely-technical model, but this is a lazy Sunday and I’m writing for fun, not to publish in Communications of the ACM. So instead of doing, like, any research at all, I’m just going to name it after myself, like any good writer with more ego than care.

Mark Tomczak’s Lower Bound On System Security

The true security of your system is lower-bounded by how worthless it is,
how easy it is to fix a screw-up, and how big a prison people who break
your system wind up in.
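
To be clear, this isn’t a formula anyone should actually compute with, but as a purely illustrative sketch (the names and the zero-to-one scale are invented here), one reading of the maxim is that the strongest of the three human-world factors sets the floor, whatever the code itself looks like:

```python
def security_floor(worthlessness: float,
                   reversibility: float,
                   legal_attack_cost: float) -> float:
    """Toy reading of the lower bound: the strongest of the three human-world
    factors sets a floor under a system's real security. Inputs are assumed
    to be normalized to [0, 1]; the scale is invented for illustration."""
    return max(worthlessness, reversibility, legal_attack_cost)

# Credit cards: far from worthless, but highly reversible, so the floor is high
# even though the purely technical security is laughable.
print(security_floor(worthlessness=0.2, reversibility=0.9, legal_attack_cost=0.5))  # 0.9
```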

There are a couple of key consequences of this line of thinking worth following:

Non-techies overestimate their system’s worthlessness

This is something my college education (again, focused on how to solve real-world problems with technical systems) drilled into me from freshman year, and it’s true. Non-technical folks tend to think about what a system is designed to do, not what it can do. The massively expensive Target credit card harvesting attack gained its initial foothold through the credentials of an overly-privileged HVAC contractor. If you don’t have the technical chops to make a good educated guess at the true worthlessness of your system, err on the side of technically securing it.

I build computer systems for a living, and there’s a reason I host this blog on Blogger instead of running my own WordPress out of the server I rent (editor’s note: used to. Nowadays I serve it as pre-generated static pages built with Hugo, so there is no database to hack).

Techies underestimate a system’s total security

Again, because that’s what it was for, my education left me with an eye for structurally insecure systems and (more my fault than my alma mater’s) a bit of a sneer at any system that could be more structurally secure but isn’t. But structural security costs money, and money can be spent on other things. If a system is already back-stopped effectively by the law, the only security it might need is a swinging door and a smile.

That having been said, all parties are vulnerable to the following risk:

Worthlessness and attack cost are contextual and swing rapidly

Before Bitcoin, there was less you could do with a breached system (apart from using it to confuse the trail in a remote crack, or converting the machine into a bot for a coordinated attack). Bitcoin and similar proof-of-work cryptocurrencies became an engine for converting someone else’s spare CPU cycles into money in your pocket, so the worthlessness of empty-but-attackable machines changed overnight. Similarly, the legal attack cost swings rapidly for someone outside the jurisdiction of enforcement; to use the Target attack as an example again, the credit cards harvested out of Target’s point-of-sale systems eventually landed on servers in Russia, which isn’t generally willing to play ball with apprehending people for financial crimes committed against US targets.

As time progresses, this unpredictability in the risk model seems to be growing, and may be a strong indicator that people should err more on the side of assuming technical security is necessary (since it provides a stickier lower bound of attack cost that only swings when someone finds a novel exploit in a security model).

Auditing is more important than securing: if you can only secure one thing, secure your logs

Logs allow you to reverse a screw-up. The financial world runs not on secured transactions, but on logs—as long as the records are sound, the system assumes harm can be corrected. I’ve seen companies recover from nearly-catastrophic disruption of their systems thanks to solid logging; in contrast, though I have no examples at my fingertips, I believe even mild disruption can sink a company if that’s when they discover they have no history to restore service from.
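
If you can go one step further than merely keeping logs, make them tamper-evident. A minimal sketch of one common approach, a hash-chained log, where every entry’s hash covers the entry before it (the field names here are invented; real systems add timestamps, signing, and replication):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log: list, record: dict) -> None:
    """Append a record whose hash also covers the previous entry's hash,
    so editing or deleting anything earlier breaks the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute every hash in order; any tampering shows up as a mismatch."""
    prev_hash = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"event": "login", "user": "hvac-vendor"})
append_entry(log, {"event": "charge", "amount_cents": 8000})
print(verify(log))                    # True
log[0]["record"]["user"] = "nobody"   # quietly rewrite history...
print(verify(log))                    # False: the chain catches it
```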

Follow-up: The Munroe Doctrine

So when one finds a lower bound on something, one immediately asks whether there’s an upper bound. Is there an upper bound on system security? I believe there is, but I can’t in (what passes for) good conscience name it after myself, because I already know who deserves the credit.

Randall Munroe’s Upper Bound On System Security

The true security of your system is upper-bounded by how long a privileged
user can be hit in the knees by a pipe-wrench before they will cooperate
with an attacker to breach the system.
The famous 'XKCD 538' comic, where criminals decide to hit a guy with a $5 pipe wrench until he gives up his password, instead of spending untold millions to decrypt his machine
