A couple of weeks ago Mich Kabay wrote about an article in the Wall Street Journal that discussed, albeit in a lame, noob kind of way, techniques for employees “to get around the IT departments.”
Through a curious process, understandable only by those with a PhD in quantum mechanics and its relationship to publishing, Kabay’s newsletter article was attributed to me.
As much as I enjoyed this, I had to confess to all who wrote in to compliment me on my (Kabay’s) article that I was merely an innocent bystander splattered with the mud of someone else’s good writing. I just wish such mistakes happened more often, with the checks coming to my address.
Be that as it may, a letter of a complimentary nature raised an interesting question. One of my (Kabay’s) readers wrote in to ask: “Have you considered the other perspective on the WSJ article, namely the full disclosure/’king hath no clothes’ side to it? If desktop systems were actually secured (or for that matter fully securable), the holes would not be there to be exploited by sneaky employees, nor to be exposed by WSJ.”
The second issue, realistically and comprehensively securing desktop systems, is, regrettably, a lost cause. Take any reasonably complex system (no, I am not about to define “reasonably,” just work with me, people): it will be delivered fully secured on the day pigs fly, and life will be vastly improved. Until that day we have to live with systems that are secured subject to two limitations: what we know and what we can afford.
What we know about any complex system is always limited because of Turing’s Halting Problem. In a roundabout sort of way, this says there is no general method for determining, just by inspecting computer code, whether a particular state, such as stopping, or deducing the existence of rice pudding from first principles, will ever occur.
This means that when we’re considering security, identifying and characterizing all failure modes (a.k.a. security problems) is also impossible. Even worse, merely identifying most failure modes is just as impossible, because we can’t know the limits of what we don’t know; Turing says so.
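To make the Turing argument concrete, here is a minimal sketch of the standard diagonal argument (mine, not Kabay’s, and not from the WSJ piece). The names perfect_analyzer and CONTRARY are invented for the illustration; the point is that any tool claiming to decide, purely by inspection, whether code ever reaches an insecure state can be turned against itself:

# Hypothetical oracle: assumed to return True iff the program in `source`
# ever reaches an insecure state when run. (No such general-purpose
# analyzer can actually be written; that is the whole point.)
def perfect_analyzer(source: str) -> bool:
    raise NotImplementedError

CONTRARY = """
def contrary():
    if perfect_analyzer(CONTRARY):    # oracle predicts "insecure"...
        return                        # ...so stay perfectly well behaved
    do_something_insecure()           # oracle predicts "secure", so misbehave
"""

# Whichever verdict the oracle gives for `contrary`, the program does the
# opposite, so the oracle must be wrong about at least one program. That is
# the Halting Problem argument in miniature: no inspection tool can classify
# every program correctly, hence none can enumerate every failure mode.

It is a sketch of an impossibility proof, not working code, which is exactly why the “what we know” limitation never goes away.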
What we can afford over the short term is the discovery of the most obvious failure modes: the ones that are easy, and therefore cheap, to find (as in a few dollars each). Identifying the next set, the harder-to-find ones, costs more, and so on, until we are spending the equivalent of the gross domestic product of Bolivia to find a single failure mode.
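If you want a feel for how fast that bill grows, here is a toy calculation with entirely made-up numbers; the five-dollar starting cost and the tenfold escalation per flaw are assumptions chosen for illustration, not measurements of anything:

# Toy model: each successive failure mode is assumed to cost ten times as
# much to find as the one before it.
def cumulative_search_cost(n_flaws: int, first_cost: float = 5.0,
                           escalation: float = 10.0) -> float:
    return sum(first_cost * escalation ** i for i in range(n_flaws))

for n in (1, 3, 5, 8):
    print(f"{n:2d} flaws found -> roughly ${cumulative_search_cost(n):,.0f}")

# The first flaw costs pocket change; by the eighth the cumulative bill is
# past $55 million and still growing tenfold per flaw, long before the list
# of flaws is anywhere near exhausted.

Swap in whatever escalation rate you like; the shape of the curve, not the particular numbers, is the argument.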
Even worse, there is no correlation between how many failure modes we already know about and what it would cost to find, say, half of the ones that remain. We simply can’t predict the bill for removing as many vulnerabilities as possible from a system. The one thing we can be sure of is that it can’t be done completely.
But it is our reader’s first question, on the value of full disclosure, that is the most interesting. Obviously the most common state of knowledge about any given failure mode is that neither the good guys nor the bad guys know it exists. This is good, because the good guys are at no disadvantage. But what of the other situations, where one or both parties know a failure mode exists? We’ll return to that next issue.