Issues: “Save a bug, save a life?”

“And there’s the sign, Ridcully,” said the Dean.

“You HAVE read it, I assume. You know? The sign which says ‘Do not, under any circumstances, open this door’?”

“Of course I’ve read it,” said Ridcully. “Why d’yer think I want it opened?”

“Er... why?” said the Lecturer in Recent Runes.

“To see why they wanted it shut, of course.”

The above exchange, from the novel “Hogfather” by Terry Pratchett, gives quite an accurate description of what hacking is, or at least of the ideal of what hacking should be. To me personally, at least, because one of the things inherent to the scene where this practice thrives is the many different clashing opinions and ideas. You may “smash the stack” for fun, for profit, or maybe just because you’re a vicious lil’ bugger with too much time on his hands, but the ideal most people have of what a hacker is remains someone who opens stuff up to find out the why and the how.

The security scene consists of a lot of people with different ideas on how to act and what to do to reach the above ideal. An outsider would probably think of it as a whole lot of politics about what to say, what to do and how all of it gets interpreted by others, and he or she would probably be right. Amongst the topics (probably even The Number One Topic) fueling many a heated discussion is that of “full disclosure” versus “security through obscurity”. Basically, these are two philosophies regarding how to deal with product security problems. The “full disclosure” variety of the two advocates releasing all available information on security problems to the public, so as to promote full awareness and to inform users how to prevent those problems from affecting them and their systems. The main idea is that before this philosophy was introduced, information about security problems was only shared amongst a select group of people. When vendors were notified of problems in their products or services, they would either not act on it or quietly introduce a fix in later versions of the product. This resulted in quite an alarming number of security incidents which could have been prevented if people had only known their systems were vulnerable to a particular problem. The thing with hackers, you know, is that you often don’t need to tell THEM; THEY can figure it out for themselves (“hackers” in the older and more respected sense of the word, that is; nowadays a hacker is often portrayed as anything on two legs near something electric and with the ability to break it).

Full availability of this kind of information does have its dark side, though. As will always be the case amongst us humans, when something can be abused, chances are someone will abuse it, one way or another. Unfortunately, detailed information on how to exploit these security problems eventually brought along others, who were also pretty pleased with the concept of “full disclosure” since it saved them the trouble of actually learning how things worked. “Plug-and-play”-like tools appeared in the scene, allowing for automated scanning of other people’s systems for these problems and exploiting them. Where in the past the number of incidents rose because people were uninformed, now a similar tendency could be detected because people were informed who otherwise wouldn’t have been. People opposed to the full disclosure philosophy often use this as an argument for “security through obscurity”, a sort of “back in the old days everything was better” kind of approach, arguing “what they don’t know they can’t use to hurt you”.

Both camps have lately been going at each other rather heavily; check out the webboard on the site of the Anti Security(.is) movement, for example. This particular movement was founded by members of “old-school” hacker groups such as ADM and Security.is, of whom it is known (strange as it may seem) that some mighty dangerous exploits by their hand are floating around out there. This is unfortunate, since a lot of what could have been constructive discussion on this topic has been lost to quarreling over groups and names.

Personally, I feel both sides have their good and their bad arguments (although it’s all in the eye of the beholder, of course :). What makes me personally favour the full disclosure camp is that although you shouldn’t “give guns to kids”, it’s probably worse if the kids find the guns themselves. “Save a bug, save a life”? Nah, I’d rather save a kiddie by informing people how to keep him out (and by trying to make him aware of the damage he does) than gamble on his ignorance when it comes to finding things out for himself. What would happen if something like the Unicode or RDS exploits were discovered in the “wild” instead of being posted to Bugtraq? It is a well-known fact that, in the case of certain vendors more than others, patch releases are just too numerous (19 such fixes in the first week of March alone!!) and too burdensome to install. A lot of people just can’t keep up when maintaining multiple system installations (hence why several Microsoft servers recently got hacked through a bug for which the company itself had already released patches). But they should have the choice if they want to. This is what could make the discussions between both sides so interesting, if they could just get past the “what’s in a name” phase.

Quoting Marcus Ranum (the CTO of Network Flight Recorder Inc.):

“The real issue becomes distinguishing between what information people need and what they merely want. There’s a big difference between releasing data about how a problem impacts an end user and explaining in detail how to exploit that problem.”

There’s no such thing as PERFECT security. There is, however, BETTER security, and there should be a choice whether you want to benefit from that. To what extent, that’s another question.
