When I think about computer security, I like to go back to its early days and compare the situation then with the situation now. Taking a step back is very useful because, even though we work very hard, we need to ask ourselves if we’re making things fundamentally better. In other words, are we focusing our efforts on the right problems?
One of the first really significant computer security events in history was the Morris worm, which struck in November 1988. Named after its creator, Robert T. Morris, the worm managed to spread to about 6,000 servers. Initially, that might not seem like much, until you realise that those servers made up about 10% of the Internet of that time. Its impact was therefore huge, and it led system administrators to start thinking seriously about security.
Before this worm, it was the norm to leave servers unpatched, care-free. But it’s more interesting to examine the techniques the Morris worm used to spread: password cracking, server misconfiguration, buffer overflows and remote command execution. And this is where the disappointment hits: those are all keywords from the computer security news of last week. It seems that little has changed in the 26 years since the Morris worm struck. In fact, I could argue that the situation is now much worse, because computer networks are now an integral part of our very existence.
A common response is that computer security is hard, and there is no doubt that it is. But we have to realise that the hard parts are entirely of our own making. We made it hard. When you think about it, computers are simple tools: they have no imagination and, still, do only what we tell them to. Putting out fires left and right is indeed hard, but only because we created an inherently insecure ecosystem and we spend no time addressing the root causes. As a result, the problems keep coming faster than we can solve them.
If you’d like an example of what I am talking about, we don’t have to go far. In application security, the two most common problems are cross-site scripting (XSS) and SQL injection. Both are well understood: they arise because applications don’t correctly validate and encode data that crosses component boundaries. The real problem is figuring out how to stop the millions of programmers out there from writing insecure code, as they do at present. Our best attempt so far has been to teach developers about secure development, but that’s futile. We’re never going to reach them all, or even enough of them. The root cause lies elsewhere: in the fact that developers are able to write insecure code in the first place.
There are millions of developers, but only dozens of frameworks and libraries. In the context of SQL injection, no one writes code to talk directly to a database; everyone uses the library that comes with their favourite development platform. If we make a slight change to the main libraries so that it’s impossible for developers to write insecure code, we will have solved SQL injection. What’s the change? Make it impossible for developers to create SQL queries by string concatenation.
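The fix in question is well known and supported by every mainstream database library: bind user input through placeholders instead of concatenating it into the query text. A minimal sketch in Python, using the standard library’s sqlite3 module (the table and data here are hypothetical, for illustration only):

```python
import sqlite3

# A throwaway in-memory database with one hypothetical user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: concatenation makes the payload part of the SQL itself,
# so the WHERE clause becomes: name = 'alice' OR '1'='1'
vulnerable = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable).fetchall())  # returns [('admin',)]: every row matches

# Safe: the placeholder keeps the input strictly as data, never as SQL syntax.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []: no user has that literal name
```

With the parameterised form, the injection payload simply fails to match any row. The change proposed above would be to remove the concatenation-based path from the library altogether.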
There are similar solutions to other classes of vulnerability. Solving XSS is messier because the make-up of HTML pages is more complex, but the principle is the same. Context-aware output validation and encoding is available here and there. For some other problems, like buffer overflows, the solution is to switch to languages and platforms that are not susceptible to that type of vulnerability.
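For the simplest case, text inside an HTML element body, output encoding can be sketched in a few lines of Python with the standard library’s html module (the page fragment is hypothetical). Real context-aware encoders must apply different rules for attributes, URLs and scripts, which is exactly what makes XSS messier:

```python
import html

# A hypothetical piece of user input carrying a script payload.
payload = "<script>alert(1)</script>"

# Vulnerable: raw interpolation puts the payload into the page as live markup.
vulnerable = "<p>Hello, " + payload + "</p>"

# Safe for the element-body context: special characters become entities,
# so the browser renders the payload as inert text instead of executing it.
safe = "<p>Hello, " + html.escape(payload) + "</p>"

print(safe)  # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

A templating solution that applies this kind of encoding automatically, per context, is what makes XSS impossible by default rather than something each developer must remember.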
That last sentence holds a clue to why we find computer security difficult. We have to switch to secure tools and libraries. We have to change our ways, and, let’s face it, nobody likes to do that. A bold move would be to, say, ship the next major versions of ASP.NET and Java with both XSS and SQL injection made impossible. Yes, that would mean breaking backward compatibility, but it would also mean never having to deal with these two classes of problem again.
Back in the real world, I know such radical changes are extremely unlikely. But that doesn’t mean we should give up the fight; we should just be smarter about picking our battles. Instead of firefighting the consequences of our early decisions, focus on three aspects of security where we can achieve a substantial impact. First, make all new projects as secure as they can be: for your next development, choose a platform that isn’t vulnerable to buffer overflows, mandate a database library that isn’t vulnerable to SQL injection, and choose a templating solution that isn’t vulnerable to XSS.
It’s perfectly understandable if you can’t do all of it straight away. After all, we don’t yet have all the secure libraries we need. That brings me to my second point: make all new programming languages and libraries secure by default. Third, look at existing libraries and improve them in some way, even if that improvement is small. In popular libraries, even small changes can help millions of people a couple of years down the road.