What is wrong with developer security training?

“Turn a developer into a hacker” is a common rallying cry.

There are many online courses and trainings that ostensibly teach developers how to write code that is less buggy and generally more secure. These trainings focus on teaching the detection and exploitation of common security vulnerabilities, mostly those on the OWASP Top 10. Unfortunately, this approach is not going to change how developers think about security or spur them to proactively write secure code – and here’s why.


The builder vs breaker mindset

Security professionals love the journey from vulnerability identification to system compromise. Turning a bug into an exploitable remote code execution vulnerability brings joy to those tasked with software security assessment.

Unfortunately, what excites a security professional does not excite developers because, at the end of the day, a developer needs to build, not to break.

Developers are engineers: they like to build things and see their programs solve a problem. They want to build a bridge; they are not particularly interested in how the bridge can be blown up in a controlled manner.

While it can be fun to find and exploit a security vulnerability, this should not be the goal of secure coding training. Showing developers how a program can be “broken” – providing a specific input and proof that it produces an unanticipated output – does not reveal the root cause (i.e., the security vulnerability) that must be addressed to eliminate the issue.

This brings up the second problem.

Treating security bugs like other bugs

I often show the following code snippet to the senior software engineers in my Defensive Programming workshops and ask them: “What do you think this patch is for?”

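For reference, here is the core of that patch, lightly abridged from the upstream commit (variable names appear as they do in the original code):

```c
/* Read type and payload length first */
if (1 + 2 + 16 > s->s3->rrec.length)
    return 0;    /* silently discard: record too short for header + padding */
hbtype = *p++;
n2s(p, payload); /* read the 16-bit payload length claimed by the peer */
if (1 + 2 + payload + 16 > s->s3->rrec.length)
    return 0;    /* silently discard per RFC 6520 sec. 4 */
pl = p;
```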

With some hints, they end up realizing that it is the patch for the infamous Heartbleed bug.

I then follow up with another question: “Do you think there are no other Heartbleed-type bugs (or buffer over-read bugs) in other parts of this library?”

And the answer is a shy smile that means: “Are you kidding? Of course there are, or will be.”

The Heartbleed patch is an example of a common method of addressing security bugs: the “If-Statement Post-Release Patchwork” (ISPRP). Developers were shown a security attack (the proof) and how it can be triggered (the input), and were then left alone to figure out how to fix it. This approach often results in the quickest and smallest code change that handles the trigger and, unfortunately, leaves the root cause intact.

There are many other public incidents of ISPRP where the patch was incomplete, several follow-up attempts were made to fix it, and in the end a complete refactoring was performed – or scheduled as a feature!

ISPRP is just a symptom of a bigger problem: developers often mistakenly treat severe security bugs the same as functional bugs and believe that by getting rid of the symptom, the program is “cured”. Some may go further and look across the codebase for similar bugs, then apply the same patch.

The root cause of many classes of security bugs can generally be found in the design of the software. Our software is often modelled very loosely and has been through very few design iterations. It does not effectively model the problem it tries to solve, and therefore it does a lot more than it should. In other words, our programs are more powerful than they need to be.


A loosely modelled program has non-deterministic behaviors. These behaviors result in unforeseen run-time exceptions and, more importantly, in dangerous ones that are security vulnerabilities.


Let’s go through one simple example.

Suppose our program stores a person’s age. It is common for the int data type to be used to model the age of a person, and it may seem like a good choice, as it is obviously a lot more restrictive than a string type. But int (i.e., int32) in most programming languages ranges from -2,147,483,648 to 2,147,483,647. Technically, it is not a good data type for this use case, as it admits negative numbers and numbers far beyond 150, i.e., it does not model the reality of human age. Moreover, we have edge cases: if a user enters 2,147,483,648, that number will wrap around to -2,147,483,648 (this is called a Numeric Overflow vulnerability).
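
To see the wrap-around concretely, here is a minimal C sketch (the behavior shown is what you get on common two’s-complement platforms; strictly speaking, the C standard leaves this conversion implementation-defined):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t entered = 2147483648LL;     /* what the user typed: 2^31 */
    int32_t stored  = (int32_t)entered; /* what an int32 "age" field keeps */

    /* On typical platforms this prints: 2147483648 becomes -2147483648 */
    printf("%lld becomes %d\n", (long long)entered, (int)stored);
    return 0;
}
```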

Now, here is where the problem starts: we try to address the edge cases by placing some if-condition statements to validate the input.

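Such validation typically looks like the following sketch (the function name validate_age reappears in the defect discussion below; the accepted range is an assumption):

```c
#include <stdbool.h>

/* ISPRP-style guard: reject values outside the plausible human age range. */
bool validate_age(int age) {
    return age >= 0 && age <= 150;
}
```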

Has the problem of age gone away? Unfortunately, the program still suffers from bad design and is vulnerable.

  • First defect: what if the input is already beyond the integer range by the time it is passed to validate_age? In that case, we are validating a value that has already wrapped around.
  • Second defect: this one appears in large codebases as the program grows. Someone forgets to put the validation before a new method that uses age. The security check is enforced only by coding convention, which means our development team must always be on high alert for places where the check is missing.

The root cause is a design flaw: int is a loose representation of age, and our program should not allow such a representation to exist in the first place. We should follow a defensive design pattern that makes unsafe states unrepresentable, so that we do not need to worry about missing security checks – dangerous inputs cannot even be represented in our software.
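
Here is a minimal C sketch of the pattern (the Age type and the make_age constructor are illustrative assumptions, not code from a particular library):

```c
#include <stdbool.h>
#include <stdint.h>

/* An Age can only be created through make_age, so every Age value in the
   program is valid by construction; no call site can forget a check.
   In production code, the struct definition would be hidden behind the
   header (an opaque type) so callers cannot bypass the constructor. */
typedef struct {
    uint8_t value; /* the representation itself excludes negative and huge values */
} Age;

bool make_age(int64_t candidate, Age *out) {
    if (candidate < 0 || candidate > 150) {
        return false; /* an invalid input never becomes an Age */
    }
    out->value = (uint8_t)candidate;
    return true;
}

/* Downstream code takes Age, not int, so it cannot receive a raw,
   unvalidated number. */
void record_birthday(Age age);
```

With this design, both defects above disappear: wrap-around is caught because make_age validates a 64-bit candidate before narrowing it, and a forgotten check is impossible because no function accepts a bare int as an age.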

As you can see from this example, even a simple numeric overflow vulnerability may require a redesign to be addressed safely. This is the critical part that must be covered in developer security training.

Lack of focused practice

Acquiring a skill requires dedication and time – sometimes more, sometimes less. As much as we would like to pick up a new skill quickly, that rarely happens: we need months and, in some cases, years to acquire the basics of a skill, let alone master it.

Developers can’t be expected to learn and apply secure engineering after just a couple of days of training. Software security is complex, and building software that is both usable and secure is even more complicated. Secure software engineering is a new way of thinking, modelling, designing, and implementing, and it is new to many developers.

Our developers are accustomed to concentrating on what software should do. Secure software engineering stands in complete contrast to the way most developers think: it is about what software should not do. Finding the edge cases that could lead to a security bug and writing security tests are advanced skills.

Through one-off training, developers may obtain knowledge of common software security anti-patterns. To turn this knowledge into a skill, we need to offer them a practice pathway – a focused program over multiple months that allows them to practice and fully understand why the non-deterministic nature of software leads to security bugs.

Wrap-up

I have categorized the problems of ineffective developer security training into three groups: the builder vs breaker mindset, treating security bugs like other bugs, and lack of focused practice.

If we want to create a secure software engineering culture, we should first make the connection between security and what developers like about programming. We have known for a long time that factors such as “compliance push,” “internal policy requirement” or “fear” are not going to change the culture; they merely invite developers to find shortcuts.

We can make secure software engineering as fun and engaging for our development team as other aspects of software engineering. A good start is to learn developers’ techniques and tooling; we can then align our teaching with what they like and are familiar with.

Addressing a security bug can be a complex undertaking, but we’re in luck: there are many great practices in both the software engineering industry and computer science that solve similar problems (e.g., Domain-Driven Design intrinsically addresses some software security challenges). We can tap into that knowledge, emphasize the general engineering benefits of those practices, and show developers how they can also solve security problems.

Overall, the impression we want to make on our developers is that well-engineered, well-designed software is also secure.
