As security vulnerabilities are reported to you time and again, you may ask yourself: “Why don’t these developers learn the lesson?” The next thing you may think is: “We should train developers, so they stop making these mistakes.”
For many years, those were my thoughts as well, as I reported hundreds of vulnerabilities to my customers after doing code reviews.
But have secure code training platforms changed anything? Did they push those “naughty” developers to be proactive about the security of their software? Did I end up reporting fewer recurrent vulnerabilities? Unfortunately, the answer to all those questions is “no”.
Instead, many developers find training platforms boring; some do just enough to complete the mandatory courses, and others search for excuses not to take them. That’s because we’ve overlooked a very important point: Developers are problem solvers.
If you look at a developer’s daily routine, you can see that finding ways to solve a problem is their bread and butter. They receive specifications for an application, they know where and how to look for ways to address each specification, and they deliver a full solution.
A developer doesn’t need to be told “how” to do something, but “what” to do.
We need to define the problem better, in a language that they relate to. This is the part we have not done well.
The language security practitioners use – obscure terminology and inconsistent naming of security vulnerabilities – does a poor job of communicating the nature of security issues to developers. The two groups simply don’t speak the same language.
Let me give you one example.
A software vulnerability is either a bug or a design flaw. In the developer’s world, a bug is usually an implementation problem (a problem in the code) and a design flaw is a software architecture problem. But “vulnerability” – a term favored by the security industry – does not immediately tell the developer “what” the problem is. If we simply explained whether a vulnerability lies in the implementation or in the design, developers would be grateful.
Another example of poor phrasing is “attacker payload” – an expression security pros use when talking about malicious input. What we should say is that it’s “a rare input that causes a run-time exception”. (A run-time exception is a state in the software that is not gracefully handled and whose behavior is left undefined. A lot of security vulnerabilities are run-time exceptions.)
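To make that concrete, here is a minimal, hypothetical sketch (the function and the discount-code format are invented for illustration) of how a “payload” is really just an input the code never anticipated:

def apply_discount(price: float, discount_code: str) -> float:
    # Assumes every discount code looks like "SAVE10", "SAVE25", ...
    percent = int(discount_code.removeprefix("SAVE"))
    return price * (100 - percent) / 100

apply_discount(50.0, "SAVE10")    # expected input: returns 45.0
apply_discount(50.0, "FREEBIE")   # rare input: raises ValueError, an unhandled run-time exception

Described as “an unvalidated discount code that crashes the checkout flow”, the problem is immediately something a developer knows how to reproduce and fix; described as “an attacker payload”, it is not.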
Get developers engaged
I believe that before jumping to teach developers the “how”, we should first engage with them and spark their interest. We should start by defining the problem in their language and make the search for a solution attractive to their curious, problem-solving minds.
We have implemented this idea at our company, which builds developer-native wargames: we turn security vulnerabilities into fully functioning application sandboxes that a developer can run, test and debug.
Instead of using external tools to simulate attacker actions, we encode those actions into software specification tests (the code developers write to check that the application follows its specification). As a result, developers can use the same tools they use in their daily work to fully explore the security vulnerability.
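As an illustration only – not our actual sandbox code – a security specification test could look like an ordinary pytest unit test, with the attacker action expressed as a test input. The avatar-serving function and the directory path below are hypothetical:

import pytest
from pathlib import Path

AVATAR_DIR = Path("/srv/app/avatars")

def resolve_avatar(filename: str) -> Path:
    # Specification: the resolved path must stay inside AVATAR_DIR.
    candidate = (AVATAR_DIR / filename).resolve()
    if not candidate.is_relative_to(AVATAR_DIR):
        raise ValueError("invalid avatar name")
    return candidate

def test_avatar_lookup_rejects_path_traversal():
    # The "attacker action" (a path traversal input) is just another test case.
    with pytest.raises(ValueError):
        resolve_avatar("../../etc/passwd")

A failing test like this is a bug report in the developer’s native format: reproducible, debuggable and fixable with the tools they already have.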
With this approach, we bridge the language gap. The security vulnerability is translated into a fully functional application with security specification tests that clearly define the security problem in the developers’ language, so they know how to go about debugging it.
This delivery method communicates the issue better to developers and engages their interest in solving it. We see our customers’ development teams become proactive about finding and remediating security issues, treating security vulnerabilities as interesting problems to debug rather than as bizarre requests pushed on them by the security people.
At the end of the day, a vulnerability is a software challenge that is fun to exploit and will be fun to solve.