In the past few years, the bug bounty economy has been growing steadily, with more organizations getting on board every day.
In this podcast, Ilia Kolochenko, CEO at High-Tech Bridge, talks about crowdsourced security testing and bug bounties.
Here’s a transcript of the podcast for your convenience.
Hello, my name is Ilia Kolochenko, I’m CEO and founder of High-Tech Bridge. I would say that bug bounties are a very interesting concept that, first of all, can keep professional penetration testers in good shape, because they know they have solid competition. In the past we used to see penetration testing companies who were cunning, pretending that they had performed a manual penetration test. So let’s say this is assurance that penetration testers will maintain at least some basic standards of testing quality.
On the other hand, I would not overestimate the capacities of bug bounties, because first of all not every company has the capacity to run one. I’m not speaking about the administrative and managerial part, which can be handled by the various crowd security testing platforms, but about the necessity to conduct thorough security testing, because some systems are designed for a very limited scope of people, let’s say the private VIP customers of a private bank. In this case, crowd security testing would probably be inappropriate, and it can also disclose very sensitive data to third-party researchers. Yes, you can do some vetting on them, but the more qualification and clearance you require, the fewer people will agree to work for free, and there is a huge risk of not getting paid if they are only the second to report a flaw. Bug bounty is a very interesting concept in that it’s not only about discovering the flaw, it’s about being the first to report it. If you discover a flaw now, take a coffee break and report it half an hour later, and in the meantime someone reported it before you, you will not get anything.
It’s challenging to call bug bounties fair play. I would say they are more or less fair, but they don’t provide the necessary assurance and warranties to researchers, and therefore we now see what I call the bug bounty fatigue phenomenon, which we described for the first time at Infosecurity Europe 2016. Security researchers prefer to start testing newcomers who have just announced a bug bounty, because they know they will easily find flaws in those fresh assets and probably get paid for five of them. They can make easy cash.
This is in contrast to major companies who have had their bug bounties running for two, three, four, five years; they have already been tested and retested thousands of times by all possible software and people, and researchers just know there is nothing left to dig there, because the chances of spotting a flaw are very low. But this also gives a very dangerous illusion of security, as companies say: okay, we started our bug bounty five years ago, and every year we receive fewer and fewer reports; that’s good, because the best minds in the world are testing us. In practice, almost nobody is testing them anymore, and they have a false sense of security, because everybody thinks: why should I spend five weeks to find one XSS and get $5,000, if I can spend one hour on an XSS, get paid $50, and easily make $300 per day?
So, I would say that bug bounties are a complementary solution. For example, a bug bounty cannot replace your web application firewall, it cannot replace your software development lifecycle, and it cannot replace your internal application development processes, procedures, quality assurance and control. However, if you are a large company and your product is intended to be used worldwide by all possible people, like Google, Facebook or Amazon, a bug bounty can be an interesting complementary solution to make sure that your professional suppliers didn’t miss anything, because we are all human; we can miss or forget something. Sometimes it happens. So a bug bounty reassures, but it cannot replace assurance and all the internal application security policies and procedures.