Interview with Jeremiah Grossman, CTO of WhiteHat Security
Jeremiah Grossman founded WhiteHat Security in 2001. Prior to WhiteHat, he was an Information Security Officer at Yahoo! responsible for performing security reviews on the company’s hundreds of websites.
Jeremiah is a world-renowned leader in Web security and a frequent speaker at the Black Hat Briefings, NASA, the Air Force and Technology Conference, the Washington Software Alliance, ISSA, ISACA and Defcon. He is a founder of the Web Application Security Consortium (WASC) and the Open Web Application Security Project (OWASP), as well as a contributing member of the Center for Internet Security Apache Benchmark Group.
Let’s start with an easy one. How did you get interested in Web security?
What are the most important lessons that you learned while working as the Information Security Officer at Yahoo? I’m sure many security professionals wonder what working at such a large company entails.
Yahoo! was/is big, really big. It’s so big it’s hard to wrap your mind around: at the time, my best count was roughly 600 websites, 17,000 publicly facing Web servers, and 120 million users. Working for Yahoo!, or being responsible for the security of any popular website, is trial by fire. Think about the fact that there are more than 1 billion people across the globe with access to your website all the time, and a certain percentage (we thought 1%) is malicious. As demanding as this type of job is, the experience is also extremely rewarding and highly recommended for anyone in website security. Without having been in that role, it’s difficult to appreciate which security strategies actually work, versus the ones that technically should, but don’t.
- IDS says everyone is attacking you with everything they've got, all the time.
- A hacker, who just has to find a single vulnerability, has it easier than a security professional, who has to defend against all vulnerabilities all the time.
- Everyone with a website gets a “vulnerability assessment,” probably several per day. Whether you pay for the results or not is another matter.
- Use security with obscurity to your advantage.
- Security solutions that work for smaller websites don’t necessarily scale for the larger ones.
This year you’ve been selected as one of the Top 25 CTOs according to InfoWorld. How does it feel to have your work recognized and be put head to head with other well-known industry giants?
It’s an honor. “Surreal” is the best word I can use to describe being listed next to names from top companies like VeriSign, 3Com, Motorola, and Credit Suisse. And while I’m receiving a lot of the credit recently, which I appreciate, it’s really the result of years of tireless effort from many amazing people at WhiteHat Security and around the webappsec community. I was always fond of the quote by Sir Isaac Newton, “If I have been able to see further, it was only because I stood on the shoulders of giants.”
Has the award put a spotlight on WhiteHat Security?
It’s funny, I was just getting used to seeing our name in the press about every week or so, then this happened. Now we’re mentioned almost every day, and it’s actually been difficult for us to keep up with all the inbound interest in WhiteHat Sentinel. Part of the build-up is of course press generated. But most of the increase is simply due to the complexity and difficulty of Web application security and the need for easy-to-use vulnerability management services. We’re really excited about the future and we seem to be in the right spot at the right time.
With the constant evolution of threats, what kind of technology challenges does WhiteHat Security face?
It’s interesting. It’s not so much the new attacks or techniques that keep us on our toes, but the adoption of new Web development technologies such as Ajax, Flash, Java, etc. Websites using these technologies are really no more or less secure. But what is more difficult is scanning for the vulnerabilities within them. Today’s Web pages share more similarities with running applications than with traditional HTML documents. This makes “crawling” the website that much harder. By extension, the attack surface is more difficult to define, and as a result black box “fuzzing” is constantly challenged.
In your opinion, how has the Web security scene evolved in the last few years?
It might sound odd, but one big difference for me is that only a few years ago people barely knew that “Web application security” existed or that firewalls and SSL didn’t protect a website. Today, almost everyone I talk to, from coast to coast and country to country, has that figured out. Now everyone wants to know what the latest trends and best practices are. The other big difference is the availability of knowledge. Before, the information people needed to secure a website really wasn’t documented. Now, people have access to websites with hundreds of white papers, presentations, and books right at their fingertips. If you want to secure a website, the information to do so is out there.
Have new development techniques brought more problems?
Some experts like to say that Ajax or Web 2.0 is the harbinger of new attacks. I’m not one of them. Fundamentally, we’re dealing with the same problems in the same locations. The challenges that Ajax brings land more on the security vendor than on the enterprise. We have to find vulnerabilities in these custom Web applications, and Ajax-enabled applications make that much more difficult to do. Read any of Network Computing’s scanner product reviews and you’ll see what I mean.
What are the security tools/services that you use on a daily basis and couldn’t live without?
I’ve blogged about the speed hack contests we hold at the office. This is where we race to find the first and the best vulnerability in a never-before-seen website. For speed, nothing beats Firefox, the Web Developer Toolbar, and having the Paros or Burp proxy handy. If I happen to get stuck on an XSS filter, I call up RSnake’s XSS cheat sheet, use the encoders at the bottom, and that usually does the trick.
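The encoder step he mentions is simple enough to sketch. As an illustration only (the function names are ours, not from RSnake's page or any particular tool), here is roughly what the alternate encodings on such cheat sheets do: render the same payload in forms a naive input filter may fail to normalize.

```python
# Illustrative sketch of cheat-sheet style encoders: the same string
# rendered as HTML decimal entities and as percent-encoding. Function
# names are hypothetical, not from any specific tool.

def html_decimal_entities(s: str) -> str:
    """Encode every character as an HTML decimal entity, e.g. '<' -> '&#60;'."""
    return "".join(f"&#{ord(c)};" for c in s)

def url_encode_all(s: str) -> str:
    """Percent-encode every character, e.g. '<' -> '%3C'."""
    return "".join(f"%{ord(c):02X}" for c in s)

payload = "<script>"
print(html_decimal_entities(payload))  # &#60;&#115;&#99;...
print(url_encode_all(payload))         # %3C%73%63%72%69%70%74%3E
```

A filter that only blocks the literal string `<script>` will pass both encoded forms straight through, which is exactly the weakness these encoders probe for.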
If I woke up tomorrow back at Yahoo!, or was responsible for the security of any website, (I know I’m biased here) the honest answer is I’d get the Sentinel Service deployed immediately. The service is easy and complete, but most of all a security professional’s time is precious. Sure, they could do the vulnerability assessment work themselves with each site update, but it’s a poor use of their time and expertise. Their time and expertise are better spent focusing on strategic solutions and big picture thinking, rather than trying to identify, prioritize, and weed through the next hundred Cross-Site Scripting, SQL Injection, or whatever other vulnerabilities there might be.
Are websites that you assess more insecure today in comparison to 3 years ago?
I’d say today’s websites probably have fewer vulnerabilities, but they’ve also never been more at risk. While SQL Injection seems to be on the decline and Cross-Site Scripting filters are far more common, the number of attackers and attack techniques has increased dramatically. The bad guys go where the money is, and right now that’s the Web. To monetize, all they have to do is capitalize on one single vulnerability. So, if an organization is only going after the low-hanging fruit, that isn’t going to help much, since Web attacks are targeted. The websites that do better are the ones whose security posture makes it hard enough on the bad guys that it’s in their best interest to try someplace else.
A significant part in the process of developing a complex enterprise website is ensuring that the customer data being used on that website is secure. What do you see as the biggest threats to that security? What are the most common mistakes you see your customers make?
With 125+ million websites, and most of them riddled with vulnerabilities, I think it’s safe to say the mistakes have already been made. At this point, we’re trying to stop the new holes in the dam and plug the existing ones. Here’s the advice I give to everyone:
1) Asset Tracking – Find your websites, assign a responsible party, and rate their importance to the business. Because you can’t secure what you don’t know you own.
2) Measure Security – Perform rigorous and on-going vulnerability assessments, preferably every week. Because you can’t secure what you can’t measure.
3) Development Frameworks – Provide programmers with software development tools that enable them to rapidly write code that also happens to be secure. Because you can’t mandate secure code, only help it.
4) Defense-in-Depth – Throw up as many roadblocks to attackers as possible. This includes custom error messages, Web application firewalls, security with obscurity, and so on. Because 8 in 10 websites are already insecure, no need to make it any easier.
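Point 3 above can be made concrete with a minimal sketch: a framework that makes the safe path the default. Parameterized queries keep data separate from SQL (blocking SQL Injection), and escaping on output renders hostile markup inert (blocking Cross-Site Scripting). This is our own illustration under common assumptions, not WhiteHat's code or any specific framework's API.

```python
# Minimal sketch: safe-by-default helpers a development framework
# might provide. Function names are illustrative, not from any
# particular framework.
import html
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Placeholder binding: the driver keeps the value out of the SQL
    # text, so input like "' OR '1'='1" can't change the query's meaning.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def render_greeting(name: str) -> str:
    # Escape on output: '<script>' becomes inert text in the page.
    return f"<p>Hello, {html.escape(name)}</p>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "' OR '1'='1"))  # None -- the injection attempt matches nothing
print(render_greeting("<script>"))     # <p>Hello, &lt;script&gt;</p>
```

Programmers who only ever call helpers like these get rapid development and security at the same time, which is exactly the "help it, don't mandate it" idea.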
You are one of the authors of the recently released “Cross Site Scripting Attacks: XSS Exploits and Defense”. How long did the writing process take? What was it like to cooperate with other authors?
The writing process took about six months. Generating hundreds of pages of coherent and compelling content is challenging to say the least, even with five of the best subject matter experts working in parallel. It was great getting to review the work of the other authors on the fly and see the project come together. And people really seem to be excited about the book and enjoying the read. For me, the feedback and reviews we’ve been receiving from the industry are what really made it all worthwhile. Knowing that your work is useful to so many is a great feeling.
Web security has been getting a lot of attention in the past 2 years and an increasing number of people are starting to pay attention. What resources/books would you recommend to those who want to learn more about Web security?
There are a lot of resources out there and the blogosphere has been one area that has exploded.
Mailing lists and websites:
In general, what is your take on the full disclosure of vulnerabilities? Should vendors have the final responsibility?
At the end of the day, website owners and software vendors have a responsibility for the data they protect and the products they sell. I’ve been on most sides of the full-disclosure debate (website owner, software developer, security researcher, and business owner) and can appreciate the concerns raised. I’m a pragmatist. When responsible for security, I have no expectation that anyone is going to share any vulnerability information with me ahead of time. I hope they would before going public, but it would be irresponsible to depend on it and hopeless to demand it. I also think describing the messenger as “unethical” or worse only gives the impression that the company isn’t taking full responsibility for the incident. Instead, try to be open, investigate what caused the problem, solve it, and move on.
What are your plans for the future? Any exciting new projects?
While specific projects I’m working on at WhiteHat must remain confidential, my “agenda” is twofold. Help organizations find the vulnerabilities in their websites, no matter how big or how often they change. If that means scaling big enough to scan the entire Internet every week, so be it. And, when we know where the vulnerabilities are, provide organizations with options to get them fixed, quickly and with the least amount of trouble. Once someone decides they want to improve the security of their website, I want to be able to provide them with a game plan to do so that makes sense.