Security startup confessions: Customer breach disclosure

My name is Kai Roer and I am a co-founder of European security startup CLTRe, and these are my confessions. I hope you will learn from my struggles, and appreciate the choices startups make when security matters. I will share experiences from my own startups (my first was in 1994), and things I have learned by watching and advising numerous other startups around the world.

Balancing the needs of your company, your employees, and your customers requires making tough choices.

Like any other software company, we at CLTRe are always trying to strike a good balance between development, operations and security. This mix of procedures and technology also involves people (and their intentions). And unlike computers, people tend to do things on their own, and sometimes they fail to grasp the consequences of their actions.

Once upon a time, in a company not unlike ours, the team needed to impress a new, important customer. The team was a mix of sales, product roll-out and developers to ensure that changes would be made swiftly and without quality degradation. The customer trusted the company with critical data, which, if lost, would be a sure death for the company, and a huge and costly affair for the customer, whose company was listed on the stock exchange.

At some point during the pilot phase, the team needed to access log data on the customer’s server, running the new beta software.

Due to the typical challenges of software development (running out of time, misunderstandings about specifications, egos, and more), not every feature was completed and tested. So when the team needed access to the log data, a quick solution was suggested, approved and implemented: an API.

The API was secured with keys, and access to the relevant data was obtained.

But when the analyst on the team received the data, it turned out that some of it was missing and some of it had the wrong identifiers, making analysis impossible. A developer then proposed that he write a script to collect and connect the relevant data, so the analyst could do his job, and the report for the customer could be made.

At the time, it sounded like a good idea, so the request was approved. The script was created, the data was analysed, the customer got the report and everybody rejoiced.

A few weeks later, a random audit uncovered that the quick-fix script that saved the day had been published in a code repository outside the company's control, on a semi-public site. Worse still, the script had the API credentials hard-coded.
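The anti-pattern described above, and one common remedy, can be sketched in a few lines. This is an illustrative example, not the actual script from the story; the environment variable name and the function are hypothetical:

```python
import os

# Anti-pattern (what the leaked script did): credentials hard-coded in source.
# Anyone who can read the repository can read and reuse the key:
#
#     API_KEY = "sk_live_abc123"  # hypothetical key, shown only as the bad pattern
#
# A safer default is to keep the secret out of the source file entirely and
# read it from the environment (or a secrets manager) at runtime.

def load_api_key(env_var="EXAMPLE_API_KEY"):
    """Read the API key from the environment instead of the source file.

    Failing loudly when the variable is missing is deliberate: a silent
    empty credential is harder to debug than an immediate error.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; refusing to run without credentials")
    return key
```

With this pattern, publishing the script no longer publishes the secret; the key lives only in the deployment environment, and a leaked copy of the code fails with a clear error instead of working against the customer's server.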

All hell broke loose, of course.

Within a very short time, the incident response team was on the case, mapping out the potential damage (huge) and the actual damage (probably non-existent). And a debate started over the question: “Do we tell the customer?”

Yes, an incident happened, but it seems clear that no data was lost. No harm done, right? Why tell the customer, and risk losing the confidence and trust?

But the debate concluded just as quickly as it started.

Our customer trusted us with his company’s data. He trusted our ability to keep that data (reasonably) safe. But he also trusted us to handle breaches swiftly and adequately, and to report any incident back to him.

I was tasked with informing the customer and laying out a plan for keeping him informed. Within an hour of the discovery of the breach (effectively a leak), the customer knew. He was kept updated throughout the incident and the investigation that followed, and was offered the option of conducting his own investigation and/or hiring a third party for an unbiased review.

Ultimately, we were able to keep the customer’s trust and the business. Of course, we may not be so lucky in the future, but that should never be an excuse to not inform stakeholders of what’s going on.

To us, this was a lesson in ethics and disclosure as much as a lesson about the security of our routines.
