Shark or not? 3 real-life security scenarios and how to tell which will really bite

We’ve just wrapped one of my favorite weeks of television, Shark Week. Viewers were treated to show after show of sharks stalking and attacking helpless victims. In most shark movies, the person swims along oblivious to the looming and hidden threat – a continuous false negative. In fact, false negatives are very bad for both swimmers and security professionals.

Let’s consider the daily life of a security analyst. Alerts are generated constantly: a school of little false positives that the analyst eventually learns to ignore. Over time, these little fishies are tuned out and are just part of the environment. Occasionally, one of these might be a real threat, but most of the time, they are ignored. In contrast, the false negative – i.e. where everything seems fine, but in reality a real threat is lurking – can cause much more damage.

For example, a hacker might steal an employee’s credentials, create multiple admin-level accounts, and then spread malicious activity across those accounts. An analyst looking at logs for each account might believe that each one is fine. Only when linking them all to a single identity does the real picture become clear. We read about breaches nearly every day, and in hindsight it seems like someone should have been able to detect the harmful incident before it caused a breach. But in reality, it’s difficult to detect a real threat as it’s forming; there are too many false negatives, too many quietly lurking sharks.
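To make that concrete, here’s a minimal Python sketch of the idea, with made-up log fields and an illustrative threshold (none of this reflects any particular SIEM’s schema). Instead of reviewing each admin account’s log on its own, the events are pivoted by the workstation they came from, so several “clean-looking” accounts collapse into one suspicious identity.

```python
from collections import defaultdict

# Hypothetical log events: each dict is one admin-account action pulled from a SIEM.
# Field names (account, source_host, action) are illustrative, not a product schema.
events = [
    {"account": "svc_admin1", "source_host": "WKS-042", "action": "account_created"},
    {"account": "svc_admin2", "source_host": "WKS-042", "action": "account_created"},
    {"account": "svc_admin3", "source_host": "WKS-042", "action": "privilege_granted"},
    {"account": "backup_ops", "source_host": "WKS-311", "action": "file_read"},
]

# Pivot from "one account at a time" to "all activity per originating workstation".
by_origin = defaultdict(set)
for e in events:
    by_origin[e["source_host"]].add(e["account"])

# One workstation touching several admin-level accounts is worth a closer look.
for host, accounts in by_origin.items():
    if len(accounts) >= 3:
        print(f"{host} is tied to {len(accounts)} admin-level accounts: {sorted(accounts)}")
```

Each account looks harmless in isolation; only the pivot by origin reveals a single identity behind all of them.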

Let’s look at three examples I’ve seen at customers this year. In each case, it wasn’t clear at the time whether there was an actual risky incident occurring, or just a set of coincidences and false positives. Judge for yourself:

Incident #1: Phish or Pshark?

At this company, multiple employees receive phishing emails that direct them to a fake Outlook Web Access (OWA) website, where they enter their credentials in order to log in and manage email. Hackers then harvest the stolen credentials, use them to access the employees’ real OWA accounts, and use those accounts to send spear-phishing emails to large groups of non-employees (e.g., yahoo.com or gmail.com addresses). The recipients think the emails are real, since they come from known senders at a valid company address. The firm’s SIEM can’t detect this type of attack, so all looks well. But, using better detection techniques, the phish is exposed as a real shark.
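The story leaves the winning detection technique vague, but one plausible heuristic is volume-based: a mailbox that suddenly sends hundreds of messages to external consumer domains looks hijacked. The sketch below illustrates that idea only; the domain, field layout, and threshold are all assumptions.

```python
from collections import Counter

COMPANY_DOMAIN = "example.com"   # stand-in for the firm's own mail domain
EXTERNAL_LIMIT = 50              # illustrative cutoff; real tuning would be per-account

# Hypothetical outbound-mail log: (sender, recipient) pairs for one day.
outbound = [("alice@example.com", "partner@vendor.com")]
outbound += [("bob@example.com", f"target{i}@gmail.com") for i in range(400)]

# Count how many messages each internal sender pushed to non-company addresses.
external_counts = Counter(
    sender for sender, recipient in outbound
    if not recipient.endswith("@" + COMPANY_DOMAIN)
)

for sender, count in external_counts.items():
    if count > EXTERNAL_LIMIT:
        print(f"{sender} sent {count} external messages today: possible hijacked mailbox")
```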

Incident #2: Recon or code commit?

At a large e-commerce firm, the behavioral analytics system detects a user attempting to access a server fifty times per minute. Initially, analysts suspect some form of malware attempting to move laterally around the network – it’s a shark! Further investigation shows that this is actually a system administrator testing some new deployment scripts. All is well, and the shark turned out to be a harmless minnow.
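A rough sketch of the kind of rate check a behavioral analytics system might apply here: bucket access events per user, per server, per minute, and flag anything running at machine speed. The log format and cutoff are illustrative; a real system would learn each user’s own baseline rather than use a fixed number.

```python
from collections import Counter
from datetime import datetime

# Hypothetical access log: (username, target_server, ISO timestamp).
access_log = [
    ("deploy_admin", "app-srv-07", f"2017-08-01T09:15:{s:02d}") for s in range(60)
] + [
    ("jdoe", "app-srv-07", "2017-08-01T09:15:04"),
]

PER_MINUTE_LIMIT = 30  # illustrative threshold

# Bucket events by (user, server, minute) and flag anything that looks automated.
buckets = Counter()
for user, server, ts in access_log:
    minute = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:%M")
    buckets[(user, server, minute)] += 1

for (user, server, minute), count in buckets.items():
    if count > PER_MINUTE_LIMIT:
        print(f"{user} hit {server} {count} times in minute {minute}: investigate")
```

Note that the alert itself can’t tell a deployment script from malware; that still takes the follow-up investigation described above.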

Incident #3: HR, DBA, or… SHARK?

At a large retailer, an HR specialist accesses a variety of employee files sitting on multiple fileshares. That same day, a DBA backs up a payroll database containing sensitive employee information such as Social Security numbers, dates of birth, and addresses. Initial analysis doesn’t show any obvious problem, since there’s an HR employee managing HR files and a database admin managing databases – another minnow. The analytics system indicates a different picture, however.

The DBA account was used from the HR employee’s machine: she remotely accessed the payroll database with that DBA credential. Analytics show that she had never accessed the database before, she had never used that credential before, and no one else in HR had ever accessed that database directly. Now the situation looks very different. In fact, the HR employee had her credentials stolen via malware before she went on vacation, and this operation was actually a hacker using the employee’s domain account to access the network and a stolen DBA credential to access the database. Yet another shark, pretending to be a minnow.
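Here’s a small, hypothetical illustration of the “never seen before” logic that made this one stand out: compare a new access event against a history of who has used which credential against which resource, for the user and for their department. The data model and rules are simplified stand-ins, not a description of any specific analytics product.

```python
# Hypothetical history of (user, credential, resource) triples built from past activity.
history = {
    ("hr_jane", "hr_jane", "hr-fileshare"),
    ("dba_mike", "dba_mike", "payroll-db"),
}
department = {"hr_jane": "HR", "dba_mike": "IT"}

def novelty_flags(user, credential, resource):
    """Return simple first-time-use flags for a new access event (illustrative logic only)."""
    flags = []
    if (user, credential, resource) not in history:
        if not any(u == user and r == resource for u, _, r in history):
            flags.append("user has never touched this resource")
        if not any(u == user and c == credential for u, c, _ in history):
            flags.append("user has never used this credential")
        peers = {u for u, d in department.items() if d == department.get(user)}
        if not any(u in peers and r == resource for u, _, r in history):
            flags.append("no one in the user's department has touched this resource")
    return flags

# The HR employee's machine using the DBA credential against the payroll database:
print(novelty_flags("hr_jane", "dba_mike", "payroll-db"))
```

Each flag on its own might be noise; the combination is what turns a routine-looking backup into a shark sighting.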

Do these examples seem far-fetched, or too real?

When companies begin uncovering the events that comprise an incident, they often discount the reality they are facing. “This can’t happen, because we use two-factor authentication.” “This isn’t a problem, because I’m sure the DBA had a good reason for backing up those records.” “This shouldn’t happen, because we encrypt our data.” And so on.

It’s no surprise that security teams often ignore false positives. Alert overload is a real phenomenon, and people simply learn to tune out the noise. But it’s just as likely that the team will miss the false negative, swimming happily along until it’s too late and a breach occurs. It’s simply too difficult to connect the dots that point to an attack until after the attack is complete.

Unlike the swimmer, security pros can’t afford to stay out of the water. Instead, they need better tools to help navigate the overload of signals that allow sharks to hide, often in plain sight.
