How science can fight insider threats

Malicious insiders pose the biggest cybersecurity threat to companies today because they can cause the most damage and are much harder to detect than outsiders.

From the outside, attackers typically use automated hacking tools to perform reconnaissance until they find a way in. Once inside, they still need to surveil the network to find data that is worth exfiltrating. Insiders are already inside, so their workload is considerably less. They know exactly where sensitive company and customer data lies, and if they don’t possess the ‘keys to the kingdom’ themselves, they know who does and how to get them.

While most malicious insiders are motivated by financial gain, others have different agendas. Some are disgruntled employees who want to inflict damage on an organization or a fellow employee for a real or perceived wrongdoing. Typically, these individuals either target specific executives, for example by exposing inappropriate emails or salaries, or exact whatever damage they can, such as deleting customer records.

Insiders leading source of compromised records

The scale of the insider threat problem is revealed in the 2018 Verizon Data Breach Investigations Report, which notes that 28 percent of all data breaches involved insiders.

While malicious outsiders (72 percent) were the leading source of data breaches, they accounted for only 23 percent of all compromised data, according to the report. On the other hand, insiders accounted for 76 percent of all compromised records.

In the healthcare segment alone, there were 750 incidents and 536 confirmed data disclosures. Privilege misuse caused 18.4 percent of the incidents, with 47 percent of those attributed to fun, curiosity, and snooping. Financial gain was the reason for 40 percent of the incidents.

The healthcare industry has the dubious distinction of being the only vertical industry that has more internal actors behind breaches than external ones.

Well-publicized examples of insider hacks include the recent Tesla breach. In a lawsuit against former employee Martin Tripp, the electric car maker said he "has thus far admitted to writing software that hacked Tesla’s manufacturing operating system (‘MOS’) and to transferring several gigabytes of Tesla data to outside entities." The suit also alleged that Tripp made false statements about the company to the media.

Target’s credit card data breach, which affected more than 70 million account holders, is an example of an outsider-turned-insider attack. Hackers stole network access credentials from a Target contractor and then moved laterally to exploit weaknesses in Target’s payment systems, stealing personally identifiable information belonging to Target’s customers, including names, phone numbers, emails, payment card details, credit card verification codes, and more.

How data science can help

Detecting insider threats using conventional security monitoring techniques is difficult, if not impossible.

For example, some companies deploy Data Loss Prevention (DLP) systems and web proxies to restrict users from emailing attachments, using USB drives, and visiting cloud-based file sharing sites. However, applying these controls to every employee can have catastrophic impacts on a business. Not being able to send wire information between banks or share a list of new employees with an insurance provider can slow operations to a snail’s pace.

Data science, however, provides a promising alternative.

The emerging field of security analytics uses machine learning technologies to establish baseline patterns of human behavior, and then applies algorithms and statistical analysis to detect meaningful anomalies from those patterns. These anomalies may indicate sabotage, data theft, or misuse of access privileges.

This can be accomplished by establishing a contextual linked view and behavior baseline from disparate systems including HR records, accounts, activity, events, access repositories, and security alerts. This baseline is created for each user and their dynamic peer groups.

As new activities are consumed, they are compared to the baseline behaviors. If the behavior deviates from the baseline, it is deemed an outlier. Using risk scoring algorithms, outliers can be used to detect and predict abnormal user behavior associated with potential sabotage, data theft or misuse.
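The baseline-and-outlier approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual algorithm: it assumes simple numeric features per user (daily file-access counts and upload volume are hypothetical examples) and uses a basic z-score test, where real security analytics platforms use far richer models and peer-group comparisons.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Compute a per-feature (mean, std dev) baseline from a user's
    historical activity records (a list of feature dicts)."""
    features = history[0].keys()
    return {f: (mean([h[f] for h in history]),
                stdev([h[f] for h in history]))
            for f in features}

def risk_score(baseline, activity, z_threshold=3.0):
    """Sum the z-scores of features that deviate beyond the threshold.
    A score of 0.0 means the activity is within the baseline."""
    score = 0.0
    for feature, (mu, sigma) in baseline.items():
        if sigma == 0:
            continue  # no variation observed; skip this feature
        z = abs(activity[feature] - mu) / sigma
        if z > z_threshold:
            score += z
    return score

# Thirty days of (synthetic) normal activity for one user
history = [{"files_accessed": 10 + i % 3, "mb_uploaded": 2.0 + 0.1 * (i % 5)}
           for i in range(30)]
baseline = build_baseline(history)

normal = {"files_accessed": 11, "mb_uploaded": 2.2}
suspicious = {"files_accessed": 400, "mb_uploaded": 900.0}

print(risk_score(baseline, normal))      # 0.0 (within baseline)
print(risk_score(baseline, suspicious))  # large score: flagged as outlier
```

In practice the score would feed a prioritized alert queue rather than a binary verdict, and the baseline would be recomputed continuously as new activity is consumed.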

Consider the following use cases for security analytics:

A manufacturing company applied security analytics to data from their SAP supply chain application as well as network and firewall logs. The machine learning algorithms detected that their product bill of materials had been accessed by a foreign nation and had been compromised for more than 18 months without their knowledge. Meanwhile, the company’s share price dropped by 25 percent over a six-month period due to lost sales to a lower-priced competitive product. This is a clear example of account compromise.

On the preemptive side, security analytics can even identify employees who are a flight risk. Behavior models can predict whether a user is planning to leave the company, so they can be flagged as high risk before data is exfiltrated. Employers can use this information to place that individual under a more restrictive policy that no longer allows them to use public cloud services like Dropbox or Google Drive and, combined with DLP, prevents them from using USB drives or sending emails with attachments.
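Mapping a flight-risk score to a restrictive policy could look like the sketch below. The threshold, policy names, and controls are all illustrative assumptions, not a real product's API; in a deployed system the flags would drive the organization's actual DLP and proxy configuration.

```python
def assign_policy(flight_risk, high_risk_threshold=0.8):
    """Map a model's flight-risk probability (0-1) to a set of access
    controls. Threshold and control names are hypothetical examples."""
    if flight_risk >= high_risk_threshold:
        # Restrictive policy: block public cloud sharing, USB storage,
        # and outbound mail attachments for flagged users
        return {"cloud_sharing": False,
                "usb_storage": False,
                "mail_attachments": False}
    # Default policy: normal business access
    return {"cloud_sharing": True,
            "usb_storage": True,
            "mail_attachments": True}

print(assign_policy(0.92))  # restrictive: all three controls disabled
print(assign_policy(0.30))  # default: normal access
```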

With insiders accounting for roughly three quarters of all compromised records, current approaches to data leak prevention are clearly not working. Data science, on the other hand, which is transforming every segment of IT, can help organizations take back the keys to the kingdom.
