The fight to stymie adversarial machine learning is on

The use of machine learning (ML) technology is booming. This development is being driven by the many immediate gains that can be achieved using machine learning models in diverse domains, from image recognition to credit risk prediction.


However, just as the boom in software development and the Internet attracted hackers who leveraged software vulnerabilities to subvert it, so does the boom in machine learning. With the rise of ML-based security solutions, malware authors are beginning to employ adversarial machine learning to evade them. Like software, machine learning models (including deep learning models) are susceptible to exploits as hackers seek to achieve malicious objectives, such as stealing user data. “Adversarial attack” is the general term for exploiting machine learning vulnerabilities.

Adversarial machine learning is a technique aimed at deceiving an ML model with specially crafted input, for example, fooling an antivirus (AV) into classifying a malicious file as benign so it evades detection. A veritable cyber arms race is underway: in parallel with the development of adversarial machine learning, producers of ML-based cybersecurity solutions are investing considerable effort into anticipating and researching adversarial techniques so they can mitigate the risk. They even hold open ML model evasion challenges.
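To make the idea concrete, here is a minimal sketch of gradient-based evasion against a toy linear scorer. Everything in it is invented for illustration: the weights, the features, and the threshold. No real AV model is this simple, and a real attack must also preserve the file's functionality.

```python
# Illustrative only: a toy linear "malware scorer" and one gradient-sign
# evasion step. All weights, features, and thresholds here are invented.
import numpy as np

# Hypothetical model: score = sigmoid(w . x + b); score > 0.5 means "malicious".
w = np.array([1.5, -0.8, 2.0, 0.3, -1.2, 0.9, 1.1, -0.4])  # stand-in weights
b = -0.5

def score(x: np.ndarray) -> float:
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A made-up malicious sample, firmly in "detected" territory.
x = np.array([0.9, 0.1, 0.8, 0.7, 0.2, 0.6, 0.9, 0.1])
print(f"before evasion: {score(x):.3f}")      # ~0.98, detected

# FGSM-style evasion: nudge each feature in the direction that lowers the
# score. For a linear model, that direction is simply -sign(w).
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(f"after evasion:  {score(x_adv):.3f}")  # ~0.25, slips past detection
```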

There is great impetus to expand our knowledge not only of the machine learning models we use, but also of the adversarial attacks mounted against them. Knowledge of adversarial attacks is currently scarce, even among veteran machine learning practitioners in the industry. In a survey of 28 organizations, small and large alike, 25 did not know how to secure their machine learning-based systems.

As one of the leading companies applying deep learning to cybersecurity, Deep Instinct continues to play a significant role in advancing adversarial machine learning research. In fact, the company identified the droppers used in a highly widespread Emotet attack that routinely evaded detection by machine learning models.

To evade the ML model that lies at the heart of a next-generation antivirus (NGAV), Emotet’s authors devised an extremely simple and effective technique. Since most classic ML-based AVs classify files based on the presence of benign and malicious features, Emotet’s executable files contain a large portion of benign code that is not part of their functionality and is never executed, but serves to obscure the malicious features. The malicious code is effectively “camouflaged” by the inordinate number of benign features, which are scanned without raising any alarm.
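The sketch below shows why this dilution works. The feature names, classifier, and threshold are all hypothetical; real NGAV feature extraction is far richer. A classifier that weighs the proportion of malicious features can be dragged under its detection threshold simply by bolting on benign, never-executed code:

```python
# Illustrative only: a naive classifier that flags a file when the share of
# malicious features among its recognized features crosses a threshold.
# Feature names and the threshold are invented for this sketch.
MALICIOUS = {"process_injection", "registry_persistence", "c2_beacon"}
BENIGN = {"open_window", "load_icon", "parse_config", "draw_menu",
          "read_settings", "print_document", "check_spelling"}
KNOWN = MALICIOUS | BENIGN
THRESHOLD = 0.3

def malicious_ratio(features: list[str]) -> float:
    """Fraction of recognized features that are malicious."""
    hits = [f for f in features if f in KNOWN]
    return sum(f in MALICIOUS for f in hits) / len(hits) if hits else 0.0

# The bare dropper: malicious features dominate, so it is flagged.
dropper = ["process_injection", "registry_persistence", "c2_beacon"]
print(malicious_ratio(dropper))        # 1.0   -> detected

# Emotet-style camouflage: pad with benign code that is never executed.
padded = dropper + list(BENIGN) * 3    # 3 malicious vs. 21 benign features
print(malicious_ratio(padded))         # 0.125 -> under the threshold
```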

Deep Instinct’s deep learning PhD experts have shared their knowledge of adversarial attacks in the development of the Adversarial Machine Learning Threat Matrix, a project led by Microsoft that builds on the widely used MITRE ATT&CK framework. By contributing the expertise they acquired building Deep Instinct’s deep learning cybersecurity products, they have helped create a knowledge base that other practitioners can leverage to close their information gap. Over the course of this year, they participated in defining and outlining various attack vectors and methodologies used in adversarial machine learning.

The Adversarial Machine Learning Threat Matrix aims to equip security analysts with the knowledge they need to confront this new adversarial frontier. Just as the widely used MITRE ATT&CK matrix maps methods commonly used by hackers to subvert software, the Adversarial Machine Learning Threat Matrix maps techniques used by adversaries to subvert machine learning models.

The matrix covers three different attacker objectives: stealing IP (such as data about the model itself), fooling the model (causing it to misclassify samples), and exhausting the resources of the prediction system (similar to a denial-of-service attack in the classic cybersecurity domain). By using this threat matrix, machine learning practitioners can understand the risks they face and, better yet, anticipate the steps their adversaries are likely to take.
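As a rough mental model, those three objectives can be pictured as the top level of the matrix, each mapping to concrete techniques. The structure and technique names below are a hypothetical simplification; the actual matrix defines many more tactics and techniques:

```python
# Hypothetical, heavily simplified view of how the matrix organizes
# attacker objectives; the real Adversarial ML Threat Matrix contains
# far more tactics and techniques than shown here.
THREAT_MATRIX = {
    "steal_ip": [                  # learn the model or its training data
        "model_extraction_via_prediction_api",
        "membership_inference_on_training_data",
    ],
    "fool_model": [                # force misclassification
        "evasion_with_crafted_input",
        "training_data_poisoning",
    ],
    "exhaust_resources": [         # degrade availability, DoS-style
        "flooding_the_inference_endpoint",
    ],
}

# Example lookup: what should a defender anticipate for each objective?
for objective, techniques in THREAT_MATRIX.items():
    print(objective, "->", ", ".join(techniques))
```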

Machine learning has led to ground-breaking innovations in cybersecurity and other fields affecting our day-to-day lives. As in a classic cat-and-mouse game, new technology intended to deliver tangible benefits will always spark hackers’ interest in manipulating it with never-before-seen methods. Adversarial machine learning is merely the latest turn in that evolutionary journey.
