Microsoft and MITRE developed a tool to prepare security teams for attacks on ML systems

A new plug-in, created by Microsoft and MITRE, integrates various open-source software tools to aid cybersecurity professionals in bolstering their defenses against attacks on machine learning (ML) systems.

The Arsenal tool implements tactics and techniques defined in the MITRE ATLAS framework. Built on Microsoft’s Counterfit, an automated adversarial attack library, it lets security practitioners accurately emulate attacks on systems that contain ML without needing a deep background in ML or artificial intelligence (AI).

“Bringing these tools together is a major win for the cybersecurity community because it provides insights into how adversarial machine learning attacks play out,” said Charles Clancy, Ph.D., SVP, GM, MITRE Labs, and chief futurist. “Working together to address potential security flaws with machine learning systems will help improve user trust and better enable these systems to have a positive impact on society.”

The collaboration with Microsoft on Arsenal is just one example of MITRE’s efforts to develop a family of tools addressing issues including trust, transparency, and fairness to better enable the use of ML and AI systems for mission-critical applications in areas ranging from healthcare to national security.

Microsoft’s Counterfit is a tool that enables ML researchers to implement a variety of adversarial attacks on AI algorithms. MITRE CALDERA is a platform that enables creation and automation of specific adversary profiles. MITRE ATLAS, which stands for Adversarial Threat Landscape for Artificial-Intelligence Systems, is a knowledge base of adversary tactics, techniques, and case studies for ML systems based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research.

The Arsenal plug-in enables CALDERA to emulate adversarial attacks and behaviors using Microsoft’s Counterfit library.
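To make the idea of "emulating an adversarial attack" concrete, here is a minimal, illustrative sketch of one classic evasion technique, a fast-gradient-sign (FGSM-style) perturbation against a toy logistic-regression model. This is the kind of attack that libraries like Counterfit automate at scale; none of the function names below come from the actual Counterfit or CALDERA APIs, they are assumptions made purely for illustration.

```python
# Illustrative sketch only: an FGSM-style evasion attack on a toy
# logistic-regression model. This demonstrates the class of adversarial
# ML technique such tooling emulates; it is NOT the Counterfit API.
import numpy as np

def predict_proba(w, b, x):
    """Probability that input x belongs to class 1 under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(w, b, x, y, epsilon=0.5):
    """One fast-gradient-sign step: nudge x in the direction that
    increases the model's loss, within an epsilon budget per feature."""
    # For logistic loss, the gradient w.r.t. the input is (p - y) * w.
    p = predict_proba(w, b, x)
    grad = (p - y) * w
    return x + epsilon * np.sign(grad)

w = np.array([1.0, -2.0])   # toy model weights
b = 0.0
x = np.array([2.0, 0.5])    # clean input, classified as class 1
y = 1.0                     # true label

x_adv = fgsm_perturb(w, b, x, y, epsilon=0.6)
print(predict_proba(w, b, x))      # confident on the clean input (> 0.7)
print(predict_proba(w, b, x_adv))  # confidence collapses below 0.5
```

A small input shift, invisible to a human reviewing the data, is enough to flip the toy model's decision, which is why automated emulation of such attacks matters for defenders.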

“While other automated tools exist today, they’re typically better suited to research that examines specific vulnerabilities within an ML system, rather than the security threats that system will encounter as part of an enterprise network,” Clancy said.

Creating a robust end-to-end ML workflow is necessary when integrating ML systems into an enterprise network and deploying them for real-world use cases. This workflow can become complex, making it difficult to distinguish legitimate vulnerabilities from merely potential ones. Integrating the Arsenal plug-in into CALDERA allows security professionals to discover novel vulnerabilities within the building blocks of an end-to-end ML workflow and to develop countermeasures and controls that prevent exploitation of ML systems deployed in the real world.

“As the world looks to AI to positively change how organizations operate, it’s critical that steps are taken to help ensure the security of those AI and machine learning models that will empower the workforce to do more with less of a strain on time, budget and resources,” said Ram Shankar Siva Kumar, principal program manager for AI security at Microsoft. “We’re proud to have worked with MITRE and HuggingFace to give the security community the tools they need to help leverage AI in a more secure way.”

The tool currently includes a limited number of adversary profiles based on publicly available information. As security researchers document new attacks on ML systems, Microsoft and MITRE plan to continually evolve the tool, adding new techniques and adversary profiles.
