Counterfit: Open-source tool for testing the security of AI systems

After developing a tool for testing the security of its own AI systems and assessing them for vulnerabilities, Microsoft has decided to open-source it to help organizations verify that the algorithms they use are “robust, reliable, and trustworthy.”


Counterfit started as a collection of attack scripts written to target individual AI models, but Microsoft turned it into an automation tool to attack multiple AI systems at scale.
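For illustration, a one-off attack script of the kind Counterfit grew out of might look like the sketch below: a naive black-box evasion attempt that keeps perturbing an input until a locally trained classifier changes its prediction. The model, dataset, and perturbation strategy are hypothetical stand-ins chosen for brevity, not Counterfit code.

```python
# Hypothetical stand-alone attack script (not Counterfit code): a naive
# black-box evasion attack that nudges one sample with random noise until
# the target classifier's prediction flips.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)  # the "target" AI model

sample = X[0].copy()
original_label = model.predict([sample])[0]

rng = np.random.default_rng(0)
for step in range(1000):
    # Small random perturbation; a real attack would use a published algorithm.
    candidate = sample + rng.normal(scale=0.1, size=sample.shape)
    if model.predict([candidate])[0] != original_label:
        print(f"Evasion succeeded after {step + 1} queries")
        print("Perturbation applied:", candidate - sample)
        break
else:
    print("No evasion found within the query budget")
```

Scripts like this only work against one model at a time; Counterfit's contribution is packaging such techniques so they can be run against many targets in a repeatable way.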

“Today, we routinely use Counterfit as part of our AI red team operations. We have found it helpful to automate techniques in MITRE’s Adversarial ML Threat Matrix and replay them against Microsoft’s own production AI services to proactively scan for AI-specific vulnerabilities. Counterfit is also being piloted in the AI development phase to catch vulnerabilities in AI systems before they hit production,” Will Pearce and Ram Shankar Siva Kumar from Microsoft’s Azure Trustworthy ML team explained.

About the Counterfit tool

Counterfit is a command-line tool that can be deployed in the cloud or installed and run locally.

The tool is environment agnostic: the assessed AI models can be hosted in a cloud environment, on-premises, or on the edge.

“The tool abstracts the internal workings of their AI models so that security professionals can focus on security assessment. [It] makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, manage, and launch attacks on AI models,” Microsoft explained.

It can be used for penetration testing and red teaming of AI systems (using preloaded, published attack algorithms), for scanning them for vulnerabilities, and for logging, i.e., recording the attacks launched against a target model.
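A minimal sketch of what such an abstraction and logging layer might look like is below. It assumes a hypothetical HTTP scoring endpoint (https://example.com/score) and is not Counterfit's actual interface; the point is that attack code only needs a predict-style call, so the same assessment can be replayed whether the model runs in the cloud, on-premises, or on the edge, with every query recorded for later review.

```python
# Hypothetical target wrapper (not Counterfit's API): abstracts a remotely
# hosted model behind a simple predict() call and logs every query, so the
# same attack code can be replayed against cloud, on-prem, or edge models.
import json
import time
import urllib.request

class RemoteModelTarget:
    def __init__(self, endpoint, log_path="attack_log.jsonl"):
        self.endpoint = endpoint  # assumed scoring URL, e.g. https://example.com/score
        self.log_path = log_path

    def predict(self, features):
        payload = json.dumps({"data": features}).encode()
        request = urllib.request.Request(
            self.endpoint, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            result = json.loads(response.read())
        self._log(features, result)
        return result

    def _log(self, features, result):
        # Append each query/response pair so the assessment can be replayed later.
        with open(self.log_path, "a") as log_file:
            log_file.write(json.dumps(
                {"ts": time.time(), "input": features, "output": result}) + "\n")

# Usage: an attack loop only ever calls target.predict(...), regardless of
# where the assessed model is hosted.
# target = RemoteModelTarget("https://example.com/score")
# target.predict([5.1, 3.5, 1.4, 0.2])
```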

Another plus is that the tool works with AI models that use different data types (text, images, or generic input).

Fulfilling a need

Before open-sourcing it, Microsoft asked partners in large organizations, SMBs, and government organizations to test the tool against their ML models in their own environments, to make sure it meets everyone’s needs.

“In the last three years, major companies such as Google, Amazon, Microsoft, and Tesla, have had their ML systems tricked, evaded, or misled,” MITRE recently noted, and said that we can expect more of those kinds of attacks in the future.

According to recent research by Adversa, the AI industry is largely unprepared for real-world attacks against AI systems.
