IBM researchers have created the Adversarial Robustness Toolbox, an open-source library to help researchers improve the defenses of real-world AI systems.
Attacks against neural networks have recently been flagged as one of the biggest dangers of a world in which AI systems are increasingly embedded in the technologies we use and depend on daily.
Adversaries can sometimes tamper with these systems even with limited knowledge of their inner workings, and “breaking” such a system could have very dangerous consequences.
“With the Adversarial Robustness Toolbox, multiple attacks can be launched against an AI system, and security teams can select the most effective defenses as building blocks for maximum robustness. With each proposed change to the defense of the system, the ART will provide benchmarks for the increase or decrease in efficiency,” Dr. Sridhar Muppidi, IBM Fellow, VP and CTO IBM Security, explained.
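To make the idea of “launching an attack” concrete, here is a minimal, self-contained sketch of one well-known attack of the kind such a toolbox implements, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression model. All names, weights and the `eps` parameter below are illustrative assumptions, not the toolbox's actual API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Perturb input x to increase the loss of a logistic-regression
    model (weights w, bias b) against the true label y.
    Hypothetical illustration; not the toolbox's real interface."""
    p = sigmoid(x @ w + b)          # model's predicted probability
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    # FGSM step: nudge each feature by eps in the sign of the gradient,
    # then clip back into the valid input range [0, 1]
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy model that classifies the clean input correctly (true label = 1)
w = np.array([4.0, -3.0])
b = -0.5
x = np.array([0.9, 0.2])
p_clean = sigmoid(x @ w + b)        # high confidence on the clean input

x_adv = fgsm_attack(x, 1.0, w, b, eps=0.3)
p_adv = sigmoid(x_adv @ w + b)      # confidence drops on the perturbed input
```

Benchmarking, as described in the quote above, then amounts to measuring how much model accuracy or confidence degrades under such perturbations before and after a defense is applied.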
About the library
The Adversarial Robustness Toolbox contains implementations of a number of attack and defense methods.
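As an example of the defense side, here is a sketch of one simple, widely discussed idea, “feature squeezing” by bit-depth reduction: quantizing inputs coarsely can destroy the fine-grained perturbations many attacks rely on. This is an illustrative sketch under assumed names and parameters, not the library's actual API.

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    """Quantize values in [0, 1] down to 2**bits discrete levels.
    Hypothetical helper for illustration only."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# With bits=1 every pixel-like value is snapped to 0 or 1,
# wiping out small adversarial perturbations around 0.5
x = np.array([0.501, 0.499, 0.12, 0.88])
x_squeezed = squeeze_bit_depth(x, bits=1)
```

A defense like this would typically be applied as a preprocessing step before inputs reach the model, and its cost is a possible loss of accuracy on clean inputs, which is exactly the kind of trade-off the benchmarking described above is meant to quantify.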
The library is written in Python, as it is the most commonly used programming language for developing, testing and deploying deep neural networks (DNNs).
“This first release of the Adversarial Robustness Toolbox supports DNNs implemented in the TensorFlow and Keras deep learning frameworks. Future releases will extend the support to other popular frameworks such as PyTorch or MXNet,” IBM pointed out and noted that, at the moment, the library is primarily intended to improve the adversarial robustness of visual recognition systems.
Future releases will include adaptations to other data modalities, such as speech, text or time series.
“With any new technology, the right course of action is to explore the strengths and weaknesses to improve the benefits to society, while ensuring that every effort is taken to maximize privacy and security. IBM believes in developing technology in mature, responsible, and trustworthy ways,” the company concluded.