As defensive technologies based on machine learning become increasingly numerous, so will offensive ones – whether wielded by attackers or pentesters.
The idea is the same: train the system or tool on quality base data, then let it both extrapolate from that data and improvise, trying out new techniques.
Finding and exploiting vulnerabilities
At this year’s edition of DEF CON, researchers from Bishop Fox demonstrated DeepHack, their own proof-of-concept, open-source hacking AI.
“This bot learns how to break into web applications using a neural network, trial-and-error, and a frightening disregard for humankind,” they noted.
“DeepHack works the following way: Neural networks used in reinforcement learning excel at finding solutions to games. By describing a problem as a ‘game’ with winners, losers, points, objectives, and actions, a neural network can be trained to be proficient at ‘playing’ it. The AI is rewarded every time it sends a request to gain new information about the target system, thereby discovering what types of requests lead to that information,” the company explains.
Apparently, DeepHack needs no prior knowledge of apps or databases: relying on a single algorithm, it learns how to exploit multiple kinds of vulnerabilities.
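The “hacking as a game” framing the researchers describe can be sketched with a toy reinforcement-learning loop. Everything here – the action names, the payoffs, the epsilon-greedy bandit – is an illustrative assumption, not DeepHack’s actual code; the point is only that an agent rewarded for requests that reveal new information will learn to prefer them:

```python
import random

# Hypothetical request types the agent can send (illustrative only).
ACTIONS = ["plain_request", "quote_probe", "union_probe", "comment_probe"]

# Mock payoffs: how much new information each probe reveals about the target.
INFO_GAIN = {"quote_probe": 1.0, "union_probe": 2.0}

def train(episodes=2000, eps=0.2, lr=0.1, seed=0):
    """Epsilon-greedy bandit: reward = information gained by a request."""
    random.seed(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each request type
    for _ in range(episodes):
        # Mostly exploit the best-known action, sometimes explore a new one.
        a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
        r = INFO_GAIN.get(a, 0.0)   # reward: did this request reveal information?
        q[a] += lr * (r - q[a])     # incremental value update
    return q

q = train()
```

After training, the probes that yield information score highest, so the greedy policy converges on sending them – the same feedback loop, in miniature, that the researchers describe for their bot.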
“AI-based hacking tools are emerging as a class of technology that pentesters have yet to fully explore. We guarantee that you’ll be either writing machine learning hacking tools next year, or desperately attempting to defend against them,” the researchers concluded.
Bypassing antivirus software
At the same conference, Hyrum Anderson, Technical Director of Data Science at Endgame, explained how an AI agent trained through reinforcement learning to modify malware can successfully evade machine learning malware detection.
Most next-generation antivirus software relies on machine learning models that generalize to detect never-before-seen malware. Like DeepHack, Anderson’s AI agent was able to “learn” by playing thousands of “games.”
Its “opponent” was a next-gen AV malware detector, and with each game the agent came closer to working out which sequence of functionality-preserving changes it could make to a Windows PE malware file to slip it past the detector.
The final results were modest – only 16 percent of the customized samples lobbed at the AV got through – but Anderson believes others could do better.
The AI agent takes advantage of the OpenAI Gym toolkit, and Endgame has released the malware manipulation environment on GitHub.
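The Gym-style interface such an environment exposes can be sketched without the gym package itself. This is a minimal, self-contained mock – the action names, the detector logic, and the reward are invented for illustration and are not Endgame’s actual environment – but it follows the standard reset/step loop an agent would train against:

```python
import random

class ToyEvasionEnv:
    """Gym-style sketch (hypothetical, not Endgame's gym-malware): the state
    is the set of file modifications applied so far; an episode ends when a
    mock detector is fooled or a step budget runs out."""

    # Invented examples of functionality-preserving PE file modifications.
    ACTIONS = ["append_overlay", "add_section", "pack_upx", "rename_section"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        self.applied = set()
        self.steps = 0
        return frozenset(self.applied)

    def _detected(self):
        # Mock detector: flags the file unless two specific changes are applied.
        return not {"append_overlay", "add_section"} <= self.applied

    def step(self, action):
        self.applied.add(self.ACTIONS[action])
        self.steps += 1
        done = not self._detected() or self.steps >= 10
        reward = 10.0 if not self._detected() else 0.0  # reward only on evasion
        return frozenset(self.applied), reward, done, {}

# Random-agent interaction loop in the standard Gym shape.
env = ToyEvasionEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = env.rng.randrange(len(env.ACTIONS))
    obs, reward, done, info = env.step(action)
    total += reward
```

A real agent would replace the random action choice with a learned policy, which is where the reinforcement learning described above comes in.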