Jacob Torrey is an Advising Research Engineer at Assured Information Security, where he leads the Computer Architectures group. He has worked extensively with low-level x86 and MCU architectures, having written a BIOS, OS, hypervisor and SMM handler. His major interest is how to (mis)use an existing architecture to implement a capability currently beyond the limitations of the architecture.
In this interview he talks about architectural tells that can be utilized to detect the presence of analysis tools, and offers practical tips for researchers.
What are some of the most interesting architectural tells that can be utilized to detect the presence of analysis tools?
Modern systems have been designed to provide the illusion of isolation between processes or virtual machines; however, to improve performance and minimize power consumption, these isolation boundaries contain weaknesses.
Resources multiplexed or shared between processes are prone to contention when another, introspective process is running, and that contention can be exposed through timing, cache usage, or other shared-resource measurements.
A great, simple example is the CPUID instruction: executed on bare metal, it barely touches the cache and completes in relatively few clock cycles. Under a hypervisor, CPUID forces a VM exit, so even if the hypervisor tries to hide itself by reporting the standard CPUID values, the instruction will still impact the cache and take far more clock cycles.
What tools can a researcher use to find out if he’s being monitored?
A good strategy starts with understanding the expected adversary and focusing on countering their techniques; there are no silver bullets in security. If your goal is to detect basic virtualization or debugging, you can lean on rarely used CPU features that may not behave as expected under virtualization (such as switching to real mode).
If the adversary is able to detect these types of checks and augment the environment to provide genuine-looking responses, a stronger defense built on a trusted base is needed.
A great place to start is the open-source Paranoid Fish (pafish) tool, which combines many of the techniques malware uses for sandbox or hypervisor detection into a single application.
What can a researcher do to make the most of insider access?
Once you’ve established a foothold inside a system, your stealth goals are twofold: not to get discovered, and, if you are discovered, not to lose your capabilities (get “burned”). Most commercial AV systems will offload a suspicious executable to the cloud or to manual analysis at the vendor’s site.
Tying each application to a host system’s unique fingerprint thwarts this kind of off-host analysis. Obviously, encrypted binaries that cannot be reversed are a red flag to an organization, but the TTPs and capabilities are less likely to be lost.