Jon Clay is the Senior Core Technology Marketing Manager at Trend Micro. In this interview he discusses current testing methodologies in the security industry, their strengths and shortcomings. He closes by outlining methodologies he’d like to see established and offers advice to developers of security products.
Based on your experience, what are the most significant issues surrounding current testing methodologies in the security industry?
The current methodology for testing security solutions has been around for over 15 years and was developed back when only a few malicious files were propagating. Today we see a new malicious file every two seconds, and the lifecycle of each of these files is anywhere from a few hours to a day. The methods used by cybercriminals have also changed over time.
We used to see infections coming from emails with attached malicious files that would infect the machine, but today the vast majority of threats arrive via the Internet, most commonly through web browsing. With all these changes in the threat landscape, you would think the industry would use newer testing methodologies to give users a better way to identify the best solutions for today’s threats. But the reality is that many of the testing labs still use outdated testing methods, which consist of installing the security product on a PC, updating it so it has the latest signatures, then isolating it from the network. Once the machine is isolated, they drop a corpus of files onto it to determine how many can be caught by the vendor’s file-scanning technology. They do add legitimate files to the corpus to ensure an acceptable false positive rate, but the majority of files are confirmed malicious. These files are typically sourced by the testing vendor and are anywhere from a month to a year old.
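In essence, the traditional isolated-machine test reduces to a single on-demand scan over a static corpus. A minimal sketch of that measurement (the `signature_scan` callback and the file lists are hypothetical stand-ins for a real product and sample set):

```python
def traditional_test(signature_scan, malicious_files, legitimate_files):
    """Classic isolated-machine test: one on-demand scan of a static corpus.

    signature_scan(path) -> True if the product flags the file as malicious.
    Returns (detection_rate, false_positive_rate).
    """
    detected = sum(1 for f in malicious_files if signature_scan(f))
    false_positives = sum(1 for f in legitimate_files if signature_scan(f))
    return (detected / len(malicious_files),
            false_positives / len(legitimate_files))
```

Note that nothing in this procedure exercises web filtering, HIPS, or behavior-based layers, and nothing in it is sensitive to how old the samples are: exactly the shortcomings described above.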
This type of testing was acceptable when there were only a few “in-the-wild” viruses causing issues, but today, as stated above, we’re seeing over a million new malicious files every month, so these traditional tests neither cover the most recent malicious files nor test the main threat vector for infection. Another little-known fact is that cybercriminals have anti-anti-malware services available to them through their underground community, whereby the service uses similar tests to ensure a new malicious file is not detected by any vendor’s signatures. This change in the threat landscape has required many vendors to develop additional threat technologies that can block threats at different points during their propagation. Since many of these new technologies are not exercised during these tests, the results do not give the user visibility into how robust a security solution can be.
Which products are most affected by these shortcomings?
If you look at the trade magazines and websites that provide testing results to their audience, you will see the traditional tests are performed most often on endpoint security products, both consumer and business solutions. The endpoint security solutions of today provide much more than just signature-based file-scanning technologies. Firewalls, web filtering, HIPS, behavior-based scanning and whitelisting are examples of the multiple layers of defense many of these solutions provide, yet all we see in the test results is how many malicious files are detected by the solution. Many of today’s threats can be blocked using technologies other than signature scanning.
What kind of testing methodologies would you like to see established?
A few fundamental principles should be applied when testing endpoint solutions. Note that there is a consortium of vendors currently working together to develop new testing methodologies: AMTSO (the Anti-Malware Testing Standards Organization) was formed to develop a set of new principles. Below are only a few of these, but I believe they are key to ensuring accurate, unbiased testing.
1. Giving products credit for preventing infections regardless of how this is achieved by the security product (e.g., (i) blocking access to the source bad URL of a malware download, (ii) detecting malicious code in a downloaded file, (iii) closing vulnerabilities that render malicious code impotent).
2. Using a corpus of test threats that includes recent unknown files and their source URLs. These should include both malware and legitimate files so that both protection and false positive avoidance are measured for the same product settings. Ideally, the corpus of threats would be balanced to reflect the same demographics and prevalence of the actual threats encountered by the customers of the security products being tested. (The lack of recent sample files is the key drawback of current testing methods.)
3. During testing, expose security products to the same corpus of threats repeatedly at regular intervals to measure the time it takes for security products to decide whether unknown files and their source URLs are either malicious or safe to download.
4. The machines used in the test should have full access to the network and Internet to allow solutions that utilize cloud-based technologies to be supported. Also, all the solutions should be tested in parallel to ensure no vendor has more time to provide protection from the corpus of threats introduced.
The above are only a few of the recommended principles; AMTSO has published a full set of nine principles that should be taken into account.
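Taken together, points 2–4 describe a repeated-exposure measurement rather than a one-off scan. One way such a loop might be sketched (the `Sample` shape, the `scan` callback, and the polling interval are all assumptions for illustration, not any lab’s actual harness):

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sample:
    name: str                            # file path or source URL
    is_malicious: bool                   # ground truth after validation
    detected_at: Optional[float] = None  # seconds until the product flagged it

def run_exposure_test(scan, corpus, rounds=3, interval=1.0):
    """Expose the product to the same corpus repeatedly (point 3),
    recording how long each sample takes to be flagged, then report
    detection and false-positive rates over the mixed corpus (point 2)."""
    start = time.monotonic()
    for round_no in range(rounds):
        for sample in corpus:
            if scan(sample.name) and sample.detected_at is None:
                sample.detected_at = time.monotonic() - start
        if round_no < rounds - 1:
            time.sleep(interval)  # give cloud lookups/signature updates time to land
    malware = [s for s in corpus if s.is_malicious]
    benign = [s for s in corpus if not s.is_malicious]
    detection_rate = sum(s.detected_at is not None for s in malware) / len(malware)
    false_positive_rate = sum(s.detected_at is not None for s in benign) / len(benign)
    return detection_rate, false_positive_rate
```

Because every product sees the same corpus on the same schedule, running this loop for all products in parallel on network-connected machines (point 4) keeps the time-to-protect figures comparable across vendors.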
What would it take for these methodologies to become a standard?
The independent testing labs, as well as the trade journals and industry analysts citing results, need to insist that, regardless of practical implementation, security testing methodologies adhere to some basic principles that reflect the actual security provided to customers by commercial security products. They should also work with security product vendors to ensure that the testing methodology actually measures the effectiveness of whatever protection technology the security product uses. AMTSO needs to come to a consensus and strongly recommend that vendors adopt the principles agreed upon by its members. The media also has a decisive role in this process:
- Educating the audience so that 100% detection is no longer mistaken for 100% security. Traditional detection-rate tests do not accurately reflect a solution’s full ability to provide the best possible protection against today’s threats.
- Highlighting tests that follow at least the four points above (as well as the nine AMTSO principles), even if these tests are more expensive and take longer to complete.
- Talking openly about the fact that the quality and quantity of the sample set is a very important factor that can influence test results. The samples used in a test that follows the four points above need accurate validation.
Do you have any advice for security product developers?
The amount of malware being created and delivered via multiple threat vectors requires vendors to develop new technologies that improve their ability to source, analyze and provide protection as quickly as possible. Multiple layers of technology are also needed, as a single layer is not enough in today’s endpoint environment. For those vendors that support signature-based scanning: do not simply blacklist files sent from known vendors; analyze them to ensure they are in fact malicious. This practice has caused much of the recent rash of false positives, since some of these vendors are trying to obtain the highest possible detection ratings without regard to false positives. Last, get involved in the adoption of new testing standards to ensure all of your technologies are exercised during tests.