How meaningful are AV tests?

Anti-malware software from vendors big and small is regularly evaluated by several antivirus test labs, and the results can bring awards and certifications, but also disappointment.

We’ve also often seen a piece of AV software perform great in tests devised by one testing organization and poorly in those set up by another. But is there a way to determine whether measured differences in product performance are significant, and whether a test is meaningful at all?

In this podcast recorded at Virus Bulletin 2013, Dr. Richard Ford from the Florida Institute of Technology talks about his team’s attempt to conduct a meta-analysis of a month’s worth of anti-malware tests by using techniques commonly found in other disciplines, and shares the conclusions they drew from this research.
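The write-up above doesn’t spell out the statistical machinery, but one technique commonly borrowed from other disciplines for comparing test results is rank correlation: if two labs’ tests measure the same underlying quality, products should rank roughly the same in both. Below is a minimal sketch in Python; the product names and detection rates are entirely hypothetical, and this is a generic illustration of rank agreement, not necessarily the method used in Ford’s research.

```python
from itertools import combinations

# Hypothetical detection rates (%) for the same five products,
# as measured by two different test labs. All names and numbers
# are made up for illustration.
lab_a = {"AV1": 99.2, "AV2": 97.8, "AV3": 95.1, "AV4": 98.6, "AV5": 92.3}
lab_b = {"AV1": 96.4, "AV2": 98.9, "AV3": 91.0, "AV4": 97.2, "AV5": 90.5}

def kendall_tau(xs, ys):
    """Kendall rank correlation: +1 means identical rankings,
    -1 means opposite rankings, ~0 means no agreement."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        sign = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(xs) * (len(xs) - 1) // 2
    return (concordant - discordant) / n_pairs

products = sorted(lab_a)  # compare the same products in the same order
scores_a = [lab_a[p] for p in products]
scores_b = [lab_b[p] for p in products]

print(f"tau = {kendall_tau(scores_a, scores_b):.2f}")  # prints: tau = 0.40
```

A tau near 1 would suggest the labs broadly agree on which products are better; a value near 0 would suggest the measured differences say more about the tests than about the products.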

Listen to the podcast here.


Dr. Richard Ford is the co-director of the Harris Institute for Assured Information and the Harris Professor of Assured Information at the Florida Institute of Technology. His research interests include biologically-inspired security solutions, rootkit detection, novel exploit paths, resilience, security metrics, and malware prevention.

Ford is an Editor of Elsevier’s Computers & Security, a Consulting Editor of Virus Bulletin, and co-editor of a column in IEEE Security & Privacy. He is a member of CARO and the President/CEO of AMTSO, the Anti-Malware Testing Standards Organization.
