At a recent panel discussion held by Kaspersky Lab in Prague, researchers raised concerns about the way antivirus products are tested and presented to the public as the malware landscape continually evolves.
“It is a problem, it is a very huge problem, because the way testing is being conducted, most of it is not useful to anybody,” TechEye was told by Jose Fernandez, a professor at the École Polytechnique de Montréal and a member of the advisory board of the Anti-Malware Testing Standards Organization (AMTSO).
With the continually evolving threats seen in the malware landscape, it is the opinion of Fernandez, and of members of the Kaspersky Lab development team, that the way antivirus products are tested by reviewers is no longer effective in painting a true picture of the strength of the software.
According to Fernandez, one problem in virus protection is that users themselves are becoming part of how a system is potentially vulnerable. Then there is the wave of new, increasingly sophisticated threats, as criminals seek to wring money from the unsuspecting web user.
“The threat has evolved. Many tests were good to do up to five or ten years ago, when if you could stop it at the file open stage then it was okay. But that is no longer true today for a variety of reasons,” Fernandez said.
For example, these days hackers are able to bypass signature-based defences more easily.
Fernandez was keen to have a pop at journalists reviewing antivirus software: “It is absolutely correct that things have changed so much that the old way of thinking is no longer working, but I do not see that evolution yet in journalists, for example, because the problem has changed.
“And more importantly you need to re-educate the user. You used to be able to check the machine that protects the user, but not anymore because the errors, the mistakes, and the accidents are mostly provoked by the user.
“So, you have to test the machine with a user in it to see how that machine adapts to the driver, how the machine is allowing the driver to make the right decisions,” he said.
Fernandez believes that you cannot test them separately any more because attackers are targeting the “driver”, to “click on this link or install this codec of Prince William’s wedding, you cannot physically stop them, so we need to educate within the product, which some are now doing such as Kaspersky”.
He thinks many journalists are going about testing the wrong way, breeding a culture of misinformation driven by narrowly defined notions of how strong a product can be.
Where reviewers are going wrong, he thinks, is in concentrating on testing which shows that one product is king and which are the runners-up.
Of course, this is a natural way for consumers to digest product information: easily assimilated rankings rather than technically heavy statistics.
While the knowledge that one product is ranked higher than another may not be the most accurate tool, it is the easiest for consumers to understand at a glance. That is something Fernandez and Kaspersky Lab believe makes it an inadequate way of testing antivirus products.
But the perception on this side of the industry is that such testing creates a vicious circle: the marketing departments of AV companies know that they have to meet certain tests to receive high marks in magazines, and without those marks they could lose market share.
Fernandez says marketing teams hold most of the testing budget in many firms, and put pressure on developers to ensure that products do well in tests that are, according to him, largely outdated.
“The problem with this is that doing so does not meet the other criteria and does not allow them to concentrate on improving the product,” he says.
“Testing is never perfect, tests have different parameters, but most journalists don’t want to show the limits of their test.
“This gives a user the wrong impression that one test is the best, one is second best, when this is maybe not the case.
“For most of the tests done today this is not true. Because of the evolving threat landscape, performance with these tests is totally unrelated to total security.”
Fernandez believes that conducting tests accurately, and then choosing the right measurements, is technically challenging, as there is so much that can go wrong and it takes so much expertise.
“It’s actually very time consuming and costly, so it does not make much sense for people to say ‘because I cannot do this expensive test I will just do a test’ and then just publish it.
“A lot can go wrong, for example, in choosing what to measure and actually getting accurate results for the test.
“And that is why it should be left to the professionals, at least for some tests.”
Unless the publishing industry is willing to invest the time and money for people to test full time and develop expertise, the view from the security industry is that adequate testing simply cannot be achieved.
“I don’t think it is a good idea for a technical journalist to say ‘this week I am doing spell checkers and this week I am doing antivirus’,” says Fernandez.
Instead he advocates using independent testing laboratories to garner more accurate results, and for hacks to look more at the usability of a product.
But one journo tells TechEye that testing by reviewers is most certainly still important, as they offer a truly independent opinion on how well a product works, and that “if they can’t see the difference, then the great unwashed are unlikely to either”.
However, at the École Polytechnique de Montréal, Fernandez has just begun new research into alternative methods for testing how useful a product is as the threat from users themselves increases.
“The clinical trials are not seeking to do individual experiments. We are trying to do in vivo experiments where we have real users using the product in their everyday lives, for say four months: do university work, download whatever you want.
“The software reports on what happens, and if we find indicators of infection we will look at that and see what the cause is.
“We are doing this with fifty people in the first run, and we are trying to find out if we can correlate infection with certain pattern behaviours.”
Ultimately, Fernandez says, such differences in user behaviour, and in how the user actually works with the product, will make a much bigger difference than “whether it is Kaspersky or McAfee”.