Polymorphic security warnings more effective than static ones

Over the last year or so, Google has shown increasing interest in designing effective warnings that spur users to make good decisions about the security of their computers and their information.

Other researchers have also looked into the subject, analyzing the text of the warnings and searching for wording that captures users’ attention and helps them make the right choices. Others have tried repositioning the options on the warning dialog.

The latest research on the subject comes from a group of researchers from Brigham Young University, the University of Pittsburgh, and Google, who used functional magnetic resonance imaging (fMRI) to see whether polymorphic warnings could prevent users from becoming accustomed to warnings, disregarding them automatically, and simply clicking through them.

They haven’t changed the text of the warnings, but have varied the color of the alert window and the text, added symbols, changed the size of the warning, and made it twirl and jiggle.

The results were positive: polymorphic warnings reduce “habituation in the brain” and slow its onset, making users more likely to pay attention to the warnings rather than dismiss them outright.

They discovered that the most effective polymorphic variations were those where the warnings were animated (jiggled or zoomed in), where the color of the window was repeatedly changed, and where symbols were used.
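The paper does not publish any code, but the idea of rotating through visual variations so that consecutive warnings rarely look identical can be sketched in a few lines. The variation names below are hypothetical labels for the categories the study found most effective (animation, recoloring, symbols), not identifiers from the research itself:

```python
import random

# Hypothetical labels for the variation categories the study found most
# effective: animation (jiggle/zoom), window recoloring, and symbols.
VARIATIONS = ["jiggle", "zoom", "recolor", "symbol"]

def polymorphic_variations(seed=None):
    """Yield an endless stream of warning variations, reshuffled each
    full pass, so that back-to-back warnings rarely look the same."""
    rng = random.Random(seed)
    while True:
        order = VARIATIONS[:]
        rng.shuffle(order)
        yield from order

# Pick the presentation style for the next four warnings shown to the user.
picker = polymorphic_variations(seed=42)
styles = [next(picker) for _ in range(4)]
print(styles)
```

Because each pass exhausts one shuffled copy of the list before reshuffling, every variation appears once before any repeats, which keeps the warning's appearance changing without starving any single variation.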

But what they are most satisfied with is having demonstrated the usefulness of applying neuroscience to the domain of security.

“Because automatic or unconscious mental processes underlie much of human cognition and decision making, they likely play an important role in a number of other security behaviors, such as security education, training, and awareness (SETA) programs, password use, and information security policy compliance. Additionally, neuroscience methods have the potential to lead to the development of more complete behavioral security theories and guide the design of more effective security interventions,” they noted.