Researchers defeat facial recognition systems with universal face mask

Can attackers create a face mask that would defeat modern facial recognition (FR) systems? A group of researchers from Ben-Gurion University of the Negev and Tel Aviv University has proven that it can be done.


“We validated our adversarial mask’s effectiveness in real-world experiments (CCTV use case) by printing the adversarial pattern on a fabric face mask. In these experiments, the FR system was only able to identify 3.34% of the participants wearing the mask (compared to a minimum of 83.34% with other evaluated masks),” they noted.

A mask that works against many facial recognition models

The COVID-19 pandemic has made wearing face masks a habitual practice, and it initially hampered many facial recognition systems in use around the world. With time, though, the technology evolved and adapted to accurately identify individuals wearing medical and other masks.

But as we have learned time and time again, if the incentive is strong enough, adversaries will always find new ways to achieve their intended goal.

In this particular case, the researchers assumed the adversarial role and set out to discover whether they could create a specific pattern/mask that would work against modern deep learning-based FR models.

Their attempt was successful: they used a gradient-based optimization process to create a universal perturbation (and mask) that causes FR models to classify each wearer – male or female – as an unknown identity, and does so even across different FR models.
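To illustrate the general idea (not the paper's exact method), here is a minimal sketch of gradient-based universal perturbation optimization in PyTorch. The pretrained face-embedding model `embedder`, the data loader, and the crude paste-on `apply_mask` helper are all assumptions; the researchers' actual pipeline is more sophisticated, projecting the pattern onto the geometry of each face before rendering.

```python
import torch
import torch.nn.functional as F

def apply_mask(faces, pattern):
    """Paste the shared pattern over the lower half of each aligned face.

    A crude stand-in for the researchers' mask projection: faces are
    (B, 3, 112, 112) tensors, pattern is (3, 56, 112).
    """
    masked = faces.clone()
    masked[:, :, 56:, :] = pattern          # overwrite the lower facial region
    return masked

def optimize_universal_pattern(embedder, loader, steps=1000, lr=0.01):
    """Gradient-based search for one pattern that fools the embedder for everyone."""
    pattern = torch.rand(3, 56, 112, requires_grad=True)   # single shared pattern
    opt = torch.optim.Adam([pattern], lr=lr)
    for _, (faces, _) in zip(range(steps), loader):
        with torch.no_grad():
            target = embedder(faces)                       # true-identity embeddings
        emb = embedder(apply_mask(faces, pattern.clamp(0, 1)))
        # Untargeted objective: push masked embeddings away from the wearer's
        # own, so the FR system matches the face to no enrolled identity.
        loss = F.cosine_similarity(emb, target).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pattern.detach().clamp(0, 1)
```

Minimizing the cosine similarity between masked and unmasked embeddings of the same person is one plausible way to make every wearer register as an unknown identity; because the single pattern is optimized over many different faces, it becomes universal rather than tailored to one person.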

This mask works as intended whether printed on paper or fabric. But, even more importantly, the mask will not raise suspicion in our post-COVID world and can easily be removed when the adversary needs to blend in.

Possible countermeasures

While their mask works well, it is not the only possible “version”.

“The main goal of a universal perturbation is to fit any person wearing it, i.e., there is a single pattern. Having said that, the perturbation depends on the FR model it was used to attack, which means different patterns will be crafted depending on the different victim models,” Alon Zolfi, the PhD student who led the research, told Help Net Security.

If randomization is added to the process, the resulting patterns can also be slightly different.

“Tailor made masks (fitting a single person) could also be crafted and result in different adversarial patterns to widen the diversity,” he noted.

Facial recognition models can be trained to recognize people wearing this and similar adversarial masks, the researchers pointed out. Alternatively, during the inference phase, every masked face image could be preprocessed so that it looks like the person is wearing a standard mask (e.g., a blue surgical mask), since current FR models already handle those well.
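As a rough illustration of that preprocessing idea, the sketch below (hypothetical helper names, PyTorch tensors assumed) overwrites segmented mask pixels with a uniform surgical-blue fill before the image reaches the FR model; the mask-segmentation step itself is out of scope here.

```python
import torch

# Hypothetical RGB value for a plain surgical-blue fill
SURGICAL_BLUE = torch.tensor([0.55, 0.71, 0.81])

def neutralize_mask(face, mask_pixels):
    """Replace detected face-mask pixels with a uniform surgical-mask color.

    face:        (3, H, W) image tensor in [0, 1]
    mask_pixels: (H, W) boolean tensor from some mask-segmentation step
    """
    out = face.clone()
    out[:, mask_pixels] = SURGICAL_BLUE.unsqueeze(1)   # broadcast over pixels
    return out
```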

At the moment, FR systems rely on the entire facial area to decide whether two faces belong to the same person, and Zolfi believes there are three possible solutions to make them “see through” a masked face image.

The first is adversarial learning, i.e., training FR models with facial images that contain adversarial patterns (whether universal or tailor-made); a minimal sketch of such a training step appears after the three approaches below.

The second is training FR models to make a prediction based only on the upper area of the face – but this approach has been shown to degrade the performance of the models even on unmasked facial images and is currently unsatisfactory, he noted.

Thirdly, FR models could be trained to generate the lower facial area based on the upper facial area (a generator sketch also follows below).

“There is a popular line of work called generative adversarial network (GAN) that is used to generate what we think of as ‘inputs’ (in this case, given some input we want it to output an image of the lower facial area). This is a ‘heavy’ approach because it requires completely different model architectures, training procedures and larger physical resources during inference,” he concluded.
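Returning to the first approach, adversarial learning can be sketched as an ordinary training step that mixes adversarially masked copies into each batch. This reuses the hypothetical `apply_mask` helper from the attack sketch and assumes a standard classification setup; it is an illustration, not the researchers' training code.

```python
import torch

def adversarial_training_step(model, criterion, opt, faces, labels, pattern):
    """One training step mixing clean and adversarially masked faces.

    Reuses the hypothetical apply_mask helper from the attack sketch;
    model is an identity classifier, criterion e.g. cross-entropy.
    """
    adv_faces = apply_mask(faces, pattern)    # adversarially masked copies
    batch = torch.cat([faces, adv_faces])     # train on clean + adversarial
    targets = torch.cat([labels, labels])     # identities are unchanged
    loss = criterion(model(batch), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```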
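For the third approach, the generator half of such a GAN might look like the following minimal encoder-decoder sketch, which predicts the occluded lower half of an aligned face from its visible upper half. The architecture is entirely hypothetical, and the discriminator and adversarial training loop that Zolfi alludes to are omitted.

```python
import torch
import torch.nn as nn

class LowerFaceGenerator(nn.Module):
    """Predicts the occluded lower half of a face from the visible upper half."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(   # small encoder-decoder over the upper half
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, upper_half):              # (B, 3, 56, 112)
        return self.net(upper_half)             # generated lower half, same shape

# In a GAN setup, a discriminator would score (upper, generated lower) pairs as
# real or fake, and the reconstructed full face would then be fed to the FR model.
```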
