An increasing number of people around the world are being displaced by factors such as climate change, natural disasters, and wars. The people attempting to cross international borders may be doing so under significant pressures, and with little time for preparation.
Increasingly, biometric data is being collected from people who are entering countries as refugees. Although the collected information varies from country to country, within the UK fingerprints and a facial image are collected as standard. By setting this requirement for people wishing to enter a country urgently, are we stripping people of the right to privacy? Are people in this sometimes “life or death” situation able to provide informed consent for the use of their biometric data?
First, we need to understand how biometrics can be used for identification.
Although people may use the term “biometric identification”, biometric systems are more commonly used for biometric authentication. Identification is the process of saying who you are, such as an event bouncer asking for your name. You can of course lie and provide an incorrect name. The bouncer may then choose to verify the identification you have provided, e.g., by asking to see your driving license and ensuring that the name and face on it match yours. If you were to step out and attempt to reenter the event, the bouncer may remember your face. They would then be able to authenticate you as being who you said you were, just by recognizing you. There’s no need to re-identify yourself, and no need to verify your identity again: your face, held in the bouncer’s memory, has become your method of authentication.
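The distinction above can be sketched in code. This is a toy illustration, not a real biometric system: the “face embeddings” here are made-up three-number vectors, and the names, threshold, and distance function are all assumptions chosen for clarity. Verification is a 1:1 comparison against a claimed identity (the bouncer checking your license), while identification is a 1:N search across everyone enrolled (the bouncer recognizing you).

```python
import math

# Toy face "embeddings": a real system would derive these from a neural
# network; these hypothetical 3-number vectors just stand in for them.
enrolled = {
    "alice": (0.1, 0.9, 0.3),
    "bob":   (0.8, 0.2, 0.5),
}

def distance(a, b):
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(claimed_name, probe, threshold=0.2):
    """1:1 check: does the probe match the template for the claimed identity?"""
    template = enrolled.get(claimed_name)
    return template is not None and distance(probe, template) <= threshold

def identify(probe, threshold=0.2):
    """1:N search: which enrolled identity, if any, does the probe match?"""
    best = min(enrolled, key=lambda name: distance(probe, enrolled[name]))
    return best if distance(probe, enrolled[best]) <= threshold else None

probe = (0.12, 0.88, 0.31)     # a fresh capture of Alice's face
print(verify("alice", probe))  # 1:1 - like the bouncer checking your license
print(identify(probe))         # 1:N - like the bouncer recognizing you
```

Note that identification is strictly harder: the system must search every enrolled template, and the chance of a coincidental match grows with the size of the database.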
Unfortunately, this solution does not scale well. Most humans cannot recognize two million individual human faces. Plus, having 24-hour bouncer coverage for your application would be expensive. This is where automated authentication systems may take over.
Commonly, biometrics can be used as a replacement for a username or a password. In some cases, they can be used as both; for example, an access control system where you scan a fingerprint, but no other identification takes place.
Deciding which factor biometrics should replace requires consideration of how “secret” a biometric modality is. This is a conundrum in many industries. The long number seen on flight tickets, technically, should be kept secret. It can be used to access information associated with the flight booking and, in an interesting case, to gain access to the contact information of the person taking the flight. Despite the potential sensitivity of the data, people will frequently post unobscured photos of their tickets when going on holiday. Similarly, people may post high-resolution photographs of their faces on social media. If a biometric system relied solely upon a flat image of a person’s face, would this be the equivalent of posting a password on social media?
Due to technical advances, a photo of a face is generally (and thankfully) not sufficient to bypass security systems. Hyperspectral cameras, which can easily distinguish a real human face from a photograph, are now in common use. 3D imaging can also be used here, as well as liveness detection mechanisms such as requiring the person to make a specific expression to be authenticated. Plus, people will always want to share photos of themselves on social media. Resisting the urge to post a photo of a plane ticket has a much lesser impact upon one’s self-expression than asking a person to never post a photo of their face online.
Other modalities, such as fingerprints, can more readily be recreated from photographs. Because a fingerprint consists of a set of ridges, manually recreating those ridges is far simpler than producing a true copy of a human face. Similarly, secondary liveness detection for fingerprint technologies may be easier to fool. For instance, a thin fake fingerprint could be worn over the top of a person’s finger to recreate the pulse and warmth that a system may attempt to detect. As with facial images, social media can provide attackers with valuable fingerprint images – any sufficiently high-resolution photograph of someone making the two-fingered “peace” sign could yield the needed data.
With this in mind, biometric modalities should not be used for single-factor authentication. Perhaps they should not even be considered as a form of password. But with such complexities surrounding their use, can we have faith in the use of biometrics for border controls?
E-passport gates at airports have become commonplace. You stand in the holding pen with your passport on a scanner, and a camera shines a light in your face and takes a photograph. If you are like me, you are then inevitably forwarded to be verified by a human operator. In this system, no new data is being handed over – the system is solely verifying the identification that you have provided so that you may be authenticated. Within Europe and Great Britain, this process tends to be well understood. You understand what you are submitting and why. The opt-in nature of this process gives people much more control over the use of their biometric data.
But what about those who are facing border controls due to an emergency?
The trend towards biometric data collection at borders has likely been driven in part by the ease of collection. People who are fleeing wars or other emergencies may not have all their identification with them. Being able to uniquely identify people without the need for paperwork is very appealing, especially when people may need to be identified repeatedly through multiple stages of the asylum-seeking process. Collection of data such as fingerprints can also be appealing to border security forces. For example, fingerprints collected from Improvised Explosive Devices within a war zone could be shared with border control systems, flagging potential matches. In this case, further investigation into the person attempting to claim asylum may be appropriate.
Whilst the collection of fingerprint data is very convenient for the border control forces, how convenient is it for the asylum seekers themselves? Could they be opening themselves up to greater risks by providing their data?
A potential issue here is the amount of trust that people place in fingerprints. People assume that fingerprints are an infallible method of identification. Whilst the chance of two people having matching fingerprints is infinitesimally small, automated matching systems often do not make use of the entire fingerprint. Different levels of detail can be used in matching, with differing levels of reliability.
When asked to provide our fingerprints for identification purposes, how often do we consider how the matching is performed? Whilst standards exist for the robustness of fingerprint matching when used within the Criminal Justice System, can we assume that the same standards apply to border control systems? Generally, the fewer comparison points analyzed, the faster the matching system; in a border control situation where a large number of people are being processed, it is important to understand how much accuracy has been traded away for speed. If the scales have been tipped too far, there is a very real risk that a person could be incorrectly flagged as a terrorist and returned to a country where they may face death.
Data sharing between nations means that a mistake made in one country has the potential to propagate to others. Whilst the UK appears to share only a small subset of the fingerprint data collected at its borders with other Five Eyes nations, a person rejected at one border through error could find themselves mistakenly rejected at other borders too. A person could be blocked from seeking asylum around the world on the basis of one erroneous fingerprint match. Whilst automatic fingerprint matching at the border has great potential to keep nations safe (and has successfully been used to block the entry of dangerous persons in the past), the technology mustn’t be seen as infallible.
Fingerprint data collected at borders is often more complete than the data being taken by other applications. When you enroll your fingerprint to unlock your phone, the phone is generally not storing the entire fingerprint; it is encoding details of the key points that will need to be used for matching. Border controls may capture a complete fingerprint.
In the past, this was accomplished by taking an ink print of the finger, whereas now it is completely digital. The potential privacy implications of such captures are much greater. If an attacker were to gain access to the encoded fingerprint data used to unlock your phone, they would likely not be able to use the data for other applications, as different systems make use of different encodings and fingerprint details.
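The asymmetry between a system-specific template and a full capture can be sketched as follows. This is a toy stand-in, not a real scheme: real phone templates are proprietary minutiae encodings, not salted hashes, and the byte strings and salt names here are invented. What the sketch shows is the structural point: a template stolen from one system is useless against another, but whoever holds the full print can derive a valid template for any system.

```python
import hashlib

# Hypothetical raw capture: the complete ridge image (here, just bytes).
full_image = b"...raw ridge bitmap..."

def make_template(image: bytes, system_salt: bytes) -> bytes:
    """Toy system-specific encoding of a print; stands in for a
    proprietary minutiae-based template format."""
    return hashlib.sha256(system_salt + image).digest()[:16]

phone_template = make_template(full_image, b"phone-vendor-A")
border_template = make_template(full_image, b"border-system-B")

# A stolen phone template does not transfer to the border system...
print(phone_template == border_template)      # False: encodings differ

# ...but an attacker holding the FULL image can derive a template
# valid for any system they choose to target.
attacker_template = make_template(full_image, b"border-system-B")
print(attacker_template == border_template)   # True
```

This is why a breach of a database holding complete fingerprints is categorically worse than a breach of one holding only derived templates.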
An attacker who gains access to a complete fingerprint, however, could use it to compromise any other authentication system that relies upon fingerprint data as a password or sole authentication factor. We are asking people travelling through borders to place a lot of trust in the security of the collected data. For asylum seekers coming from countries with more restrictive governments, providing such data could cause serious anxiety.
There are various risks associated with the collection, storage, and usage of biometrics data. But will the average asylum seeker know this? Even if they did, would they feel like they had a choice in providing the information? Handing over biometric data may seem like a small price to pay for the potential safety stemming from successfully gaining asylum in another country. Has private data become the price that people must pay to seek safety?
The use of fingerprints to identify people at borders has the potential to increase the speed at which asylum seekers can be processed, and to detect threats to national security which may otherwise have gone unnoticed. These are of course good things. But the people going through these checkpoints are forced to place a lot of trust in the collecting country to handle their data accurately and securely. The people and organizations creating this technology have a duty not to let these people down. Not to take shortcuts in the accuracy of systems, just to process people a little faster. To ensure that data is stored as securely as possible. To make sure that data is not maliciously used for purposes outside of its intended scope.