Artificial intelligence makes it possible to produce deepfakes that are increasingly realistic and difficult to differentiate from authentic content, according to Regula.
About 80% of companies view fabricated biometric artifacts, such as deepfake videos or voices, as genuine threats. This concern appears highest in the United States, where approximately 91% of organizations believe it to be an escalating danger.
The increasing accessibility of AI technology poses a new threat: it may become easier for individuals with malicious intent to create deepfakes, amplifying the threat to businesses and individuals alike.
“AI-generated fake identities can be difficult for humans to detect, unless they are specially trained to do so. While neural networks may be useful in detecting deepfakes, they should be used in conjunction with other antifraud measures that focus on physical and dynamic parameters, such as face liveness checks, document liveness checks via optically variable security elements, etc.,” says Ihar Kliashchou, CTO at Regula.
“Currently, it is difficult or even impossible to create deepfakes that display expected dynamic behavior, so verifying the liveness of an object can give you an edge over fraudsters. In addition, cross-validating user information with biometric checks and recent transaction checks can help ensure a thorough verification process,” Kliashchou continued.
At the same time, advanced identity fraud is not only about AI-generated fakes. Globally, 46% of organizations experienced synthetic identity fraud in the past year. Also known as “Frankenstein” identity fraud, this is a scam in which criminals combine real and fake ID information to create entirely new, artificial identities, typically used to open bank accounts or make fraudulent purchases.
Unsurprisingly, the banking sector is the most vulnerable to this kind of identity fraud: 92% of companies in the industry perceive synthetic fraud as a real threat, and 49% have recently encountered the scam.
To prevent the majority of today’s identity fraud, companies should pair document verification with comprehensive biometric checks.
Thorough ID verification
Extended document verification is vital when proving someone’s identity remotely. A company should be able to run the widest possible range of authenticity checks, covering all the security features embedded in IDs.
Even in a zero-trust-to-mobile scenario with NFC-based verification of electronic documents, chip authenticity can be reverified on the server side, which is currently the most secure way to prove that a document is genuine.
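To illustrate the idea behind server-side chip verification, here is a minimal Python sketch of the hash-comparison step of passive authentication for electronic documents: each data group read from the chip is hashed and compared against the hash list from the chip’s signed Document Security Object (SOD). All data and function names here are illustrative, not Regula’s implementation, and the SOD signature check against the issuing country’s certificate chain is omitted for brevity.

```python
import hashlib

# Hypothetical data groups read from the chip over NFC
# (DG1 holds the machine-readable zone, DG2 the portrait).
data_groups = {
    1: b"P<UTOERIKSSON<<ANNA<MARIA<<<<<<<<<<<<<<<<<<<",  # illustrative DG1 bytes
    2: b"\xff\xd8 illustrative portrait bytes",          # illustrative DG2 bytes
}

# Hash values as they would appear in the chip's signed SOD; in a real flow
# the SOD's signature is verified first, so these hashes can be trusted.
sod_hashes = {dg: hashlib.sha256(content).hexdigest()
              for dg, content in data_groups.items()}

def passive_authentication(groups: dict, signed_hashes: dict) -> bool:
    """Check that every data group matches its hash from the signed SOD."""
    return all(
        hashlib.sha256(content).hexdigest() == signed_hashes.get(dg)
        for dg, content in groups.items()
    )

print(passive_authentication(data_groups, sod_hashes))  # True for untampered data
```

Any modification of a data group, such as a swapped portrait, changes its hash and makes the check fail, which is what makes the server-side recheck so strong.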
Moreover, for international businesses, utilizing a comprehensive document template database that covers numerous countries and territories is crucial. It lets organizations anywhere in the world validate and authenticate nearly any identity document, whether on-site or remotely, preventing fraud and mitigating security risks.
The indispensable second half of the process is liveness verification, which proves that no malefactor is presenting non-live imagery (a mask, printed image, or digital photo) during the check. Verification should go further still: biometric solutions should also match a person’s selfie against their ID portrait and against any database the organization uses, to confirm the person is who they claim to be.
To ensure that fraudsters cannot reuse liveness sessions for tampering, each company’s enrollment process should be configured with its own unique parameters. The solution should also support binding multiple attributes to a photo, such as a person’s name, age, gender, driver’s license number, or credit score, for a more reliable and secure enrollment process.
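One general way to make an enrollment session non-reusable is to bind the photo and attributes together with a session-unique nonce under a keyed hash. The sketch below shows that idea with Python’s standard `hmac` module; the function and field names are hypothetical and not a description of any vendor’s actual scheme.

```python
import hashlib
import hmac
import json
import os

def bind_enrollment(photo: bytes, attributes: dict, secret: bytes) -> dict:
    """Bind a photo and identity attributes to a one-time session nonce.

    Replaying the same liveness imagery in another session produces a
    different nonce and therefore a different tag, so a captured
    enrollment record cannot simply be reused.
    """
    nonce = os.urandom(16)  # fresh per session
    payload = nonce + photo + json.dumps(attributes, sort_keys=True).encode()
    tag = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"nonce": nonce.hex(), "tag": tag}

secret = os.urandom(32)  # per-deployment key (illustrative)
record = bind_enrollment(b"selfie-bytes", {"name": "Anna", "age": 30}, secret)
```

Because the tag covers the nonce, the photo, and the attributes together, changing any bound attribute, or replaying the photo in a new session, invalidates the stored tag.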
Overall, an effective identity verification process today combines these techniques with the widest possible cross-validation of a user’s information and attributes.
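The layered process described above can be sketched as a simple accept-only-if-all-pass pipeline. Every check name and session field below is a hypothetical stand-in for a vendor-specific implementation, shown only to make the combination of checks concrete.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    check: str
    passed: bool

def verify_identity(session: dict) -> list:
    """Run the layered checks; each lookup stands in for a real check."""
    checks = [
        ("document_authenticity", session.get("doc_security_features_ok", False)),
        ("document_liveness",     session.get("doc_liveness_ok", False)),
        ("face_liveness",         session.get("face_liveness_ok", False)),
        ("face_match",            session.get("selfie_matches_portrait", False)),
        ("data_cross_validation", session.get("attributes_consistent", False)),
    ]
    return [VerificationResult(name, ok) for name, ok in checks]

session = {
    "doc_security_features_ok": True,
    "doc_liveness_ok": True,
    "face_liveness_ok": True,
    "selfie_matches_portrait": True,
    "attributes_consistent": True,
}
results = verify_identity(session)
accepted = all(r.passed for r in results)
print(accepted)  # True only when every layered check passes
```

The design choice here is that a single failing layer, such as a failed face-liveness check against a deepfake, rejects the whole session, no matter how convincing the other artifacts are.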