Digital trust is cracking under the pressure of deepfakes, cybercrime

69% of global respondents to a Jumio survey say AI-powered fraud now poses a greater threat to personal security than traditional forms of identity theft. This number rises to 74% in Singapore, with 71% also indicating that AI-generated scams are harder to detect than traditional scams.


Rising AI concerns erode digital trust

69% of global consumers indicated they are more skeptical of the content they see online than they were last year due to AI-generated fraud. Just 37% of consumers said their belief that most social media accounts are authentic has strengthened over the past year, and only 36% said they have grown more trusting of the news they encounter online, given the possibility of encountering deepfakes or manipulated content.

The majority of respondents also cited day-to-day worries around a number of AI-powered fraud tactics, including:

  • Fake digital IDs generated with AI (76% globally, 84% in Singapore)
  • Scam emails using AI to trick people into giving away passwords or money (75% globally, 82% in Singapore)
  • Video and voice deepfakes (74% globally, 83% in Singapore)
  • Being fooled by manipulated social media content (72% globally, 81% in Singapore)

This indicates that consumers increasingly recognize the risks of conducting life and business online, but may lack the tools or evidence needed to identify secure, authentic content.

“As generative AI continues to lower the barrier for sophisticated scams, Jumio’s findings highlight an urgent need for businesses to rethink digital identity protection — not only to reduce fraud, but also to preserve customer trust and digital engagement itself,” explained Bala Kumar, chief product and technology officer at Jumio.

Consumers trust themselves most to guard against AI-powered fraud

In the absence of strong regulatory protections, consumers are taking matters into their own hands. When asked who they trust most to protect their personal data from AI-powered fraud, 93% said themselves, ahead of those who trust government agencies (85%) or big tech (88%). But self-reliance does not mean consumers want to go it alone. In fact, when asked who should be most responsible for stopping AI-powered fraud, 43% pointed to big tech, compared to just 18% who chose themselves.

The study identifies this trust gap as symptomatic of an evolving threat landscape, where fraud-as-a-service (FaaS) ecosystems flourish across dark web marketplaces. These plug-and-play toolkits enable even novice fraudsters to launch sophisticated attacks using synthetic identities, deepfake videos, and botnet-driven account takeovers.

This shift is forcing companies to modernize fraud defenses and rethink how they protect consumers in an AI-driven world. In parallel, Jumio’s research found that consumers are open to the additional steps this may require. Most respondents globally said they would be willing to spend more time completing comprehensive identity verification processes, especially in sectors where stakes are high, like banking and financial services (80%), government services (78%), and healthcare (76%).

“Our industry must develop the tools we need to stay ahead of the AI-fraud arms race, because traditional identity verification isn’t going to cut it anymore,” concluded Jumio CEO Robert Prigge.
