The one-time identity check that has anchored digital commerce for decades is cracking under the pressure of AI-driven fraud. Entrust’s 2026 Identity Fraud Report reveals that 82% of payment fraud attempts now occur after onboarding. Rather than breaching defences with forged documents, criminals increasingly buy legitimately verified accounts or seize them through sophisticated account takeovers, rendering traditional checkpoint-based security obsolete.
“Although fraud prevention systems are stronger than ever, people remain one of the most vulnerable links in the chain. Indicators from 2025 reveal that social engineering and coercion pose an increasing threat across the customer lifecycle. Unlike technical fraud, these attacks manipulate victims into using their own genuine identity credentials. Fraudsters exploit human trust in ways that technology alone struggles to block,” says Vincent Guillevic, head of the fraud lab at Entrust.
The economics underscore how fundamentally the threat has evolved. “Verified accounts have become more valuable than forged documents…as they inherit all the trust signals of successful know-your-customer (KYC) processes,” says Penny Chai, vice president for Asia Pacific at Sumsub, which analysed more than four million fraud attempts worldwide for its Identity Fraud Report 2025-2026.
Making matters worse, one in four individuals in Asia Pacific was targeted for money mule recruitment this year, one of the highest rates globally, as criminal networks industrialise the exploitation of legitimate identities. Fraudsters can purchase access to a legitimately verified account for between US$40 ($51) and US$100.
AI industrialises fraud at scale
One of the clearest signs of escalation is the surge in deepfake activity across the region. Sumsub’s data shows Malaysia recorded a 408% year-on-year increase, while Singapore saw deepfakes jump 158%.
Entrust reports the same trend, with deepfakes accounting for one in five biometric fraud attempts in 2025. “These attacks pose a significant threat during both new account creation and authentication, as well as for social engineering attacks or investment scams. A defining theme of deepfakes is how easy it is becoming to create them due to readily available tools online,” says Guillevic.
The threat is also becoming more industrialised. Jumio has begun detecting “deepfake factories”: operations that generate multiple selfies of different faces against the same background. This exploits systems that assess facial attributes but overlook environmental consistency, says Jack Ang, the company’s senior director of account management for Asia Pacific.
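The environmental-consistency check Ang describes can be illustrated with a minimal sketch. The function below is hypothetical, not Jumio’s actual detection logic: it assumes each selfie submission has already been reduced to a background feature vector (in production this might come from an image-embedding model; here they are plain lists of floats) and flags pairs of submissions that show different faces against near-identical backgrounds, the “deepfake factory” signature.

```python
from itertools import combinations

def background_similarity(a, b):
    # Cosine similarity between two background feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def flag_factory_suspects(submissions, threshold=0.98):
    """Flag submissions whose selfies show *different* faces but
    near-identical backgrounds. Threshold is an illustrative assumption."""
    suspects = set()
    for s1, s2 in combinations(submissions, 2):
        if s1["face_id"] != s2["face_id"] and \
           background_similarity(s1["bg_vec"], s2["bg_vec"]) >= threshold:
            suspects.update({s1["user"], s2["user"]})
    return suspects
```

A system that scores only facial attributes would pass each of these selfies individually; correlating backgrounds across submissions is what surfaces the factory.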
Attackers are also shifting the battle to video. Camera injection attacks — in which a live stream is covertly swapped for pre-recorded or AI-generated footage — show how deeply fraudsters are probing liveness detection systems. “Instead of holding a printed photo to the camera, attackers now hijack the video feed itself. They’re probing for any weakness in liveness detection, and they’re very good at it,” says Ang.
The increasing availability of consumer-facing AI video generators may further drive the rise of deepfakes. Tools like Google’s Veo 3 and OpenAI’s Sora 2 can create hyper-realistic videos with synchronised physical motion and audio from simple text prompts. This makes it easier for non-technical bad actors to produce convincing fake identities and video content, turning deepfakes into a mass-market problem.
The continuous verification imperative
Fraud is no longer confined to onboarding. Jumio’s Ang shares that fraudsters are building or buying “clean accounts” that pass all checks, only to activate them later as “sleeper agents.” Others use AI to generate flawless synthetic personas or subtly poison application data in ways that legacy systems trained on real data and obvious forgeries simply cannot detect.
Meanwhile, Sumsub’s research shows a surge in telemetry tampering, wherein attackers manipulate software development kits, application programming interfaces and device signals. There is also a rise in AI fraud agents, or autonomous systems that use generative AI, automation and behavioural mimicry to run entire verification attempts end-to-end. “Fraudsters now combine multiple coordinated techniques such as synthetic identities, layered social engineering, device or telemetry tampering, and cross-channel manipulation,” Chai says, noting such sophisticated attacks rose 180% year-on-year.
Combatting this requires a shift to continuous trust models. Verification firms recommend multiple detection layers, including biometric verification with passive and active liveness checks that introduce randomness and prevent attackers from reusing static or pre-recorded content. These systems weave biometric proof together with device fingerprinting, network intelligence and behavioural analytics. Even if a camera-injection attack fools one layer, another can detect a tampered device or anomalous network signal.
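The layered model described above can be sketched as a simple weighted risk score. The weights, thresholds and signal names below are illustrative assumptions, not any vendor’s defaults; the point is that a camera-injection attack which zeroes out the liveness signal can still be caught by elevated device and network risk.

```python
def layered_risk_score(signals, weights=None):
    """Combine independent detection layers into one 0..1 risk score.
    Each signal is a 0..1 risk estimate from a separate layer."""
    if weights is None:
        weights = {"liveness": 0.4, "device": 0.3,
                   "network": 0.2, "behaviour": 0.1}
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def decide(score, block_at=0.7, review_at=0.4):
    # Illustrative policy: block outright, route to manual review, or allow.
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "manual_review"
    return "allow"
```

For example, a session that passes liveness (risk 0.0) but shows a tampered device (0.95) and an anomalous network signal (0.9) still scores above the review threshold, which is exactly the redundancy the article’s sources advocate.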
“We need to shift from point-in-time verification to continuous risk assessment. [Fraud systems today must not] just look at a single transaction. [They should] analyse subtle correlations across the entire customer lifecycle and our cross-industry network [so that unusual patterns can be identified early],” says Jumio’s Ang.
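The shift from point-in-time verification to continuous assessment that Ang describes can be sketched as a running risk estimate updated on every lifecycle event. This toy monitor is an assumption for illustration only (decay and alert threshold are invented): a “sleeper” account that sails through onboarding still trips an alert once a burst of mule-like activity accumulates.

```python
class ContinuousRiskMonitor:
    """Toy continuous-risk tracker: every lifecycle event updates a
    running estimate, exponentially weighted so recent behaviour
    dominates. Parameters are illustrative, not production values."""

    def __init__(self, decay=0.7, alert_at=0.6):
        self.decay = decay
        self.alert_at = alert_at
        self.risk = 0.0

    def observe(self, event_risk):
        # Blend the new event's 0..1 risk with the decayed history.
        self.risk = self.decay * self.risk + (1 - self.decay) * event_risk
        return self.risk >= self.alert_at
```

A single onboarding check would have scored this account once and never again; the continuous model keeps re-scoring it as behaviour changes.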
Entrust’s Guillevic agrees. He adds: “Taking a layered approach to protect the full customer lifecycle enables organisations to prevent fraud before it starts and mitigate risks as they emerge. Our study shows that organisations that implement robust identity verification save an average of US$8 million annually in fraud-related costs.”
Building cross-border trust
Asia Pacific’s digital landscape multiplies risks rather than containing them because fraud does not respect national borders. Super-apps, real-time payments, cross-border wallets and regional mobility mean synthetic identities or compromised accounts created in one market can be exploited in another within minutes.
Fragmented threat intelligence worsens the exposure. “Today, when a bank in Singapore stops a deepfake attack, that fraudster’s digital fingerprint often remains isolated. This siloed defence is exactly what orchestrated fraud rings exploit,” Ang says.
In response, verification platforms are positioning themselves as the connective tissue for a more coordinated trust infrastructure. “By processing millions of verification checks daily and analysing large volumes of fraud data from a wide range of industries and regions, our platform can detect unusual patterns that single-point KYC checks would miss. This means that when an account takeover occurs, or a verified user is recruited into a fraud network, anomalies in behaviour or coordinated activities are identified early,” says Sumsub’s Chai.
Yet, stronger verification cannot come at the expense of user experience. Organisations need security that adapts invisibly in the background, not systems that introduce friction for legitimate customers. For example, Jumio’s identity-reuse system allows returning users already verified by a major Singapore bank to sign up for a new app with only a selfie, avoiding repeated document uploads and reducing abandonment. “It’s faster, safer and creates a network effect of trust that turns verification from a compliance requirement into a competitive advantage,” says Ang.
Organisations must also be able to keep up with fast-emerging identity threats. They can leverage Entrust’s identity verification platform, Studio, to design and manage verification journeys using an intuitive drag-and-drop, flowchart-style interface. This eliminates the need for complex coding and enables rapid adjustments to new attack vectors and compliance requirements, ensuring businesses can adapt quickly without operational disruption, notes Guillevic.
According to Jumio’s research, 74% of Singapore residents say AI-powered fraud is more frightening than conventional identity theft. As AI continues to reshape both the scale and sophistication of digital identity-related attacks, the region’s ability to build seamless, adaptive and continuous trust frameworks will determine how securely its digital economy can grow.
