
Mitigating the risks of AI face-swapping fraud in financial services

Chen Yong and Yao Taiping
To effectively combat AI face-swapping fraud, financial institutions must adopt a multi-layered approach that integrates sophisticated AI tools, industry collaboration, and customer education. Photo: Shutterstock

Artificial intelligence (AI) has been a transformative force in the financial industry, driving efficiency and improving service quality. AI-powered facial recognition has made a significant impact, streamlining processes from account opening and fund transfers to claims settlement while enhancing the security of financial services.

However, AI-generated content (AIGC) has also enabled increasingly sophisticated fraudulent schemes, with AI face-swapping scams becoming a major concern. Once associated mainly with entertainment and social media, deepfake technology has evolved into a tool for fraud in areas such as identity verification, remote account opening, and secure transactions. The rise of AIGC extends beyond images to videos, audio, and other media formats, amplifying the potential for misuse and creating complex challenges for detecting fraudulent activity.

The perils of AI-powered fraud for financial institutions

Southeast Asia’s financial services sector is set to grow from US$11 billion to an estimated US$60 billion by 2025, driven by a rising middle class, increasing mobile adoption, and a young, tech-savvy population. It is therefore more important than ever to safeguard against AI-driven fraudulent activities.

As today’s AI-powered fraud techniques grow more intricate, it is becoming harder for financial institutions (FIs) to differentiate between genuine and manipulated content. Deepfake-generated face-swaps, for example, are designed to bypass even advanced facial recognition systems.

This leaves FIs vulnerable to AI-powered fraud. While FIs are under pressure to adopt cutting-edge digital tools that enhance convenience and user experience, they must also strike a balance between rapid innovation and rigorous validation of new technologies. The race to outpace fraudsters requires substantial investment in advanced AI detection systems, expertise, and time, resources which are not always readily available. On top of that, FIs need to ensure that newly implemented security measures do not disrupt customer interactions or create friction in service delivery, as intrusive authentication processes risk alienating users.


FIs tackling AI-powered fraud must balance security with seamless service delivery, ensuring that protective measures don’t hinder efficiency. This calls for innovative solutions that protect assets while fostering customer trust through a secure and efficient experience.

Using AI to combat AI

At the same time, the very technology fuelling face-swapping fraud also holds the key to combating it. AI offers promising avenues for fortifying financial systems, from advanced detection in facial recognition to strengthened security measures that defend against fraud:


  • Multi-modal AI detection: By integrating image, video, text, and frequency domain data, multi-modal AI detection enhances accuracy in identifying fraudulent activities.

    Each modality targets specific forgery traces: the image modality detects anomalies such as texture inconsistencies or face-swap boundary artefacts; the video modality captures temporal inconsistencies such as pixel jitter; the frequency domain focuses on high-frequency details; and the text modality provides semantic insights into forged regions and methods, improving the model’s understanding of synthesis. This approach enhances the model’s ability to detect minute inconsistencies, even in heavily manipulated media, making it more resistant to face-swapping attacks.
     
  • Anomaly detection algorithms based on real-life content learning: AI-enabled tools can monitor for unusual patterns in user behaviour and authentication processes by analysing differences between real and fake faces.

    Current forgery detection methods often fixate on specific forgery patterns, risking overfitting and poor adaptability to new patterns. To address this, the reconstruction-classification learning framework trains on recreating real face images, so forged inputs produce noticeably larger reconstruction errors, allowing for more accurate detection.
     
  • AIGC-focused threat detection: This threat detection solution leverages advanced analytics to track and counter AI-driven attacks, such as face-swapping and synthetic media.

    For instance, researchers from a leading AI research institution in China have developed a benchmark titled "Deep Forgery Traceability in Open Scenarios" (OW-DFA). This benchmark encompasses over 20 significant forgery techniques, including face swapping, expression driving, and attribute editing, to assess how well various types of fabricated faces can be detected and traced in open-world scenarios.
     
  • Financial-grade AI shields: These systems use multi-modal technology to enhance live identification, moving beyond traditional photo verification. They effectively address authentication issues in non-live scenarios (e.g. identity and avatar verification) by simulating human-like thinking.

    In bank risk management, after facial recognition during identity verification, the AI shield adjusts its interface and returns results. If discrepancies exist between the image, video, and stored biometric data, it can initiate further identity checks or manual reviews, helping banks prevent financial losses and mitigate risks.

    For example, Tencent’s eKYC (electronic Know Your Customer) solution employs advanced facial recognition algorithms, achieving an outstanding 99.80% accuracy on the Labelled Faces in the Wild (LFW) dataset since 2017. It has consistently set industry benchmarks and is instrumental in preventing spoofing attempts involving images, videos, and static 3D models.
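To make the frequency-domain modality mentioned above concrete, here is a minimal sketch of one such cue. It is not any vendor's actual detector: it simply measures how much of an image's spectral energy sits at high frequencies, a signal that some face-swap pipelines distort. The function name and the cutoff value are illustrative assumptions.

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc.

    Some synthesis pipelines leave unusual high-frequency artefacts,
    so this ratio can serve as one of several forgery cues.
    `img` is a 2-D greyscale array; `cutoff` sets the radius of the
    low-frequency disc as a fraction of the image's smaller dimension.
    """
    # Power spectrum, with the zero-frequency bin shifted to the centre
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * min(h, w) / 2
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    total = spectrum.sum()
    return float(spectrum[~low_mask].sum() / total) if total > 0 else 0.0
```

In a real multi-modal system this score would be fused with image, video, and text features rather than thresholded on its own.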
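The reconstruction-based anomaly detection idea described above can also be sketched in a few lines. This toy uses PCA as a stand-in for the autoencoder in reconstruction-classification learning: it is fitted only on "genuine" samples, so inputs that deviate from the learned subspace reconstruct poorly and score higher. The class name and component count are illustrative assumptions, not part of any published framework.

```python
import numpy as np

class ReconstructionDetector:
    """Toy reconstruction-based anomaly detector (PCA as a stand-in
    for the autoencoder in reconstruction-classification learning).

    Fitted only on genuine samples; forged inputs that fall outside
    the learned subspace yield larger reconstruction errors.
    """

    def __init__(self, n_components: int = 8):
        self.n_components = n_components

    def fit(self, X: np.ndarray) -> "ReconstructionDetector":
        # Learn the top principal directions of the genuine data
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        return self

    def reconstruction_error(self, X: np.ndarray) -> np.ndarray:
        # Project onto the learned subspace and measure the residual
        centred = X - self.mean_
        recon = centred @ self.components_.T @ self.components_
        return np.linalg.norm(centred - recon, axis=1)
```

In practice a threshold would be calibrated on held-out genuine data (for example, the 99th percentile of training errors) before flagging inputs for review.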
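The risk-management routing described for financial-grade AI shields, approving, stepping up, or escalating to manual review depending on the checks, can be sketched as a simple decision rule. This is not Tencent's implementation; the score names and thresholds are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class VerificationScores:
    face_match: float    # similarity vs the stored biometric template
    liveness: float      # anti-spoofing / live-detection score
    forgery_risk: float  # multi-modal deepfake-detection score

def route_verification(s: VerificationScores,
                       match_thr: float = 0.85,
                       live_thr: float = 0.90,
                       risk_thr: float = 0.20) -> str:
    """Illustrative routing: approve, step-up check, or manual review."""
    if s.forgery_risk >= risk_thr:
        return "manual_review"   # suspected synthetic media: escalate
    if s.face_match >= match_thr and s.liveness >= live_thr:
        return "approve"
    return "step_up_check"       # e.g. request another live capture
```

A production system would tune these thresholds against fraud-loss and customer-friction targets rather than fixing them in code.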

Combating the risks of AI face-swapping fraud

To effectively combat AI face-swapping fraud, FIs must adopt a multi-layered approach that integrates sophisticated AI tools, industry collaboration, and customer education.

One of the most critical steps involves investing in sophisticated AI tools designed to identify and analyse deepfake content. By integrating these advanced solutions into existing security systems, institutions can enable more precise identification of fraudulent activity.

Collaboration across the industry is vital. Working with AI technology providers, regulators, and industry peers enables the exchange of insights, the development of standardised solutions, and a united effort against emerging threats.

Customer education also plays a pivotal role in mitigating face-swapping fraud. By raising awareness about the risks and encouraging customers to report suspicious activities, FIs can create an additional layer of defence. Proactive communication can empower users to recognise potential scams and act swiftly to protect their accounts.

Ultimately, AI face-swapping fraud illustrates the dual nature of AI: both a threat and a solution. Through strategic investment, continuous adaptation of security systems, and proactive customer education, FIs can safeguard themselves against evolving fraud risks. By staying ahead of technological advancements and fostering industry-wide collaboration, they can secure their future in an increasingly AI-driven world.

Chen Yong is a Tencent Senior Researcher; and Yao Taiping is a Tencent Senior Algorithm Researcher
