Code Defence Cyber security

Report: Deepfakes Now Drive 20% of All Biometric Fraud Attempts

Entrust's 2026 Identity Fraud Report finds that AI-driven deepfakes now account for one in five (20%) biometric fraud attempts globally. The report also notes a spike in coordinated cybercrime attacks between 2:00 AM and 4:00 AM UTC, timed to exploit time-zone gaps in defensive coverage.

Business Impact

Trust in standard biometric verification (like selfie checks for onboarding) is eroding. Financial institutions and enterprises relying solely on facial recognition for identity verification face a surging risk of account takeovers and synthetic identity fraud driven by cheap, accessible AI tools.

Why It Happened

Generative AI has lowered the barrier to entry for creating realistic “injection attacks,” where attackers feed a deepfake video stream directly into an authentication system, bypassing the physical camera to fool liveness checks.

Recommended Executive Action

Upgrade identity verification systems to include “active liveness” detection (asking the user to perform random actions) and consider multimodal biometrics (voice + face). Ensure security teams have 24/7 coverage or automated response capabilities to handle off-hour attack spikes.
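The "active liveness" pattern above can be sketched as a simple server-side challenge-response flow. This is a minimal illustrative example, not a production implementation: the action names and function names (`ACTIONS`, `issue_challenge`, `verify_response`) are hypothetical, and in a real system the observed actions would come from a computer-vision model analyzing the video stream.

```python
import hmac
import secrets

# Illustrative sketch of active-liveness challenge-response (names are hypothetical).
ACTIONS = ["blink", "turn_head_left", "turn_head_right", "smile", "nod"]

def issue_challenge(length: int = 3) -> list[str]:
    """Pick an unpredictable sequence of actions for the user to perform.

    Using secrets (not random) ensures the challenge cannot be predicted
    and pre-rendered into a deepfake video ahead of time.
    """
    return [secrets.choice(ACTIONS) for _ in range(length)]

def verify_response(challenge: list[str], observed: list[str]) -> bool:
    """Check the actions detected in the video against the issued challenge.

    In practice `observed` would be produced by a liveness-detection model;
    here it is a plain list so the control flow is visible.
    """
    if len(observed) != len(challenge):
        return False
    # Constant-time comparison so a mismatch doesn't leak how many
    # leading actions matched.
    return hmac.compare_digest(",".join(challenge), ",".join(observed))
```

Because each challenge is randomized per session, an attacker injecting a pre-recorded deepfake stream cannot know in advance which actions to synthesize, which is precisely what defeats the injection attacks described above.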

Hashtags: #AI #Deepfake #Biometrics #Fraud #IdentityTheft #CyberSecurity #Fintech #InfoSec
