The growth of AI-based technology has introduced new challenges, making remote identity verification systems more vulnerable to attacks, according to iProov.
Innovative, easily accessible tools have allowed threat actors to become sophisticated almost overnight, and new methodologies are powering a growing number of threat vectors.
While much attention has focused on consumer identity fraud, the most significant and costly attacks of 2024 targeted workforce remote identity verification systems. This shift toward corporate targets reveals a concerning trend: threat actors are exploiting remote work processes and corporate communication channels for maximum impact.
By targeting remote hiring processes, virtual workplace communications, and executive video conferences, attackers are achieving significantly higher payouts than traditional consumer fraud. This shift from individual to organizational targets exposes a dangerous gap in workforce identity verification—one that current corporate security frameworks are struggling to address.
Key attack trends against remote identity verification systems
Native camera attacks evolved from their experimental phase in 2023 to become a major threat in 2024, increasing by 2665% due partly to mainstream app store infiltration. Most concerningly, these attacks don’t require rooted or jailbroken devices, making them accessible to threat actors without advanced technical skills.
Face swap attacks surged 300% compared to 2023, with threat actors shifting focus to systems using liveness detection protocols. Threat actors leverage shared intelligence to exploit vulnerable systems using a variety of face swap tools.
An additional 31 online threat actor groups were identified in 2024, the largest of which has 6,400 users. The online crime-as-a-service ecosystem grew, with nearly 24,000 users now selling attack technologies. Image-to-video conversion emerged as a new synthetic identity attack vector with a simple, two-step process that could impact many liveness detection solutions already in the market.
What were once simple, lone-wolf attacks have evolved into a complex, multi-actor marketplace. iProov’s report underscores a move towards long-term fraud strategies, with threat actors embedding stolen, bought, and synthetically derived identities into the fabric of everyday online identity access points.
Some of the most insidious attacks use sleeper tactics: code that remains dormant for extended periods of time, quietly prepared to wreak havoc on networks. In contrast, other criminals are replicating attacks faster than ever, launching parallel operations across different sectors and expanding their reach into remote work systems and corporate communications.
“The commoditization and commercialization of deepfakes, for instance, pose a significant threat to organizations and individuals,” said Dr. Andrew Newell, Chief Scientific Officer at iProov. “What was once the domain of high-skilled actors has been transformed by an accessible marketplace of tools and services that low-skilled actors can now use with minimal technical expertise for maximum results.”
The scale of attacks against remote identity verification is vast, with iProov identifying exponential growth across multiple attack vectors and an increased focus on high-value corporate targets. Among the findings, the report cites that over 115,000 potential attack combinations are possible.
The growing danger of synthetic identity fraud
Synthetic identity fraud (SIF) is the fastest-growing type of fraud, with particularly alarming implications. This sophisticated scheme combines legitimate data (such as valid Social Security numbers, often stolen from children, elderly, or deceased individuals) with fabricated personal information to create convincing false identities.
What makes SIF especially challenging to combat is its ability to evade traditional fraud detection systems. Unlike conventional identity theft, where systems can flag stolen information based on reports from real victims, SIF creates entirely new identities that incorporate both real and fake elements, so there is no single victim to raise the alarm.
“As the rapid proliferation of offensive tools continues to accelerate, security measures are struggling to keep up,” said Dr. Newell. “We are moving to a world where the authenticity of digital media is becoming impossible to establish by the human eye, making this a problem not just for traditional targets but for any organization or individual that relies upon the authenticity of digital media to establish trust.”
Static, point-in-time security measures, a collective false sense of security, and human error all underscore the limitations of current defenses; in a recent iProov study, just 0.1% of participants could reliably distinguish real from fake content. The report further emphasizes that standard detection and containment protocols are not evolving as quickly as the threats, leaving organizations vulnerable for extended periods.
“Relying on outdated security measures is like leaving the front door open to fraudsters,” said Dr. Newell. “Success requires continuous monitoring, rapid adaptation capabilities, and the ability to detect and respond to novel attack patterns before they can be widely exploited.”
Fraud against individuals is significant in its own right, and for organizations it can lead to severe financial losses. According to the Federal Trade Commission’s Consumer Sentinel Network, over $10 billion was lost to identity theft in 2023, with notable settlement costs for organizations exceeding $350 million per breach.
The future of fast, efficient, and proven identity verification lies not in a single technology or approach but in a multi-layered, dynamic strategy.