Deepfakes at the Gate: How Fake Job Applicants Are Becoming a Serious Cyber Threat

In recent months, the hiring process has become a new attack surface. Cybercriminals are no longer just spoofing emails or exploiting software flaws—they’re applying for jobs.

Over the past year, U.S. organizations hiring for remote roles, especially in IT, have been hit by a wave of highly convincing fake job applicants. These aren’t your typical résumé fibbers. Powered by generative AI, these impostors use fake headshots, AI-crafted LinkedIn profiles, deepfake video interviews, cloned voices and fabricated employment histories to secure jobs and, with them, real access to internal systems. Once inside, they can cause serious damage, for example by stealing data or installing malware.

According to Gartner, by 2028 one in four job applicants could be fake. It’s not just fraud; it’s infiltration.

Unfortunately, this “Corporate Espionage 2.0” trend is gaining significant momentum. The Justice Department has already uncovered multiple cases of North Korean operatives using fake identities to secure remote IT jobs in the U.S. and funnel their U.S.-dollar salaries back to their home country.

The Soft Underbelly of Digital Onboarding

Companies first embraced digital recruitment and onboarding during the pandemic. They quickly learned that, pandemic or not, handling these processes digitally could save significant time, money and resources, while extending opportunities to workers no longer bound by geography.

At first, fake applicants’ tools and techniques weren’t very sophisticated. These applicants could steal someone else’s personally identifiable information (PII) and use it to progress through pre-employment background checks and screening processes. They could refuse to be on camera for interviews and limit communication to messaging apps. Some international fraudsters even found willing U.S. citizens to lend their identities and bank accounts, so they could pose as them and enjoy the financial benefits of being paid in U.S. dollars.

However, with digital recruitment and onboarding now the norm, AI-driven deepfakes mean hiring managers face highly credible, scarily convincing fake candidates. Deepfakes can fabricate faces and clone voices, while generative AI produces false employment histories and scripted answers during interviews. The technology has become not only far more sophisticated but also far more accessible. The barrier to entry is lower than ever, giving even bad actors with little to no technical know-how the means to engineer sophisticated, AI-fueled attacks.

Fighting AI with AI, and Other Ways to Enhance Your Arsenal

Fortunately, organizations can now “fight AI with AI” by deploying AI-based identity verification. Facial recognition software, built on AI algorithms trained to match faces and to verify the integrity of identity documents, can confirm that the person presenting is the same person represented in the accompanying documentation. It is one of the most sophisticated and reliable techniques available for confirming that a person is who they claim to be, and it is helping close the gap in the deepfake creation-and-detection arms race.
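
As a rough illustration of the face-matching step, here is a minimal Python sketch using the open-source face_recognition library. This is not any particular vendor’s implementation, and the file names are hypothetical; production systems layer document-integrity checks and liveness detection on top of a basic match like this.

```python
# Minimal face-match sketch using the open-source face_recognition library.
# File paths are hypothetical; this illustrates only the matching step.
import face_recognition

id_photo = face_recognition.load_image_file("id_document.jpg")  # photo from ID document
selfie = face_recognition.load_image_file("selfie.jpg")         # live capture of applicant

id_encodings = face_recognition.face_encodings(id_photo)
selfie_encodings = face_recognition.face_encodings(selfie)

if not id_encodings or not selfie_encodings:
    raise ValueError("No face detected in one of the images")

# face_distance returns a dissimilarity score; lower means more alike.
distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
print(f"Face distance: {distance:.3f}")
print("Match" if distance < 0.6 else "No match")  # 0.6 is the library's default tolerance
```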

AI-based identity verification can also cross-check the person against government databases to flag any criminal history. Going a step further, AI-based liveness detection can confirm that the person presenting is a real, live person, not a spoof or a deepfake photo or video built from content easily scraped from social media. Passive liveness detection works in the background, assessing a user’s presence and authenticity without requiring any active user interaction. In facial recognition, passive methods use subtle cues like skin texture, light reflection and facial micro-expressions to distinguish a real person from a spoof.
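
Production passive-liveness systems rely on trained models, but a toy frame-differencing heuristic can illustrate the underlying intuition: a live face shows natural frame-to-frame variation, while a replayed static photo shows almost none. The OpenCV sketch below, with a hypothetical video file and an arbitrary threshold, is illustrative only and is not how commercial detectors work.

```python
# Toy illustration of the intuition behind passive liveness detection:
# live faces exhibit small natural frame-to-frame variation (micro-movements,
# lighting shifts); a replayed static image does not. Threshold is arbitrary.
import cv2
import numpy as np

cap = cv2.VideoCapture("interview_clip.mp4")  # hypothetical input video
prev_gray, diffs = None, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Mean absolute pixel change between consecutive frames
        diffs.append(cv2.absdiff(gray, prev_gray).mean())
    prev_gray = gray
cap.release()

motion = float(np.mean(diffs)) if diffs else 0.0
print(f"Mean inter-frame variation: {motion:.4f}")
print("Likely live" if motion > 0.5 else "Suspiciously static")  # arbitrary cutoff
```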

Of course, there are additional tried-and-true, non-technical techniques that are always good practice when recruiting and onboarding remotely. Organizations should insist that cameras stay on during interviews and other assessments, and they should watch for obvious red flags: candidates who seek remote-only roles and refuse to travel, missing social media profiles, and past employers who cannot be reached for reference checks.
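
To make that screening concrete, here is a hypothetical rule-based scorer that tallies the red flags above. The field names and weights are invented for illustration; in practice these signals should prompt extra manual scrutiny, not automatic rejection.

```python
# Hypothetical red-flag tally reflecting the signals discussed above.
# Fields and weights are invented; a high score means "review more closely".
from dataclasses import dataclass

@dataclass
class Candidate:
    remote_only: bool
    refuses_travel: bool
    camera_off_in_interview: bool
    has_social_media_presence: bool
    references_reachable: bool

def red_flag_score(c: Candidate) -> int:
    score = 0
    score += 1 if c.remote_only and c.refuses_travel else 0
    score += 2 if c.camera_off_in_interview else 0
    score += 1 if not c.has_social_media_presence else 0
    score += 2 if not c.references_reachable else 0
    return score  # higher = more manual review warranted

applicant = Candidate(True, True, True, False, False)
print(f"Red-flag score: {red_flag_score(applicant)} / 6")
```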

KYE is the Way of the Hiring Future – And AI is a Critical Underpinning

Know-Your-Customer, or KYC, is a principle that has been around a long time, particularly in financial services. It is the process of verifying a customer’s identity and understanding the nature of their business activities in order to assess and mitigate risks like money laundering and other crimes. AI-enabled identity verification techniques like facial biometrics are well established in this space.

Piggybacking on that concept, knowing for certain who your employees are, dubbed Know Your Employee, or KYE, is equally important. It does more than stop fraudulent job applications; it also helps ensure compliance with local labor laws and makes global onboarding more efficient overall. As remote work continues to grow, particularly in sensitive areas like IT, organizations of all types will need to rely increasingly on rigorous KYE procedures. Such methods are the most effective way to prevent fraudulent job seekers from gaining a dangerous foothold in highly sensitive operations and data.

The Bottom Line

Cybercriminals are no longer just breaking in—they’re applying and getting hired. In a world where AI can generate a job candidate in minutes, digital hiring must be treated as a frontline security concern. If your onboarding system doesn’t verify identity with the same rigor as your login screens, you may be hiring your next breach.

About the Author

Iryna Bondar is the Senior Fraud Group Manager at Veriff.

She leads the Fraud team on Veriff’s fraud operations side, putting her analytical, problem-solving and communication skills to use with a mission to stay one step ahead of fraudsters in the ever-evolving cybersecurity landscape. Iryna can be reached online at LinkedIn and at the company website, https://www.veriff.com/.

