Meet the deepfake fraudster who applied to work at a deepfake specialist

Last year, security company KnowBe4 helped spark a wave of interest in fraudulent workers when it revealed extensive details of how it uncovered a rogue North Korean operative who had been hired by the company.

The rogue North Korean IT worker scam has piqued the interest of security experts and human resources (HR) professionals across the UK, the US and around the world. And it seems to be spreading, at least according to Sandra Joyce, vice-president of Google Threat Intelligence, who recently warned that the scam was going global.

“It’s not just about the US anymore. We’re seeing it expand to Europe and, in some cases, we’re seeing some real abuses,” she said, speaking to reporters on the fringes of Google Cloud Next back in April 2025. “We saw one individual who was running nine different personas and providing references for each of the personas for the others.”

This expansion in scope has been accompanied by an expansion in tactics: the fraudulent North Korean workers have been observed conducting extortion operations in addition to simply drawing their salaries to boost the isolated regime’s coffers, which is usually their most basic objective.

But before the North Koreans, or whoever else may be seeking to defraud a company in this way, can begin to do so, they must first get hired. To aid in this, fraudsters and other threat actors are now turning to generative artificial intelligence (GenAI), using large language models (LLMs) and deepfake videos to create plausible candidates who can easily slip through a recruiter’s net.

Meet Pindrop’s deepfake candidate

In many cases they are successful, or almost successful, as Pindrop, a supplier of voice security and fraud detection solutions, discovered when its recruiters found themselves face-to-face with a deepfake candidate who “applied” not just once, but twice.

According to Pindrop, one job posting alone attracted more than 800 applications in a matter of days. When the firm applied deeper analysis to 300 of the candidate profiles, it found that more than 100 were entirely fabricated identities, many using AI-generated resumes, manipulated credentials and even deepfake technology to simulate live interviews.

The Pindrop team put its Pindrop Pulse deepfake detection tech to use in an interview with an “individual” to “whom” it has since given the pseudonym Ivan X. At first glance, Ivan seemed a great fit for the job he applied for at Pindrop.

However, during Ivan’s first interview, the Pindrop Pulse software identified three red flags that enabled the team to tell immediately they were in danger of hiring a deepfake candidate.

First, Ivan’s facial movements seemed unnatural and slightly out of sync with the words he was saying, likely indicating the video had been manipulated. Second, the interview was dogged by audio-visual lag, and Ivan’s voice occasionally dropped out or did not align with his lip movements. Finally, when the interviewer asked an unexpected technical question, Pindrop Pulse identified an “unnatural” pause, as if the system was processing a response before playing it back.
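
Pindrop has not published how Pulse works internally, so the following is purely an illustrative sketch – with an invented InterviewWindow structure and invented thresholds – of how the first and third of these signals might be scored in code:

```python
# Illustrative sketch only: Pindrop has not published how Pulse works.
# It scores two of the signals described above – audio/video desync and
# an unnaturally long pause before answering – on pre-extracted features.
# All names and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class InterviewWindow:
    av_offset_ms: float      # measured lag between lip movement and speech audio
    response_delay_s: float  # pause between question end and answer start

# Invented thresholds; a real system would calibrate these on labelled
# genuine and synthetic interview recordings.
MAX_AV_OFFSET_MS = 120.0    # viewers tolerate roughly +/-100 ms of desync
MAX_RESPONSE_DELAY_S = 3.0  # an "unnatural" pause while a reply is generated

def red_flags(window: InterviewWindow) -> list[str]:
    """Return the red flags raised by one window of interview footage."""
    flags = []
    if abs(window.av_offset_ms) > MAX_AV_OFFSET_MS:
        flags.append("lip movement out of sync with audio")
    if window.response_delay_s > MAX_RESPONSE_DELAY_S:
        flags.append("unnatural pause before answering")
    return flags

print(red_flags(InterviewWindow(av_offset_ms=350.0, response_delay_s=4.2)))
# ['lip movement out of sync with audio', 'unnatural pause before answering']
```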

Vijay Balasubramaniyan, Pindrop CEO, says: “When this happened, the crazy thing is the recruiter was psyched because she got an alert that she was talking to a deepfake and – the deepfake candidate obviously didn’t know it – the position ‘he’ was applying for was not just a software engineer, it was a software engineer in the deepfake detection team, which is just super meta.”

Seconds out…round two

Pindrop had had a lucky escape. However, eight days later, Ivan resurfaced with a new application that arrived through a different recruiter. Curiosity aroused, the team decided to let him get through to the interview stage.

The second time round, it was immediately obvious that the candidate joining the interview was visually a completely different person but with the same identity and credentials as the first. Within minutes, Ivan X 2.0 encountered connection issues, dropped the call and rejoined, likely an attempt to recalibrate the deepfake software. When the interview was finally able to proceed, the same issues as before popped up, although the deepfake itself seemed to have been improved slightly.

This second encounter confirmed the team’s suspicion that it was not dealing with an isolated incident, but rather with a deliberate and coordinated attack on the Pindrop hiring process using deepfake tech.

Balasubramaniyan says he has since tasked many of his hiring team with interviewing deepfake candidates on the side, and he is genuinely enthusiastic about testing the company’s rapidly developing deepfake detection technology on them.

“The cool thing about Pindrop is we pull on a thread and we go deep – that’s how our products got created – so we’ve gone deep down this rabbit hole and we’re now seeing clearly documented proxy relays from North Korea. And we’ve interviewed all of them – we’re now setting up honeypots to interview them,” he says.

We are not prepared for what’s coming

Pindrop’s experience makes for a funny story, but according to Matt Moynahan, CEO of GetReal Security, another startup making waves in the expanding field of deepfake detection, it’s deadly serious. He is incredibly worried about what’s coming and tells Computer Weekly that we have no idea how bad this problem might get.

“The history of security is all about impersonation and always has been,” he says. “That’s been going on forever. But what’s happening now is you’ve got these incredibly sophisticated capabilities.

“When you think about this world with GenAI where I can steal not just your credentials but your name, image and likeness, what’s the difference between somebody who you think you know and see every day and turns against you, versus an adversary who’s got your credentials and your name, image and likeness on the Zoom call that you think is real and you trust, and they turn against you? It’s almost worse.

“So, when you think about this notion of trickery and impersonation, it’s going to be out of control. I don’t know where this thing stops. It’s a complete mess,” he says. “And it’s not just North Koreans – they’re the ones who have been caught.”

Balasubramaniyan adds: “Fraud is a percentage-driven game, and even the best fraud campaigns run at a 0.1% rate of success. One in a thousand work. But the point is when they work, they work big. Sometimes you win the jackpot – certainly enough for someone in a developing country to make a very nice living out of this. And that’s the point – what deepfake AI technology allows these fraudsters to do is scale the operation. We have seen a lot of candidate fraud, and I don’t personally think it’s because we’re special, I think it’s because we’re looking.”

On the basis that Pindrop is seeing so many attempts itself, Balasubramaniyan reckons most organisations are being hit by deepfake candidates already. In the case of large enterprises with wide hiring remits, this is likely to be happening multiple times a day.

According to Gartner predictions, one in four candidate profiles worldwide will be fake by 2028. And with the US Bureau of Labor Statistics putting average US hiring at five million people every month in 2024, an assumption of three to six interviews per hire, combined with Gartner’s one-in-four ratio, implies that American HR pros will face between 45 and 90 million deepfake candidate profiles this year alone.
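
The working behind that estimate is left implicit in the sources; spelled out, with Gartner’s one-in-four ratio applied to interview volume as an assumption, it runs as follows:

```python
# Spelling out the arithmetic behind the 45-90 million estimate above.
hires_per_month = 5_000_000      # US BLS average monthly hires, 2024
hires_per_year = hires_per_month * 12
interviews_per_hire = (3, 6)     # assumed range of interviews per hire
fake_ratio = 0.25                # Gartner: one in four profiles fake by 2028

low = hires_per_year * interviews_per_hire[0] * fake_ratio
high = hires_per_year * interviews_per_hire[1] * fake_ratio
print(f"{low:,.0f} to {high:,.0f}")  # 45,000,000 to 90,000,000
```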

This hyperscaling presents an unprecedented risk to the business, says Balasubramaniyan, who likens hiring a deepfake candidate to inviting a vampire to enter your home in a horror movie. “You’re basically done for,” he says.

Compounding the problem, says Moynahan, is the fact that HR is a doubly attractive target for fraudsters: such departments not only hold the keys to the castle, but are staffed by people whose whole job is to be open and receptive to approaches from strangers. It hasn’t helped matters, he adds, that the Covid-19 pandemic turned so much of the hiring process virtual.

“It’s so easy to hire in a remote environment where you may never see anybody,” says Moynahan. “And some of these attacks are so brazen. I was in one company where there was an African-American candidate and then there was an Asian person who was dropped in as the actual hire – they didn’t catch it because the company was so big. The whole cyber market has been focused on the back door, and the front door is just as easy. It might even be easier.”

What can we do about it?

We must acknowledge that progress and invention can’t be stopped or rolled back – or, to put it in the most basic of security terms, nobody would argue for the uninvention of the lock simply because locks can be picked.

Having spent years working the more familiar rogue insider angle – à la Edward Snowden – Moynahan argues that the security industry needs to reinvent its concept of the insider threat. Addressing the threat of trusted people going rogue with cyber technology has always been hard because it is not really a technology problem. In addition, says Moynahan, the threat hasn’t been taken particularly seriously because the implied lack of trust is antithetical to many business cultures.

“You hired Alex. A lot of people know Alex. They like Alex. But you can’t trust him. That’s a hard sell – humans are fragile,” he says.

“But that’s not what we’re talking about now we have GenAI. Now we can say you can’t trust GenAI, because GenAI can replicate Alex in a heartbeat. It’s a different conversation, it’s a threat to identity, and that’s why deepfakes and identity are two sides of the same coin.”

Balasubramaniyan believes some of the responsibility for the problem must lie with the developers of AI models. “They’re developing these things willy-nilly without any concern for safety,” he remarks, “and that has to change.”

However, he agrees with Moynahan that the wider security industry also needs to raise its game. “You need detection capabilities,” he says, “and I know that’s a biased answer, but I’ve been in security for so long that I’ve realised every time a new technology comes along, you’re going to have misuse of it. You just have to develop the counter-intelligence and the counter-technologies to prevent misuse.”

That’s all well and good, but for security leaders and decision-makers, the question of how to protect their organisations from deepfake candidates is a rather tricky one.

So, what is a security leader to do? Balasubramaniyan’s advice to CISOs is to start actively seeking out deepfake candidates and, above all, ask the right questions.

“Truly look at what you’re seeing on your conferences. How confident are you that everybody on the call is really who they say they are? And dig into HR. CISOs care about employees, but they care about employees after they become employees. They now have to extend their purview to the top of the funnel,” he says.

Moynahan proposes a future model akin to the US Transportation Security Administration’s (TSA) existing PreCheck service for “trusted” fliers, which allows them to skip some of the more onerous post-9/11 aspects of airport security, among other perks. To achieve PreCheck status, people must submit to a reasonably rigorous background check, including disclosing any criminal history, and prized PreCheck status can be withdrawn at any time, for any reason, by the authorities.

“That’s sort of what we already do [at GetReal],” says Moynahan. “[We] try to make sure that the entity showing up is who they say they are and nobody is being duped and faked. As a cyber security company with a digital forensics heartbeat, we’re going back into that data vulnerability.”

In this model, the question asked is not simply, ‘Is this person a deepfake?’ It becomes a wider set of questions that seek to establish why a particular individual was chosen to be spoofed, who else was on the same call, what they said and did, what other IT rights and privileges they had, and so on.

“That’s the challenge,” concludes Moynahan. “It’s not just real or fake. You’ve got to think about telemetry and cyber security and bring this to bear so that you don’t have adversaries infiltrating your digital communication systems and doing some serious harm.”

