Organizations can now buy cyber insurance that covers deepfakes

Synthetic media, including AI-generated deepfake audio and video, has been increasingly leveraged by criminals, scammers and spies to deceive individuals and businesses.

Sometimes they do so by imitating a company’s CEO, urging employees to transfer large sums of money or provide access to work accounts. Other times this fake media is created by a competitor or bad actor to ruin the reputation of executives or their companies.

Now cybersecurity insurance provider Coalition is offering coverage to organizations for deepfake-related incidents. On Tuesday, the company announced its cybersecurity insurance policies will now cover certain deepfake incidents, including ones that lead to reputational harm. The coverage will also include response services such as forensic analysis, legal support for the takedown and removal of deepfakes online, and crisis communications assistance.

In an interview, Shelly Ma, incident response lead at Coalition, told CyberScoop that deepfakes still represent a small fraction of the claims the company processes, and that 98% of its claims don’t involve any advanced use of AI.

This is largely because “the low hanging fruits still very much work” for malicious hackers, with exploited VPNs, unpatched software and phishing still effective for those attempting to gain access to targeted organizations. Even in impersonation scams, attackers tend to rely on lower-tech tactics like spoofing phone numbers.

Ma said that the deepfake-enabled breaches Coalition has seen tend to come from sophisticated threat actors with the technical expertise to deploy them in credible and believable ways.

“In the handful of cases where we have spotted deepfakes, we’ve seen attackers mostly use AI-generated voice or text to impersonate trusted contacts,” said Ma. “So typically, it would be a CEO or finance executive to authorize fraudulent payments or share credentials, and these are highly targeted and designed to blend into an existing workflow, which makes them quite dangerous even when they’re not yet that common.”

While traditional phishing relies on persuading victims through convincing text, deepfake video and audio adds “a whole new dimension of sensory authenticity” that makes this type of attack more effective. Malicious parties can also generate dozens of tailored voice or text impersonations “in minutes,” something she said used to take days of reconnaissance and manual effort to pull off before LLM automation.

“These attacks, they shortcut skepticism, and they can bypass even very well-trained employees,” Ma said.  

These successful campaigns still require a lot of work, and for now, small and medium-sized businesses may not be attractive enough targets to justify using AI-enabled attacks. However, Ma estimated that as AI technology becomes more advanced, affordable and accessible, these organizations are likely just 12 to 24 months away from seeing AI regularly used in fraud and business email compromise scams.

That aligns with what some other groups are saying. This week, identity provider ID.me said the use of AI and deepfake technology to bypass verification tools has made the barrier to entry for cybercriminals “alarmingly low.” ID.me’s identity systems across government and the private sector have seen AI attacks, including deepfakes, “with increasing frequency.”

“While sophisticated hackers and entities have the skills and abilities to create bespoke exploits and bypass methods, these tools open the door to fraud to a much wider range of criminals,” the company said in a new fraud report released this week.

In September, ID.me announced it had raised $340 million in Series E funding, which they said would go partly toward accelerating technologies that prevent and combat AI-driven fraud.

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
