Deepfake Technology Used to Impersonate LastPass CEO


A LastPass employee recently became the target of an attempted fraud involving sophisticated audio deepfake technology.

This incident underscores the urgent need for heightened cybersecurity awareness and the implementation of robust verification processes within organizations.

The Rise of Deepfake Technology

Deepfake technology, which employs generative artificial intelligence to create hyper-realistic audio or visual content, has been a growing concern among cybersecurity experts for several years.

Initially associated with political misinformation campaigns, the technology’s potential for harm has expanded into the private sector, with fraudsters leveraging it for elaborate impersonation schemes.

The technology’s accessibility has dramatically increased, with numerous websites and applications enabling virtually anyone to craft convincing deepfakes.

Historically, deepfakes have figured in high-profile fraud cases, such as a 2019 incident in which an employee of a UK company was tricked into transferring funds to a fraudster who impersonated the CEO using voice-generating AI.


More recently, a finance worker at a Hong Kong-based multinational was deceived into sending $25 million to perpetrators using video deepfake technology to impersonate key company officials during a video call.

The LastPass Incident: A Close Call

The recent attempt on a LastPass employee represents a significant escalation in the use of deepfake technology for corporate fraud.

The employee received multiple calls, texts, and at least one voicemail via WhatsApp, all featuring an audio deepfake of the company’s CEO.

The employee immediately found the communication suspicious because it arrived outside normal business channels and displayed social engineering red flags, such as undue urgency.

Screen capture showing the attempted WhatsApp contact using deepfake audio as part of a CEO impersonation.

Fortunately, the LastPass employee did not engage with the fraudulent messages and promptly reported the incident to the company’s internal security team.

This swift action allowed LastPass to mitigate any potential threat and use the incident as a case study to enhance awareness of deepfake technology’s dangers within the company and the broader cybersecurity community.

The incident serves as a critical reminder of the importance of verifying the identity of individuals claiming affiliation with a company, especially when contacted through unconventional channels.

By proactively sharing details of the attempted fraud, LastPass aims to encourage other organizations to remain vigilant and to educate their employees about cybercriminals’ evolving tactics.

In response to the growing threat posed by deepfake technology, LastPass is collaborating with intelligence-sharing partners and other cybersecurity entities to share knowledge about such tactics.

This collective effort is crucial for staying ahead of fraudsters and safeguarding the integrity of corporate communications and transactions.

The attempted deepfake call targeting a LastPass employee is a stark illustration of the sophisticated methods employed by cybercriminals in the digital age.

It highlights the imperative for continuous education, vigilance, and the development of secure verification protocols to protect against the ever-evolving threats posed by malicious actors in the cyber realm.



