How to Stay Ahead of Deepfakes and Other Social Engineering Attacks


As companies work to reap the benefits of artificial intelligence (AI), they must also beware of its nefarious potential. Amid an AI-driven uptick in social engineering attacks, deepfakes have emerged as a new and convincing threat vector. Earlier this year, an international company lost $25 million after a finance employee fell for a deepfake video call impersonating the company’s CFO. While such a story may sound like an anomaly, the reality is that generative AI is creating more data than ever before—data that bad actors can use to make attacks more convincing. The technology has also enabled attacks to scale, allowing a single attempt to multiply into tens of thousands, each tailored to its target.

As deepfakes and other AI-generated social engineering attacks continue to become more common and convincing, companies must evolve beyond traditional threat intelligence. To remain secure, they must leverage AI themselves, embrace segmentation, and educate their employees on an ongoing basis.

Fight fire with fire

Deepfakes are an extremely sophisticated way for bad actors to get through the door. Instead of the oddly worded email from an alleged Nigerian prince, AI helps bad actors send highly personalized and convincing messages that mask the usual red flags. Once they have access to the network, they can start collecting, exporting, and sharing data that can be used to build a convincing attack on their target. Thus, companies need tools that can establish a normal baseline for every user’s schedule and behavior. AI can then be leveraged to quickly identify and remediate anomalies, like someone logging in at odd hours or stockpiling large amounts of information. By employing AI to detect suspicious activity, companies can sift through tremendous amounts of noise to uncover red flags.
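The baseline-then-flag approach described above can be sketched in a few lines. This is a deliberately minimal illustration using standard-deviation thresholds; real user and entity behavior analytics tools use far richer models, and the event fields and threshold here are assumptions, not any vendor's API.

```python
from statistics import mean, stdev

def build_baseline(events):
    """Compute a per-user baseline from historical activity.

    Each event is a (login_hour, bytes_accessed) tuple; the baseline
    stores the mean and standard deviation of each dimension.
    """
    hours = [h for h, _ in events]
    volumes = [v for _, v in events]
    return {
        "hour_mean": mean(hours), "hour_sd": stdev(hours),
        "vol_mean": mean(volumes), "vol_sd": stdev(volumes),
    }

def is_anomalous(baseline, login_hour, bytes_accessed, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from baseline,
    e.g. a 3 a.m. login or a sudden bulk download."""
    hour_z = abs(login_hour - baseline["hour_mean"]) / baseline["hour_sd"]
    vol_z = abs(bytes_accessed - baseline["vol_mean"]) / baseline["vol_sd"]
    return hour_z > threshold or vol_z > threshold
```

A user who normally logs in mid-morning and moves about a megabyte of data would be flagged for a 3 a.m. login or a multi-hundred-megabyte export, while routine activity passes through quietly.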

Embrace segmentation 

The impact of deepfakes and other social engineering attacks can be minimized by dramatically shrinking the attack surface through segmentation. Government agencies with extremely sensitive data have always had several rings of protection: unclassified, classified, and top-secret networks. This is a mindset all companies must embrace. Having everything on a single network is extremely risky, even if that network uses zero-trust principles. 

In fact, the recent CrowdStrike outage completely debilitated airlines because they had everything on a single network, creating a single point of failure. In addition to separating crown-jewel data from less critical data, it can also be useful to rely on different applications, such as using Microsoft Teams for standard messaging and a dedicated chat capability for more sensitive conversations. Segmenting networks, communication channels, and data enclaves ensures that, even if a bad actor gets through the door using a deepfake, they won’t have complete and total access to sensitive information.
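The value of segmentation can be illustrated with a toy enclave model. The enclave names and structure below are hypothetical, not any product's API; the point is simply that each segment has its own membership, so a credential stolen via a deepfake exposes only one ring, not the whole estate.

```python
# Illustrative segmented enclaves, loosely mirroring the unclassified /
# classified / top-secret rings described above. Names are made up.
ENCLAVES = {
    "general":   {"members": {"alice", "bob", "carol"}},
    "finance":   {"members": {"alice"}},
    "executive": {"members": {"carol"}},
}

def accessible_enclaves(user):
    """Return the enclaves a given account (legitimate or compromised)
    can reach. Segmentation bounds the blast radius of one stolen login."""
    return {name for name, e in ENCLAVES.items() if user in e["members"]}
```

If an attacker deepfakes their way into Bob's account, they reach only the general enclave; finance and executive data remain out of reach.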

Educate employees

In an ideal situation, segmentation and anomaly detection aren’t required because bad actors never get in at all, which is why educating employees about the rise of deepfakes may be the most effective way to ensure company-wide security. Zero-trust is a mindset—not just a technology or a protocol—and teaching employees to be extremely diligent can go a long way. If there’s even a small chance that a request is nefarious, employees should be encouraged to verify it outside of the channel it came in on. That may mean picking up the phone and simply calling the individual in question. Additionally, teaching employees about the capabilities that exist and reminding them to think before they click are simple but effective ways to blunt deepfakes.
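The out-of-band verification rule above can be expressed as a simple policy check: sensitive requests must be confirmed through a channel other than the one they arrived on. The action names, request fields, and channel labels here are illustrative assumptions, not a standard protocol.

```python
# Hypothetical sketch: requests for high-risk actions must be confirmed
# out of band. A deepfake video call asking for a wire transfer would
# fail verification unless confirmed by, say, a direct phone call.
SENSITIVE_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}

def requires_out_of_band_check(request):
    """Only high-risk actions need a second-channel confirmation."""
    return request["action"] in SENSITIVE_ACTIONS

def verify(request, confirmation_channel):
    """Accept a sensitive request only if the confirmation arrived on a
    different channel than the original request."""
    if not requires_out_of_band_check(request):
        return True
    return confirmation_channel != request["channel"]
```

The design point is that the check is about the channel, not the content: even a perfectly convincing deepfake on one channel cannot satisfy a confirmation requirement on another.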

Altogether, the technology available to bad actors is going to continue to evolve, but companies can keep up with the pace of change by deploying AI themselves, embracing segmentation, and educating their employees about the threats that exist. Without these steps, organizations will remain vulnerable to deepfakes and other social engineering attacks, which leaves their data and reputations at risk.
