In this Help Net Security interview, Matt Holland, CEO of Field Effect, discusses how businesses can balance the advantages of using AI in their cybersecurity strategies against the risks posed by AI-enhanced cyber threats.
Holland also explores how education, awareness, and practical measures prepare organizations for these evolving challenges. He underscores that relying on AI-driven solutions without human expertise is a recipe for disaster.
There’s a lot of buzz around AI supercharging cyberattacks. What real, tangible threats does AI, including large language models, pose in cyberattacks?
There’s a lot of hype around AI and LLMs with regard to what they’ll enable threat actors to do. These tools aren’t going to suddenly give the bad guys a way to build exploit chains they can package up and sell to other hackers, nor will they let them create malware that magically evades every known detection technique. That’s not to say there’s no reason for concern, though: AI and LLMs could be used to create even more sophisticated social engineering campaigns, including deepfakes, audio messages, recordings, and well-crafted emails that would be much harder to distinguish from the real thing.
How are cybersecurity companies responding to these AI threats, and what AI-centric products are they developing? Are there any that you believe are particularly effective or innovative?
The advantages AI offers to an attacker are just as available to a defender. You’ve got tools that can help automate detection at scale, something human analysis alone isn’t particularly well suited to. Cybersecurity companies are already putting these tools to use to spot patterns and anomalies that could otherwise slip past human detection.
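To make that idea concrete, here is a minimal sketch of the kind of detection at scale Holland describes, using scikit-learn’s IsolationForest to surface anomalous network connections for review. The features, synthetic data, and thresholds are invented for illustration; they do not reflect any specific vendor’s product.

```python
# Sketch: flag anomalous network connections with an unsupervised model
# so analysts only review the outliers. Features and data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" traffic: [bytes_sent, duration_s, dest_port_entropy]
normal = rng.normal(loc=[50_000, 30, 2.0], scale=[10_000, 8, 0.3], size=(5_000, 3))

# A handful of exfiltration-like outliers: huge transfers, long sessions
outliers = rng.normal(loc=[5_000_000, 600, 5.0], scale=[500_000, 60, 0.5], size=(5, 3))

traffic = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
scores = model.decision_function(traffic)  # lower score = more anomalous

# Surface only the most suspicious events for human review
for idx in np.argsort(scores)[:10]:
    print(f"event {idx}: score={scores[idx]:.3f}, features={traffic[idx].round(1)}")
```

The point of the example is the workflow, not the model: a machine sifts millions of events and hands a human a short list, which is exactly the scale advantage described above.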
Furthermore, AI gives these companies a way to distill highly technical alert information into something far more digestible for the average IT worker who may not have deep security expertise but is still tasked with managing a solution. On the other hand, some cybersecurity firms will have to up their detection game: AI tools that can draft convincing phishing messages mean you can no longer rely on typos alone to spot an attempt.
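As a rough illustration of that distillation step, the hypothetical sketch below turns a raw, jargon-heavy alert into a plain-language summary for an IT generalist. In practice an LLM might generate the prose; a simple template stands in here so the example stays self-contained, and the alert fields and recommended actions are made up.

```python
# Hypothetical sketch: translate a technical alert into plain language.
# Field names, severities, and actions are illustrative only.
from dataclasses import dataclass

@dataclass
class Alert:
    rule: str       # detection rule that fired
    host: str
    severity: str   # "low" | "medium" | "high"
    detail: str     # raw technical detail from the sensor

def summarize(alert: Alert) -> str:
    actions = {
        "high": "Isolate the host and contact your security provider now.",
        "medium": "Investigate within the next business day.",
        "low": "No action needed; logged for trend analysis.",
    }
    return (
        f"{alert.severity.upper()}: {alert.host} triggered '{alert.rule}'. "
        f"What happened: {alert.detail} "
        f"Recommended next step: {actions[alert.severity]}"
    )

print(summarize(Alert(
    rule="Possible credential dumping (LSASS access)",
    host="FINANCE-PC-07",
    severity="high",
    detail="An unsigned process read memory belonging to lsass.exe.",
)))
```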
How can businesses balance the advantages of using AI in their cybersecurity strategies with the risks posed by AI-enhanced cyber threats?
Whether or not you choose to use AI, it’s safe to assume hackers will. That said, it’s not like this is some showdown from an old western movie; even if the bad guys are using AI, you’ve still got a well-defended EDR bunker you can hide in. You don’t need to meet them for a duel at high noon.
When it comes to using AI as part of your cybersecurity strategy, you’ve got to weigh the risks: data governance is a big one, as is the legal exposure that comes with using generative AI output. AI tools also warrant scrutiny of the data they were trained on, their overall security, and how they handle intellectual property and sensitive data.
For SMEs looking to protect themselves from cyberattacks, what feasible and realistic measures can they implement, especially considering their often-limited resources?
There are two things that can make a big difference. One, implement essential cybersecurity controls—these are fundamental to proactive, effective defense. Two, don’t try to build everything in-house—it’s far too costly. Instead, look for a trusted partner that can help manage your protection, and invest in a holistic solution that can evaluate your cyber risk and proactively detect security events across your entire IT environment—including endpoints, networks, and any cloud or SaaS infrastructure you rely on.
What role do education and awareness play in equipping professionals to deal with AI-related cyber threats? How can companies better prepare their employees for these evolving challenges?
When it comes to new and emerging technologies like AI and LLMs, education and awareness are critical. The impact AI will have on numerous fields is still uncertain, and so are the associated risks. It’s important that companies establish clear policies around AI use, and that they continuously review the tools they employ that are leveraging AI—and make sure that employees understand the policies and why they’re in place.
How do you see AI evolving in the cybersecurity industry? What potential benefits and challenges do you foresee?
Attackers are going to continue using AI, which will help them scale their efforts and create more convincing scams. It’s unavoidable that the cybersecurity industry will have to adopt these technologies to some extent in response.
The immediate benefit to defenders is that AI can provide a major helping hand in threat detection; after all, AI can process far more data than any human could. But any AI-driven solution used in isolation from human expertise is a recipe for disaster. AI still makes assumptions and leaps of logic that don’t quite add up, so human expertise and oversight are still needed to guide any cybersecurity program.
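One common way to keep humans in the loop, sketched below under assumed thresholds, is to let a model score alerts but route anything it isn’t highly confident about to an analyst rather than acting on it automatically. The cutoffs and alert IDs here are invented for the example.

```python
# Illustrative human-in-the-loop triage: high-confidence calls are
# automated, the gray zone goes to a human. Thresholds are made up.
from typing import NamedTuple

class ScoredAlert(NamedTuple):
    alert_id: str
    malicious_prob: float  # model's estimated probability the alert is malicious

def triage(alert: ScoredAlert) -> str:
    if alert.malicious_prob >= 0.95:
        return "auto-contain"         # confident it's malicious: act now
    if alert.malicious_prob <= 0.05:
        return "auto-close"           # confident it's benign: close quietly
    return "escalate-to-analyst"      # uncertain: a human decides

for a in [ScoredAlert("A-101", 0.99), ScoredAlert("A-102", 0.40), ScoredAlert("A-103", 0.01)]:
    print(a.alert_id, "->", triage(a))
```

The design choice mirrors the caution above: the model’s leaps of logic are contained to the cases where it is demonstrably confident, and everything ambiguous lands in front of a person.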