Are we securing AI like the rest of the cloud?
In this Help Net Security interview, Chris McGranahan, Director of Security Architecture & Engineering at Backblaze, discusses how AI is shaping both offensive and defensive cybersecurity tactics. He talks about how AI is changing the threat landscape, the complications it brings to penetration testing, and what companies can do to stay ahead of AI-driven attacks.
McGranahan also points out that human expertise remains essential, and we can’t depend on AI alone to protect cloud environments.
Are we seeing AI being used to automate lateral movement, vulnerability chaining, or privilege escalation? If so, how? How accessible are these AI-powered capabilities to lower-tier threat actors?
We are currently seeing AI used in newer pentesting and red team tools and services. At Backblaze, we are evaluating a vendor that provides an agentic AI service to supplement the manual pentesting we already engage in.
One concern with using an AI pentesting tool is that AI tends to be a black box. That makes it difficult to replicate a successful attack path: given the nature of AI, the tool likely won't take the same route again and might fail on a retry. This is why we're talking to our partner about producing a transcript of the actions taken, so we fully understand what was done and how best to mitigate it.
AI-driven behavioral analysis (in this case, ML and neural network systems) is making it easier to detect anomalies and preempt attacks before they escalate. Security platforms now analyze network behavior in real time and flag unusual access patterns or lateral movement.
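To make the behavioral-analysis idea concrete, here is a minimal sketch (not a description of any specific vendor's platform) that uses scikit-learn's IsolationForest to flag sessions whose access pattern departs from a known-good baseline. The feature set and values are assumptions chosen purely for illustration.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The per-session features (requests per minute, distinct hosts touched,
# megabytes transferred) are illustrative assumptions, not a recommendation.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of known-good sessions: [requests_per_min, distinct_hosts, mb_transferred]
baseline_sessions = np.array([
    [12, 2, 5.1],
    [9, 1, 3.4],
    [15, 3, 6.0],
    [11, 2, 4.8],
    [10, 2, 5.5],
])

# Fit on normal traffic only; anything far outside this envelope scores as an outlier.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(baseline_sessions)

# New sessions to score; the second touches far more hosts (possible lateral movement).
new_sessions = np.array([
    [11, 2, 5.0],
    [14, 48, 120.0],
])

for features, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"session {features.tolist()} -> {status}")
```

In practice the model would be trained on far more telemetry and the alert would feed a SIEM or SOAR workflow rather than a print statement; the sketch only shows the flagging step.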
AI has lowered the barrier to entry for threat actors: attackers with less knowledge can now deploy attacks with greater speed and elasticity using cloud platforms. For example, cybercriminal forums feature dedicated sections for AI tools such as FraudGPT and ChaosGPT, and malicious actors can buy large language models at a relatively affordable price, making sophisticated attacks more accessible.
Are there cloud-specific misconfigurations or AI model deployment flaws that organizations routinely miss?
A common flaw is model drift, which occurs when AI models degrade over time as the real-world data they encounter shifts away from the data they were trained on. This can reduce accuracy not only in deployment but also in threat detection, and it requires continuous monitoring and retraining.
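As one way to ground the monitoring point, here is a hedged sketch of a basic drift check: a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution against recent production data. The feature, the synthetic values, and the p-value threshold are all assumptions; real deployments track many features and typically use purpose-built drift tooling.

```python
# Minimal data-drift check: compare a feature's training distribution
# against recent production values with a two-sample KS test.
# The 0.05 p-value threshold is an illustrative choice, not a standard.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for a numeric feature (e.g., request size) at training time vs. today.
training_feature = rng.normal(loc=500, scale=50, size=5_000)
production_feature = rng.normal(loc=560, scale=70, size=2_000)  # distribution has shifted

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.05:
    print(f"Drift suspected (KS={stat:.3f}, p={p_value:.3g}); schedule review/retraining.")
else:
    print(f"No significant drift detected (KS={stat:.3f}, p={p_value:.3g}).")
```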
At the end of the day, generative AI is not "intelligent"; it is math determining the most probable response to a query. In a training exercise a month or so ago, I was working with an AI solution that missed a malicious file being added to a directory. For kicks, I asked how the file got there, and the tool came up with a fictitious story. In every discussion of AI and automation there is general agreement that there needs to be a "human in the loop".
We had the same concern with more standard automation or earlier ML tools. One concern raised at a symposium a month ago was that, given the nature of GenAI, the human in the loop may not be able to parse how the AI came to the conclusion it did. Experience and critical thinking will be crucial.
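To ground the earlier point that generative AI is "math determining the most probable response", here is a toy sketch of why output varies from run to run: tokens are sampled from a probability distribution, and a temperature setting controls how often something other than the single most probable choice is picked. The vocabulary and scores are invented for demonstration and have nothing to do with any real model.

```python
# Toy illustration of temperature-controlled sampling: the same query can
# yield different answers on different runs because the output is sampled,
# not looked up. Vocabulary and logits are made up for this example.
import numpy as np

rng = np.random.default_rng()

tokens = ["benign", "suspicious", "malicious"]
logits = np.array([2.0, 1.2, 0.3])  # model scores; "benign" is most probable

def sample(temperature: float) -> str:
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(tokens, p=probs)

# Low temperature: almost always the most probable token.
# Higher temperature: more variation between runs.
for temp in (0.2, 1.0, 2.0):
    print(temp, [sample(temp) for _ in range(8)])
```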
How should enterprises rethink threat modeling now that attackers are using AI for reconnaissance and exploitation?
Since attacks are now being launched with the help of AI, threat models will need to recognize when threats originate from AI rather than from the more organic methods that were easier to detect in the past.
The most common malicious use of AI (and, according to at least a few sources, the most common use of AI, period) has been in social engineering attacks. AI can be used to generate convincing, personalized phishing attempts at scale. For a spearphishing attack, it can be trained to mimic the writing style of someone trusted by a particular target. For vishing attacks, AI can be used to mimic the voice of someone a target might expect to talk to. It will be interesting when pentesters are attacking AIs, or AIs are used to attack each other. Depending on the randomness settings of an AI, an attack may never be 100% successful. The flip side is that no defense will be perfect either, but then, it never is.
Are there specific controls or configurations enterprises can apply in their SaaS or IaaS environments to reduce AI-driven risk?
To reduce AI-driven risk in SaaS or IaaS environments, enterprises should implement a range of proactive controls and configurations focused on resilience, detection, and preparedness. AI-powered endpoint detection and response (EDR) tools are essential—they continuously monitor and analyze activity across endpoints, automatically flagging and isolating unusual or malicious behavior.
Organizations should also invest in training their teams to recognize AI-enhanced threats, such as deepfakes and sophisticated phishing attempts. Frequent, focused, and AI-aware security awareness sessions are key to maintaining a well-informed workforce. Deploying deception technologies like decoy systems, fake credentials, and honeypots can help detect and slow attackers early in the intrusion process, while also providing insight into their tactics. Regular tabletop simulations, especially those centered on AI-enabled breach scenarios, ensure security teams are prepared to respond effectively under pressure.
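To illustrate the deception idea in its simplest form, here is a toy decoy listener that logs any connection attempt to a port nothing legitimate should touch. The port number is an arbitrary assumption, and real deception platforms add realistic banners, credential lures, and alert routing; this only shows the core signal that any touch of a decoy is suspicious by definition.

```python
# Toy decoy listener: log every connection attempt to a port that no
# legitimate service uses, then close it. This is a sketch of the idea
# behind deception technology, not a production honeypot.
import logging
import socket

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

DECOY_PORT = 2222  # illustrative, unprivileged port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    logging.info("Decoy listening on port %d", DECOY_PORT)
    while True:
        conn, addr = srv.accept()
        with conn:
            # Nothing legitimate should ever connect here, so alert on every touch.
            logging.info("Decoy touched from %s:%d -- raise an alert for triage", *addr)
```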
In addition, enterprises must prioritize continuous security investment and research. This includes routine audits of AI models, staying agile in response to evolving best practices, and fostering collaboration across security, legal, and operations teams. These measures collectively help strengthen the organization’s ability to anticipate, detect, and respond to AI-driven threats in dynamic cloud environments.
Do you see gaps in cloud provider shared responsibility models when it comes to AI security?
The gaps appear when companies do not treat their AI deployments as extensions of their cloud environments and fail to apply the same strict security measures to them. Organizations must focus on endpoint security as well as data provenance, access control, and fine-grained permissions around AI usage. When it comes to AI security, it is crucial to monitor data flows, track model versions, and log interactions to identify potential breaches.
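As a small illustration of the interaction-logging point, here is a hedged sketch of a wrapper that records who called which model version, with a hash of the prompt so usage can be audited without storing sensitive text in the log. The field names and function are assumptions for illustration; in practice these records would be shipped to a SIEM rather than written to a local log.

```python
# Sketch of structured logging around AI model calls: record the caller,
# model name and version, and a hash of the prompt. Field names are illustrative.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_model_call(user: str, model_name: str, model_version: str, prompt: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model_name,
        "model_version": model_version,
        # Hash, don't store, the prompt: enough to correlate, not enough to leak.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }
    logging.info(json.dumps(record))

# Example: emit an audit record before each inference call.
log_model_call("alice@example.com", "summarizer", "2024-05-rc1", "Summarize this incident report...")
```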