In this Help Net Security interview, Kristian Kamber, CEO at SplxAI, discusses how security challenges for GenAI differ from traditional software. Unlike predictable software, GenAI introduces dynamic, evolving threats, requiring new strategies for defense and compliance.
Kamber highlights the need for continuous monitoring and adaptive security measures.
How do the security challenges of GenAI applications diverge from those of traditional software systems?
Defending GenAI applications is like moving from a fixed castle to a living, breathing maze that shifts at will. Traditional software is predictable, with defined inputs, outputs, and pathways—walls you can fortify. GenAI, however, introduces a dynamic, evolving attack surface because the AI doesn’t just process data; it learns from it, adapts, and generates its own outputs.
With GenAI, we’re not just defending against unauthorized access or data breaches. We’re grappling with attacks on the AI’s very understanding of the world. There’s a wide array of attack vectors here. You’ve got model inversion attacks where adversaries extract sensitive data from the model itself, training data poisoning that corrupts the system’s outputs, and prompt injection attacks that trick the model into misbehaving. And if the model is large and pre-trained, it’s like trying to guard a factory that runs itself—dangerous precisely because of how complex it is to control.
This requires us to rethink what security means. It’s no longer enough to simply patch vulnerabilities in the code; we have to continuously monitor the data, the outputs, and even the way the model evolves over time. We’re defending against prompt injection, hallucinations, and inference attacks—threats that didn’t exist in the traditional software security playbook.
And let’s not forget model drift: as the AI continues learning, small deviations in its responses could signal malicious tampering. Security in this space is about continuously validating both the inputs and outputs, ensuring that we’re not just guarding static walls but an evolving fortress.
Which attack surfaces specific to GenAI applications do you find most alarming, and how do you anticipate these vulnerabilities will develop as AI technology advances?
The most concerning attack surface right now? Data poisoning. It’s like someone spiking the water supply at the source. Once the data feeding your GenAI model is corrupted, the AI learns all the wrong lessons. It starts generating flawed outputs, and the worst part? You may not even notice until it’s too late.
Then, there’s the growing risk of model extraction and reverse-engineering. There are already rumors in the industry that APTs are engaging in such tactics, using their substantial computational power to peel back the layers of these models and extract sensitive data, including proprietary intellectual property or confidential training data. It’s like cracking a safe by slowly figuring out the combination through brute force—except the “safe” in this case is a highly complex neural network.
As AI technology continues to advance, these risks are only going to get worse. AI will be increasingly embedded in critical infrastructure, from healthcare to financial systems. The potential consequences of these models being tampered with—whether through data poisoning, reverse engineering, or adversarial attacks—could be catastrophic. We’re heading into an era where the model itself becomes the crown jewel for attackers, rather than just the data it holds.
In the context of GenAI, how are compliance frameworks and regulatory requirements shaping security practices, and what do you foresee as the biggest hurdles for organizations?
Right now, compliance and regulation around GenAI feel a bit like the Wild West, but there’s movement. We’re starting to see AI-specific guidelines from organizations like BSI, CSA, MITRE, NIST, and regulatory efforts such as the EU AI Act. These rules are beginning to shape how we approach GenAI security. They’re pushing for transparency, which is huge because traditional “black-box” AI systems don’t fly anymore. We need to know how decisions are being made, which forces companies to build more accountable and auditable systems.
The hurdle? Organizations now need to understand the full AI lifecycle. It’s not just about securing the final output but securing the data pipeline, training models, ensuring model explainability, and monitoring post-deployment behavior. That’s a big shift in mindset and resources for a lot of companies. Compliance demands often move faster than an enterprise’s security culture, so many are playing catch-up to align their practices with these evolving standards. To stay ahead, organizations should consider creating dedicated task groups for AI security, composed of diverse personas and profiles—ranging from data scientists and cybersecurity experts to legal and compliance professionals—to tackle the multifaceted nature of AI risks.
Another issue is interpretability—how do you explain a neural network’s decision in a way that regulators will understand? That’s going to be a headache for many. The idea of holding a machine accountable to human standards is still evolving.
Could you elaborate on the strategic objectives behind automating penetration tests for next-generation GenAI applications? How do these objectives align with the broader goals of enterprise security?
Automating pen tests for GenAI is about keeping up with the speed of evolution in these models. The strategic objective here is to catch vulnerabilities as models, features, and system prompts evolve. AI models change far more frequently than traditional software systems—sometimes daily, with updates to the models, data inputs, or even prompts that shift based on real-time feedback.
By automating pen tests, we can continuously probe for weaknesses without slowing down development. It’s not just about finding vulnerabilities—it’s about keeping security aligned with the fast pace of GenAI iteration. Enterprises need a system that ensures they’re not accidentally introducing vulnerabilities every time they tweak a model, add a new feature, or modify system prompts. Automation allows security teams to integrate these tests into the development cycle, enabling continuous protection.
This approach aligns perfectly with the broader enterprise goal of staying agile while minimizing risk. You’re ensuring security without making it a bottleneck, which is essential when deploying AI at scale.
When comparing automated AI-driven penetration tests to traditional manual approaches, where do you see the most significant differences in efficacy and reliability? Are there scenarios where one approach outperforms the other?
You’ve got to use both, no question about it. Automated pen testing is your efficiency machine—it helps optimize time and cost. When you’re working with constantly evolving AI systems, you can’t afford to manually test every iteration. Automated pen testing is highly effective for catching both complex vulnerabilities and low-hanging fruit, while ensuring that deploying new models or changing system prompts doesn’t introduce unexpected risks, all at a speed and scale that manual testing simply can’t match.
But there’s a catch: automation lacks creativity. It follows rules and patterns, which is perfect for efficiency but not so great for spotting the unexpected. That’s where manual pen testing comes in—it’s better for creative, complex attacks that require human intuition. Some of the most devastating attacks are multi-layered and subtle, the kind of thing that only a human tester can dream up.
So, while automation helps with speed and maps vulnerabilities to compliance frameworks—heck, it can even automate some mitigation strategies—you still need that human touch. You need both for a full security sweep. Think of automation as your constant, fast-moving shield, while manual testing is your precision strike, catching those hard-to-spot flaws that could bring the whole system down.
Let’s discuss the critical importance of AI red teaming. What makes this skill set and service offering indispensable for organizations aiming to secure their AI-driven initiatives?
AI red teaming is absolutely crucial. It’s not enough to just defend your system—you need to actively attack it the way a real adversary would. This is where red teaming comes in, stress-testing your AI systems to reveal weaknesses that might not be apparent with standard testing methods.
At SplxAI, we’ve discovered some pretty surprising vulnerabilities. For example, popular chatbots that integrate dates into their system prompts are actually more vulnerable on specific dates. Attackers can manipulate these date-related prompts to trigger unexpected behavior in the AI. This is just one example of the millions of potential vulnerabilities out there, and it shows how subtle these attacks can be.
Securing AI is going to be an exponentially more complex task than securing cloud infrastructure was. AI systems have more moving parts, more dependencies, and more unknowns. You’re not just securing a static system—you’re securing a dynamic, evolving one. Red teaming forces you to think beyond traditional defenses, uncovering the blind spots in a system that could be exploited when it matters most.