What’s Old is New: Network and Web Application Vulnerabilities
The first newsworthy AI breach of 2024 didn’t come from a mind bending prompt injection, it came from classic exploit tactics. As we see organizations everywhere testing LLM and AI products to see how they fit into their business, they are rapidly introducing new software and attack surface into environments. This is especially true as organizations attempt to limit public cloud based AI models (e.g. OpenAI) and instead use open source software, open source models or custom on-premise deployments. As a penetration testing team, we are beginning to see these products deployed on internal and external networks. Organizations should take care as these products often inherit all the classic vulnerabilities we’ve exploited on engagements in the past. Even more so because everything is moving so quickly.
The AI ecosystem’s continuing explosive growth in 2025 will dramatically expand the attack surface while inheriting traditional cybersecurity vulnerabilities. Supply Chain Concerns
Unfortunately, supply chain concerns hit on two fronts for AI. First, we see the same supply chain concerns that we are already dealing with throughout the industry; malicious packages, vulnerable dependencies, and insufficient Software Bills of Materials (SBOMs). For example, n8n (https://github.com/n8n-io/n8n), which is arguably the most popular agentic framework and has 50.8K stars on Github, has a dependency package lock file with 25,780 lines in it. While line count isn’t a perfect complexity metric, it illustrates a critical issue: these rapidly evolving tools depend on libraries from hundreds of different authors. In aggregate, with all of the tools being tested out across environments, this is an obvious ticking time bomb.
Second, there are supply chain risks with the models themselves. That is, a malicious actor who can poison a model and adjust the model’s decision making or privacy permanently destroys the products the model depends on. For example, ByteDance currently has a 1.1 million dollar lawsuit in place against an ex-intern who poisoned a large number of their models. Organizations need to be carefully verifying the providence of any models they deploy, as compromised or maliciously trained models could introduce backdoors or biases that are difficult to detect through conventional testing.
Both of these issues are so concerning they are already on the 2025 OWASP Top 10 for Large Language Model Applications (LLM05: Supply Chain Vulnerabilities). We are sure to see more of this in the coming year. Prompt Injection Evolution
While prompt injection attacks are well-documented, they’re likely to become more sophisticated. As LLMs are integrated into more complex systems, attackers will likely find new ways to craft inputs that manipulate the model’s behavior or extract sensitive information from its training data. At Sprocket we have already found this on a few different assessments. This is particularly concerning when LLMs are connected to internal systems, databases, and agentic frameworks.
Prompt injection is largely an unsolved problem and it’s going to get worse before it gets better. In 2025, we will see prompt injection used for more impactful and newsworthy exploits. Resource Consumption Attacks
LLMs face a critical yet overlooked vulnerability: resource consumption attacks. These threats extend beyond computational load to target financial resources, exploiting the per-token pricing models of LLM services. These systems are expensive to operate from a computational perspective and API cost issue. This is very different from most other cloud-based deployments. Cost related threats in 2025 are likely to become more real than in other deployed application stacks.
AI and LLM products are expensive to operate. We will see a rise threat model around cost and cost mitigation for AI deployed products.
Ad
Join over 500,000 cybersecurity professionals in our LinkedIn group “Information Security Community”!