Surge in Cyber Attacks Targeting AI Infrastructure as Critical Vulnerabilities Emerge

Surge in Cyber Attacks Targeting AI Infrastructure as Critical Vulnerabilities Emerge

Security researchers discovered 28 distinct zero-day vulnerabilities, seven of which were expressly directed at artificial intelligence infrastructure, in a startling discovery made during the 2025 Pwn2Own Berlin event, which was organized by Trend Micro’s Zero Day Initiative.

This inaugural AI category focused on developer toolkits, vector databases, and model management frameworks, highlighting the fragility of systems underpinning large language models and agentic AI applications.

Exploits against Chroma DB, an open-source vector database integral to retrieval-augmented generation, exploited lingering development artifacts, enabling unauthorized data access and potential system compromise.

Zero-Day Weaknesses in AI Ecosystems

Similarly, multiple teams chained vulnerabilities in NVIDIA’s Triton Inference Server, leveraging unpatched bugs to load arbitrary data, underscoring the perils of interdependent components in Kubernetes-deployed environments.

Wiz Research’s use-after-free exploit in Redis’s vector storage capabilities, stemming from an outdated Lua subsystem, further emphasized the risks of unmaintained third-party libraries, while their attack on the NVIDIA Container Toolkit revealed flaws in external variable initialization within containerized setups.

successfully used a UAF vulnerability against Redis

Trend Micro’s surveys detected over 200 unprotected Chroma servers and thousands of exposed Ollama instances, amplifying real-world exposure risks despite no competition attempts on the latter due to its rapid update cycle.

Criminal Weaponization Escalates

Beyond traditional software flaws, AI-specific vulnerabilities are proliferating, as seen in CVE-2025-32711, a high-severity AI command injection in Microsoft 365 Copilot that risked sensitive data exfiltration.

Trend Micro’s Pandora proof-of-concept agent demonstrated indirect prompt injections via malicious web content or database queries, bypassing guardrails to enable data leaks or SQL injection despite user restrictions.

Advanced prompt attacks, including Chain of Thought exploitation in models like DeepSeek-R1, Link Traps for phishing, invisible Unicode injections, and Prompt Leakage (PLeak) for system prompt disclosure, are evolving rapidly, with success rates exceeding 50% across major LLMs.

AI Infrastructure
 Security challenges and recommended controls for typical components of an LLM-driven AI agent

Cybercriminals are leveraging these for strategic gains, using generative AI for automated translation in phishing, romance scams, and business email compromise, while deepfakes facilitate virtual kidnappings, sextortion, and eKYC bypasses on cryptocurrency platforms.

Underground markets offer jailbreak-as-a-service and tools like Deep-Live-Cam, pitting AI against AI in verification systems, with bypass services costing up to $600.

According to the report, As agentic AI advances toward autonomous, multi-step reasoning and self-learning, new risks emerge from exposed APIs, third-party tools, and potential agent meshes, demanding zero-trust architectures and layered security across the AI lifecycle.

Trend Micro’s initiatives, including the open-source Cybertron agent integrated with Vision One for automated threat response and collaborations like CoSAI for AI supply chain security, aim to mitigate these.

Best practices involve rigorous inventories of software components, regular audits, input validation, and red teaming to preempt exploits.

With 93% of security leaders anticipating daily AI attacks and 66% viewing AI as cybersecurity’s top disruptor, organizations must embed proactive measures to balance innovation with resilience against an escalating threat landscape.

Find this News Interesting! Follow us on Google News, LinkedIn, and X to Get Instant Updates!


Source link