The growing use of AI vulnerability management tools is changing how organisations identify security flaws, but the UK’s National Cyber Security Centre (NCSC) has warned that companies must not rush into adopting artificial intelligence without understanding the risks and operational challenges involved.
In a detailed advisory, Ruth C, Head of Vulnerability Management Group at the NCSC, outlined 10 critical questions organisations should ask before using AI models to identify vulnerabilities in systems, software, and infrastructure. The guidance comes as businesses increasingly face pressure to adopt AI-driven security tools amid rising cyber threats and growing board-level focus on cyber resilience.
The NCSC said that while AI can help improve security capabilities, simply finding vulnerabilities does not automatically make an organisation safer. In some cases, poor implementation of AI systems could even introduce new risks.
AI Vulnerability Management Should Start With Security Basics
A key message from the guidance is that organisations should prioritise cyber hygiene before investing heavily in AI vulnerability management solutions.
According to the NCSC, unpatched systems and weak access controls remain far more dangerous than many advanced zero-day threats. The agency stressed that businesses should first understand their IT estate, software dependencies, and patching processes before relying on AI tools to uncover vulnerabilities.
The advisory noted that thousands of vulnerabilities are reported every year, but only a relatively small percentage are actively exploited by attackers. The NCSC referenced data showing that more than 40,000 vulnerabilities were assigned CVEs in 2025, while only a fraction appeared in exploitation tracking systems such as the Known Exploited Vulnerabilities (KEV) catalog.

This highlights why prioritised patching and effective remediation remain central to strong cybersecurity practices.
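The prioritisation logic described above can be sketched in a few lines: findings that appear in an exploited-vulnerabilities list such as the KEV catalog are patched first, then the rest by severity. This is a minimal illustrative sketch; the CVE IDs, scores, and the local `known_exploited` set are invented placeholders, not real advisory data (in practice the set would be loaded from the published KEV feed).

```python
# Illustrative sketch: prioritise reported CVEs by known exploitation, then
# by CVSS severity. All CVE IDs and scores below are invented examples.

# Local snapshot of known-exploited CVE IDs (in practice, loaded from the
# published KEV catalog feed).
known_exploited = {"CVE-2024-0001", "CVE-2024-0002"}

# Scanner- or AI-reported findings: (CVE ID, CVSS base score).
findings = [
    ("CVE-2024-0001", 7.5),
    ("CVE-2024-9999", 9.8),
    ("CVE-2024-0002", 5.4),
]

# Sort so actively exploited CVEs come first, then by severity descending.
prioritised = sorted(
    findings,
    key=lambda f: (f[0] not in known_exploited, -f[1]),
)

for cve, score in prioritised:
    tag = "EXPLOITED" if cve in known_exploited else "no known exploitation"
    print(f"{cve}  CVSS {score}  {tag}")
```

Note that the unexploited CVE-2024-9999 ranks last despite having the highest CVSS score, which mirrors the point above: exploitation evidence, not raw volume or severity alone, should drive patching order.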
Organisations Must Prepare to Handle AI-Discovered Vulnerabilities
The NCSC warned that companies adopting AI vulnerability management tools need a mature process for handling the large number of findings these systems can generate.
Security teams must be able to receive, prioritise, assess, and fix vulnerabilities without overwhelming operational teams. The guidance also emphasised the importance of addressing the root cause of vulnerabilities instead of only fixing individual flaws.
The agency encouraged organisations to develop structured vulnerability management processes and maintain clear workflows for remediation and patch deployment.
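One way to act on the root-cause advice above is to group incoming findings by a shared weakness class before raising remediation work, so teams fix the underlying pattern once rather than patching each instance separately. The sketch below assumes findings carry a CWE label; every identifier and filename is invented for illustration.

```python
# Illustrative sketch: group AI-reported findings by a shared root cause
# (here, a CWE weakness class) so one remediation ticket covers every
# affected instance. All IDs, files, and CWE labels are invented examples.
from collections import defaultdict

findings = [
    {"id": "F-101", "file": "auth.py",   "cwe": "CWE-89"},   # SQL injection
    {"id": "F-102", "file": "search.py", "cwe": "CWE-89"},
    {"id": "F-103", "file": "upload.py", "cwe": "CWE-434"},  # unrestricted upload
]

by_root_cause = defaultdict(list)
for f in findings:
    by_root_cause[f["cwe"]].append(f["id"])

# One remediation item per weakness class, covering every affected instance.
for cwe, ids in sorted(by_root_cause.items()):
    print(f"{cwe}: {len(ids)} finding(s) -> {', '.join(ids)}")
```

Grouping like this also helps keep AI-scale output manageable: three raw findings here collapse into two pieces of remediation work.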
Data Exposure and Infrastructure Risks Remain Major Concerns
The guidance also highlighted several risks associated with using AI models for vulnerability discovery.
One of the biggest concerns is data exposure. Organisations may unknowingly provide AI platforms with access to sensitive code repositories, internal documentation, historic bug reports, or even production systems.
The NCSC advised organisations to carefully assess how AI systems are deployed, what permissions they receive, and whether infrastructure is properly sandboxed. Businesses were also urged to review data retention policies, legal obligations, and jurisdictional issues before using hosted AI models.
The advisory specifically asked organisations to consider questions such as whether the AI system can access production environments, how infrastructure will be secured, and whether the organisation understands the terms and conditions attached to AI services.
Human Expertise Still Critical in AI Vulnerability Management
While AI tools are becoming more capable, the NCSC made clear that they are not a replacement for cybersecurity professionals.
The guidance stated that AI models should be viewed as tools that enhance the capabilities of security teams rather than replace them. Organisations were encouraged to invest in skilled cybersecurity staff who can validate AI-generated findings and interpret results accurately.
The NCSC also recommended combining AI analysis with human verification to reduce false positives and improve the reliability of vulnerability assessments.
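A human-in-the-loop gate of the kind recommended above can be sketched as a review queue: AI-generated findings stay in a pending state until an analyst records a verdict, so unverified results never reach the remediation backlog. The states, names, and the simulated analyst verdict below are all illustrative assumptions, not part of the NCSC guidance.

```python
# Illustrative sketch of a human-verification gate: AI findings must be
# confirmed by an analyst before entering the backlog. The verdicts below
# are simulated for the example; in practice a person supplies them.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    ai_confidence: float          # model-reported confidence, 0..1
    status: str = "needs_review"  # needs_review -> confirmed | rejected

def analyst_review(finding: Finding, confirmed: bool) -> Finding:
    """Record a human verdict; only a reviewer can promote a finding."""
    finding.status = "confirmed" if confirmed else "rejected"
    return finding

queue = [Finding("CVE-2024-1111", 0.92), Finding("CVE-2024-2222", 0.41)]

# Simulated analyst pass: here the reviewer confirms only the first finding.
reviewed = [analyst_review(queue[0], True), analyst_review(queue[1], False)]

# Only confirmed findings reach the remediation backlog.
backlog = [f for f in reviewed if f.status == "confirmed"]
print([f.cve for f in backlog])
```

The key design point is that `status` changes only through `analyst_review`, keeping the model's confidence score advisory rather than authoritative.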
Long-Term Planning Needed as AI Models Evolve
The advisory stressed that organisations must prepare for rapid advancements in AI cybersecurity capabilities over the coming years.
The NCSC believes frontier AI developments will play a major role in cyber resilience throughout the next decade. As new models emerge with evolving capabilities, organisations will need long-term strategies for managing resources, updating security workflows, supporting customers, and responding to vulnerabilities discovered in third-party products and services.
The agency also emphasised the importance of strong asset management and dependency management practices, noting that organisations should have a clear understanding of all systems, libraries, and services operating within their environments.
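For a single Python host, one small input to the asset and dependency picture described above is an inventory of installed packages and versions. This is a minimal sketch using the standard library; a real programme would aggregate such inventories across systems and combine them with SBOMs and service catalogues.

```python
# Illustrative sketch: enumerate installed Python packages and versions as
# one input to a dependency inventory, using only the standard library.
from importlib.metadata import distributions

inventory = sorted(
    {(d.metadata["Name"], d.version)
     for d in distributions()
     if d.metadata["Name"]}  # skip entries with broken metadata
)

for name, version in inventory[:5]:  # print a small sample
    print(f"{name}=={version}")
```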
As interest in AI vulnerability management continues to grow, the NCSC’s guidance serves as a reminder that AI adoption in cybersecurity requires careful planning, governance, and operational maturity rather than quick deployment driven by hype alone.