Why cybersecurity cannot hire its way through the AI era

The cybersecurity industry has battled a talent shortage and skills gap for years. Meanwhile, organizations need a more proactive and effective approach to risk management. AI seems the clear answer to both.

Open tech roles are trending down or flat, while demand for AI skills is climbing fast. That structural change shows that automation is no longer optional. The speed, scale and complexity of modern threats have already outpaced manual processes, and AI is the only viable way to keep up. That’s why humans armed with AI make such a powerful combination: the need for speed and precision, combined with a shortage of skilled people, forces us to rethink our cybersecurity workflows. AI agents will take over high-volume, repetitive tasks, continuously analyzing vast streams of telemetry, correlating signals across environments, and surfacing the handful of risks that truly matter. They will find the needle in the haystack.
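
To make that correlation step concrete, here is a minimal Python sketch. The alert schema and scoring are illustrative assumptions, not any vendor’s implementation: group detections by asset, reward corroboration across independent sensors, and surface only the top clusters.

```python
from collections import defaultdict

# Correlate raw detections by asset, score each cluster, and surface only
# the top few. The alert fields (asset, source, severity) are illustrative
# assumptions, not a real product schema.

def surface_top_risks(alerts, top_k=3):
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[alert["asset"]].append(alert)

    scored = []
    for asset, group in clusters.items():
        sources = {a["source"] for a in group}
        max_severity = max(a["severity"] for a in group)
        # Corroboration across independent sensors raises confidence.
        scored.append((max_severity * len(sources), asset, group))

    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]  # the handful of risks that truly matter

if __name__ == "__main__":
    alerts = [
        {"asset": "web-01", "source": "edr", "severity": 7},
        {"asset": "web-01", "source": "waf", "severity": 8},
        {"asset": "db-02",  "source": "edr", "severity": 5},
    ]
    for score, asset, group in surface_top_risks(alerts):
        print(asset, score, f"{len(group)} signals")
```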

Over nearly three decades in tech and cybersecurity, I’ve watched every technological shift redefine the workforce. The AI boom is simply the latest cycle. Centuries ago, it was surely someone’s job to chisel manuscripts onto stone slabs. When ink and paper were invented, the world evolved, the workforce changed, and we all survived. In the short term, adopting new technology may reduce demand for some roles, and people who don’t reskill may find themselves out of a job. Yet over time, AI-driven productivity gains will help businesses grow and create positions that did not exist before. We’re already witnessing the creation of entirely new disciplines in model evaluation, orchestration, and AI security. Ultimately, the workforce must adapt.

But how are organizations supposed to fill AI roles when almost no one has expertise in the new field? These roles are hard to fill because they require experience across the AI lifecycle: data sourcing, training, evaluation, deployment, and monitoring, plus the judgment to defend those systems when attackers aim at the model rather than the application. You don’t learn that from tutorials; you earn it by deploying AI systems into production, getting attacked, and iterating on controls. Deloitte’s analysis framed the paradox succinctly: the very AI that accelerates operations introduces shadow usage, agentic autonomy, and data-leakage risks that must be actively governed.

Here’s the good news: CISOs don’t have to fix everything. When everything is critical, nothing is. The most effective cybersecurity programs focus on reducing the risks that matter most to the business. A Risk Operations Center (ROC) provides the framework to do exactly that: consolidating risk factors, applying business-driven prioritization, and orchestrating remediation. Unlike a Security Operations Center (SOC), which focuses on analyzing and responding to incidents after they happen, the ROC takes a proactive, forward-looking approach to reducing the risk of a catastrophic cyber event. Agentic AI can take risk orchestration to the next level by automating threat prioritization and guiding remediation strategies aligned to an organization’s unique risk posture. The AI-native ROC shifts organizations from reactive firefighting to proactive risk management, ensuring security keeps pace with AI-driven innovation.
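
As an illustration of business-driven prioritization, consider this minimal Python sketch. The fields and weights are hypothetical assumptions, not any specific product’s risk model: a finding’s technical severity is weighted by asset criticality and exposure, so a modest flaw on a crown-jewel system can outrank a critical one on a sandbox.

```python
from dataclasses import dataclass

# Hypothetical inputs for ROC-style prioritization; the fields and weights
# below are illustrative assumptions, not an actual risk model.

@dataclass
class Finding:
    title: str
    severity: float      # technical severity, 0-10 (e.g., CVSS-like)
    criticality: float   # business criticality of the asset, 0-1
    exposed: bool        # reachable from the internet?

def business_risk(f: Finding) -> float:
    # Weight technical severity by what the asset means to the business
    # and by how reachable it is to an attacker.
    return f.severity * f.criticality * (1.5 if f.exposed else 1.0)

findings = [
    Finding("RCE on isolated test box", 9.8, 0.2, False),
    Finding("Auth bypass on payment API", 8.1, 1.0, True),
]
for f in sorted(findings, key=business_risk, reverse=True):
    print(f"{business_risk(f):5.2f}  {f.title}")
```

Here the payment-API flaw outranks the technically more severe finding on the test box, which is exactly the shift from “everything is critical” to “what matters most to the business.”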

The truth is that everything we do in cybersecurity is risk management, and the ROC is not just for security. It connects CISOs, CIOs, CFOs, business unit leaders, and boards around a single view of risk, bridging priorities, aligning decisions, and creating accountability across the business. Boards are optimizing for ROI and resilience simultaneously. They recognize AI’s productivity upside but also expect security leaders to connect spend to business outcomes: fewer material incidents, faster reporting, lower exposure, and demonstrable business continuity. The question is no longer “How much did we spend?” but “What risk did we measurably reduce?” Our hiring strategies must therefore align to outcomes, not headcount: upskill the workforce, redeploy people cross-functionally, and use security platforms with embedded, governed AI capabilities rather than adding headcount to manage tool sprawl. That efficiency helps the business strengthen its top line, driving expansion and ultimately fueling future hiring.

We also need to confront a hard truth: AI-generated code is often insecure. Multiple studies in 2025 found that about 45% of AI-generated code contains security flaws, with particularly weak defenses against cross-site scripting and log injection. If AI becomes the default author of routine code, we will ship vulnerabilities faster than human review can catch them unless we embed security in the pipeline. That means enforcing mandatory reviews, continuously scanning both code and binaries, gating high-risk changes behind human approval, and logging agent actions just as we log privileged users.
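
In pipeline terms, those controls might look like the following Python sketch. The scanner and approval hooks (run_sast, run_binary_scan, require_human_approval) are hypothetical stubs standing in for real tools, not an existing API.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

HIGH_RISK_PREFIXES = ("auth/", "payments/", "infra/")

def run_sast(change): return []                    # stub: plug in a real SAST scanner
def run_binary_scan(change): return []             # stub: plug in a binary scanner
def require_human_approval(change): return False   # stub: wire to a review queue

def gate_change(change: dict) -> bool:
    # Log agent actions the way we log privileged users: who or what
    # authored the change, when, and which files it touched.
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "author": change["author"],                # e.g., "codegen-agent-v2"
        "files": change["files"],
    }))

    # Scan both source and built artifacts on every change.
    if run_sast(change) or run_binary_scan(change):
        return False  # block: never merge with open findings

    # High-risk areas require a human even when the scans are clean.
    if any(f.startswith(HIGH_RISK_PREFIXES) for f in change["files"]):
        return require_human_approval(change)

    return True  # low-risk, clean change can merge automatically

print(gate_change({"author": "codegen-agent-v2",
                   "files": ["payments/refund.py"]}))  # -> False until a human approves
```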

The paradox isn’t paradoxical at all. AI is compressing some job categories while expanding others and raising the bar for everyone. Cybersecurity leaders who embrace that reality will be best positioned to deliver resilience, regulatory readiness, and growth in 2026.

Sumedh Thakar is the president and CEO of Qualys, a leading cybersecurity company, and is passionate about making the digital world safer. He joined Qualys in 2003 as an engineer and rose through leadership roles including chief product officer and president, helping expand the Qualys platform with integrated capabilities and scaling global engineering teams. He is a co-inventor on five U.S. cybersecurity patents and previously held engineering roles at Intacct and Northwest Airlines.
