Can Zero Trust survive the AI era?


For the past decade, cybersecurity experts in the federal government have argued that trust, or a lack of it, was key to developing effective security policies for agency systems and data.

But today, cybercriminals and state-sponsored hackers are using artificial intelligence to develop and launch cyberattacks more quickly and efficiently. Governments and businesses are facing pressure to adopt AI-powered cybersecurity defenses, along with security architectures that delegate key security decisions to AI agents.

Jennifer Franks, Director of the Center for Enhanced Cybersecurity at the Government Accountability Office, said federal agencies were currently grappling with how to do both.

“We’re having to consider a two-in-one approach,” Franks said Thursday at the Elastic Public Sector Summit presented by FedScoop. “It’s not something that we have to consider as a tool that’s nice to have, it’s a needed necessity right now in an environment to really look at the best practices for really anticipating the adversaries that could target your environment.”

Zero Trust, a set of security principles with roots in older cybersecurity concepts like “least privilege access,” essentially argues that defenders should treat everything on their network as a potentially compromised asset. Thus, everything requires constant verification of identity, access and authorization to protect against hackers, data breaches and insider threats.
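In practice, that verification happens on every request rather than once at the network perimeter. A minimal sketch in Python of a deny-by-default check, with the identities, resources and policy entries invented purely for illustration:

```python
# Minimal Zero Trust-style gate: every request is verified; nothing is
# trusted by virtue of being "inside" the network. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    identity: str | None   # who is asking (e.g., a validated token subject)
    device_ok: bool        # did the device pass posture checks?
    resource: str          # what the requester wants to touch
    action: str            # what the requester wants to do

# Hypothetical least-privilege policy: explicit grants only.
POLICY = {
    ("alice", "payroll-db", "read"),
}

def authorize(req: Request) -> bool:
    """Deny by default; require identity, device posture, and an explicit grant."""
    if req.identity is None:          # unauthenticated -> never trusted
        return False
    if not req.device_ok:             # unknown or unhealthy device -> deny
        return False
    # Authorization is re-checked on every request, not once per session.
    return (req.identity, req.resource, req.action) in POLICY

print(authorize(Request("alice", True, "payroll-db", "read")))   # True
print(authorize(Request("alice", True, "payroll-db", "write")))  # False: no grant
```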

But threat researchers are reporting that malicious hackers have been able to leverage AI-driven automation and scaling to significantly increase the speed of their attacks, making it increasingly difficult for human operators on the defensive side to keep up or make decisions in real time.  

At the same event, Mike Nichols, general manager for security solutions at Elastic, said his company and other threat research firms have found that AI tools have helped drive down the time it takes to execute an attack and gain access to an organization’s network to around 11 minutes.

Other metrics over the past year point to a lowered barrier to entry for malicious hackers, including an 80-90% decrease in the cost of developing custom malware and a 42% increase in the exploitation of zero-day vulnerabilities before public disclosure.

He argued that cybersecurity defenders will need to embrace AI to defend at similar speeds, going so far as to say “if you’re not using it, you are going to be compromised…like that is a guarantee at this point.”

Nichols said that despite what “disingenuous vendors” may promise, there is currently no technology or process that can provide an organization with genuine, agentic, autonomous cybersecurity operations. Human operators can still control critical decisions made by AI agents through planning on the front end.

“The bottom line is these things are executing your existing processes and adding some reasoning to it,” he said. “And so…you have to have a well-oiled process and documented process.”
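One way to read that advice is as an approval gate: the agent reasons over a documented runbook, and anything outside it falls back to a human operator. A hypothetical sketch, with the runbook entries and function names invented for illustration:

```python
# Sketch of a human-in-the-loop gate for agent actions: the agent may only
# execute steps from a documented, pre-approved runbook; anything else is
# escalated to a human analyst. All names are illustrative assumptions.
APPROVED_RUNBOOK = {
    "isolate_host",       # actions reviewed and signed off in advance
    "reset_credentials",
}

def execute_agent_action(action: str, target: str) -> str:
    if action not in APPROVED_RUNBOOK:
        # Out-of-runbook decisions stay with a human operator.
        return f"ESCALATE: '{action}' on {target} requires human approval"
    # In a real deployment this would call the actual response tooling.
    return f"EXECUTED: {action} on {target} (logged for audit)"

print(execute_agent_action("isolate_host", "host-42"))   # runs automatically
print(execute_agent_action("delete_volume", "host-42"))  # escalated to a human
```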

Cybersecurity veteran and author Chase Cunningham, who has earned the nickname “Dr. Zero Trust” for his advocacy of the principles, told CyberScoop that agentic AI can “absolutely” co-exist within a Zero Trust security architecture, as long as organizations treat agents like any other non-human identity in an enterprise.

He said that network microsegmentation, strict account controls, and continuous logging all align with Zero Trust principles and would limit the potential damage an AI agent could cause.

“It is just another entity on the network that needs to be explicitly known, verified, constrained, monitored, and governed,” he said. “If you do not know what model it is, what data it can access, what systems it can call, what actions it can take, and under what conditions it can do those things, then you have introduced ambiguity into the environment. And ambiguity is exactly what Zero Trust is supposed to remove.”
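Cunningham’s checklist, knowing the model, the data it can access, the systems it can call and the conditions under which it can act, maps naturally onto an identity registry with audit logging. A rough sketch under those assumptions, with every agent name, scope and system invented for illustration:

```python
# Sketch: treating an AI agent as just another non-human identity, explicitly
# registered, scoped, and continuously logged. All names are illustrative.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("agent-audit")

# Explicit registration: which model, which data scopes, which systems.
AGENT_REGISTRY = {
    "triage-agent-01": {
        "model": "example-llm-v1",
        "data_scopes": {"alert-queue"},
        "callable_systems": {"ticketing"},
    }
}

def agent_call(agent_id: str, system: str, data_scope: str) -> bool:
    """Verify the agent is known and constrained before any action; log everything."""
    profile = AGENT_REGISTRY.get(agent_id)
    allowed = (
        profile is not None
        and system in profile["callable_systems"]
        and data_scope in profile["data_scopes"]
    )
    audit.info("agent=%s system=%s scope=%s allowed=%s",
               agent_id, system, data_scope, allowed)
    return allowed

agent_call("triage-agent-01", "ticketing", "alert-queue")   # allowed, logged
agent_call("triage-agent-01", "payroll-db", "alert-queue")  # denied, logged
```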

But Nichols said humans should always be in the loop when agents make decisions on their behalf, and that AI vendors have an equal responsibility to provide more transparency into the products they’re selling.

“You can’t have a black box anymore, you can’t have an AI that says ‘hey, we fixed it, I’m not going to explain why that’s the case,’” said Nichols. “By design you need to find a vendor that’s open API [and who can provide] explainability, the work that has to be there.”

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.


