AI built it, but can you trust it?

In this Help Net Security interview, John Morello, CTO at Minimus, discusses the security risks in AI-driven development, where many dependencies are pulled in quickly. He explains why it’s hard to secure software stacks that no one fully understands. He also shares what needs to change to keep development secure as AI becomes more common.

We’re seeing AI-assisted development pull in hundreds of dependencies from diverse sources at speed. From your perspective, what’s the most urgent security challenge this creates in practice?

In many ways, this isn’t really a new problem. Over the past decade or so, development practices have become increasingly dependent on an ever-expanding set of abstract app frameworks, each with its own collection of packages. In many cases, installing a single package pulls in dozens of second- and third-order dependencies, and it’s not viable for developers to understand each of them in detail. Developers often end up running a huge stack of software to support their app’s functionality, with limited understanding of the risk it presents.
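As a concrete illustration of that fan-out, the following minimal Python sketch (not part of the interview, just an example) walks installed package metadata to count the second- and third-order dependencies behind a single install. The package name "requests" is only a placeholder, and the requirement parsing is deliberately simplified.

```python
# Minimal sketch: count the transitive dependencies one installed package
# pulls in. "requests" is just an example; use any package installed in
# the current environment. Extras/marker handling is simplified.
import re
from importlib import metadata

def transitive_deps(pkg, seen=None):
    seen = set() if seen is None else seen
    try:
        requirements = metadata.requires(pkg) or []
    except metadata.PackageNotFoundError:
        return seen  # requirement not installed here; stop descending
    for req in requirements:
        if "extra ==" in req:
            continue  # skip optional extras
        match = re.match(r"[A-Za-z0-9._-]+", req)
        if not match:
            continue
        name = match.group(0).lower()
        if name not in seen:
            seen.add(name)
            transitive_deps(name, seen)  # recurse into 2nd/3rd order deps
    return seen

if __name__ == "__main__":
    deps = transitive_deps("requests")
    print(f"requests pulls in {len(deps)} transitive dependencies:")
    print(", ".join(sorted(deps)))
```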

What AI worsens here is the expectation of speed and the corresponding loss of time to understand what’s being deployed. If developers already struggled to understand the stack they assembled themselves, imagine how unlikely it is that they will understand the stack an AI pulls in on their behalf. It’s realistically impossible to secure what you don’t understand, and my biggest concern with AI’s use in development is that it shrinks the ratio of what we actually understand to the full set of dependencies involved in running a given app.

What changes do you think need to happen in open source project governance or metadata standards to keep up with AI-integrated development workflows?

AI isn’t inherently bad or inherently good from a security perspective. It’s another tool that can accelerate and magnify both good and bad behaviors. On the good side, if models can learn to assess the vulnerability state and general trustworthiness of app components, and factor that learning into the code they suggest, AI can have a positive impact on the security of the resulting output. Open source projects can already leverage AI to help find potential vulnerabilities and even submit PRs to address them, but there still needs to be significant human oversight to ensure the results actually improve the project’s security. Many open source projects still struggle to build that security knowledge among their contributors and sometimes deprioritize security in favor of feature or schedule gains. AI can’t make those tradeoffs for them, but it can hopefully make the decision making more data driven.

Do you think current AppSec tools are equipped to remove vulnerabilities introduced by AI-generated code during the build phase? Why or why not?

I don’t have great confidence in tooling being able to automatically find and fix vulnerabilities even in code written by humans. Certainly, there is great benefit to using automated linters and similar tools to find common mistakes, and that can be accelerated by AI. However, many of the most critical vulnerabilities are fairly deep bugs that aren’t easily seen without a strong understanding of an app’s overall behavior, and they often need to be combined with other environmental factors to be fully exploited. Those kinds of bugs are far harder for AI to find and fix today, though it’s certainly possible that as models are purposefully trained for this, the results will improve.
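A hypothetical pair of examples illustrates the distinction: the first function below is the kind of shallow mistake that linters and SAST tools flag from the code alone, while the second is only recognizable as a flaw if you understand the application’s authorization model.

```python
# Hypothetical examples: a shallow, scanner-detectable flaw versus a deeper
# flaw that requires understanding the app's overall behavior.
import sqlite3

def search_users_shallow(conn, name):
    # Shallow bug: string-formatted SQL. Static analyzers flag this
    # injection pattern without any wider context.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def get_invoice_deep(conn, requesting_user_id, invoice_id):
    # Deeper bug: the query is parameterized and looks "clean", but nothing
    # checks that the invoice belongs to the requesting user. Spotting this
    # broken access control requires knowing the app's authorization model,
    # not just scanning the code.
    return conn.execute(
        "SELECT * FROM invoices WHERE id = ?", (invoice_id,)
    ).fetchone()
```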

How do you think about “trust boundaries” in a world where AI systems can generate infrastructure-as-code, config files, and application logic that all include open source components? What does a secure-by-default AI-assembled stack look like to you?

It shouldn’t look much different than the same stack assembled by a human expert. Regardless of how these artifacts are created, the security best practices are the same. My biggest concern with AI is that it makes truly hard things seem too easy. That can be very valuable when it removes repetitive, tedious tasks, but it becomes risky when it results in deployments that no one really understands.

If you simply trust an AI to generate all the artifacts needed to build, deploy, and run anything sophisticated, it will be very difficult to know whether it has done so well and which risks it has mitigated. In many ways, this looks a lot like the classic “curl and pipe to bash” risk that has long existed, where users put blind trust in what they’re getting from external sources. Often that works out fine, but sometimes it doesn’t.
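A minimal sketch of the alternative to that blind trust is to verify a fetched artifact against a known-good digest before using it; the URL and hash below are placeholders, not real values.

```python
# Minimal sketch: verify a downloaded artifact against a pinned digest
# instead of piping it straight into a shell. URL and digest are placeholders.
import hashlib
import urllib.request

ARTIFACT_URL = "https://example.com/install.sh"  # placeholder
EXPECTED_SHA256 = "0" * 64                        # placeholder digest

def fetch_and_verify(url, expected_sha256):
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"digest mismatch: got {digest}")
    return data  # only returned (and later run/installed) if it matches

if __name__ == "__main__":
    script = fetch_and_verify(ARTIFACT_URL, EXPECTED_SHA256)
    print(f"verified {len(script)} bytes")
```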

If you had to advise a CISO or DevSecOps lead right now, what’s one thing they should start doing today, and one thing they should stop, to secure AI-driven development environments?

The main thing I’d advise is to use AI as an accelerant and enabling tool, not as a crutch or a replacement for human knowledge and decision making. AI can create impressive results quickly, but it doesn’t necessarily prioritize security and may in fact make many choices that degrade it. Have good architectures, good controls, and human experts who really understand the recommendations it makes and who can adapt and re-prompt as necessary to strike the right balance.

