CSO Online

Anthropic ban heralds new era of supply chain risk — with no clear playbook

“You need a full AI security solution,” he tells CSO, arguing that AI systems are dynamic, with models, data, and behaviors that change over time, making static inventories insufficient without ongoing monitoring and governance. “You want complete visibility into your AI applications, your AI agents, your AI tools, your plugins, the data they’re accessing, everything around that whole infrastructure of AI that is being used to build your applications or agents. Once you do that, that’s discovery. It’s a good thing. It’s a start.”

A new category of supply chain risk

The Anthropic case represents a shift in how governments approach AI technologies, treating models and their associated ecosystems as supply chain components that can be restricted or removed.

For CISOs, the challenge is not simply responding to a single directive, but preparing for a future in which similar actions could be applied to other AI providers not only by the US government, but also by regulators and customers. That requires visibility into AI dependencies, clarity about how those dependencies are used, and a strategy for replacing them without disrupting critical systems.
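The dependency visibility described above can be made concrete as a living inventory that maps each AI component to its provider and a designated fallback. The sketch below is purely illustrative (the class, field names, and sample entries are assumptions, not anything prescribed in the article); it shows how a provider-level restriction could be triaged against such an inventory.

```python
from dataclasses import dataclass, field

@dataclass
class AIDependency:
    """One entry in an AI dependency inventory (illustrative schema)."""
    name: str
    provider: str
    kind: str                       # e.g. "model", "agent", "plugin", "tool"
    used_by: list = field(default_factory=list)  # internal systems that depend on it
    fallback: str = ""              # pre-vetted replacement, empty if none

def restricted_exposure(inventory, banned_provider):
    """Split dependencies affected by a provider restriction into those
    with a pre-vetted fallback and those with no replacement plan."""
    hit = [d for d in inventory if d.provider == banned_provider]
    ready = [d for d in hit if d.fallback]
    blocked = [d for d in hit if not d.fallback]
    return ready, blocked

# Hypothetical sample inventory for demonstration only.
inventory = [
    AIDependency("claude-model", "Anthropic", "model",
                 used_by=["support-bot"], fallback="alt-model"),
    AIDependency("code-agent", "Anthropic", "agent",
                 used_by=["ci-pipeline"]),
    AIDependency("embeddings", "OtherVendor", "model",
                 used_by=["search"]),
]

ready, blocked = restricted_exposure(inventory, "Anthropic")
```

Here `ready` holds the one Anthropic dependency with a named fallback and `blocked` the one without, which is the gap analysis a CISO would need before any restriction lands.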

As those expectations take shape, organizations are being asked to operate at a level of insight and control that many have not yet achieved. As Friedman cautions, “Everyone is moving quickly to build on these systems without really understanding what’s inside them.”
