
The problem is that most organizations may detect the symptoms of poisoning without tracing them back to a common source. “If you had a leak in your house, and it was coming out in your basement, and it was coming out in your closet, your bathroom, and your bedroom, you assume that you have 12 leaks,” Meyers says. “But there could be one pipe that’s causing all of those leaks.”
What security leaders should do
There is no silver-bullet product for AI data poisoning, and most CISOs looking for one are asking the wrong question. The immediate challenge is far less glamorous: understanding what data the model trusts, who controls that data, and whether the enterprise is already feeding its own systems bad information.
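One practical starting point is an inventory that answers those questions mechanically: which datasets the model consumes, who owns each one, and whether any have changed since they were last vetted. The sketch below, in Python, is a minimal illustration of that idea. It checks datasets on disk against a manifest of trusted sources; the manifest file name, its JSON schema, and the owner field are hypothetical illustrations, not any particular product's format.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def audit_data_sources(manifest_path: str) -> list[str]:
    """Compare each dataset on disk against a trusted manifest.

    The manifest format is hypothetical: a JSON list of entries like
    {"path": "data/vendors.csv", "owner": "data-eng", "sha256": "..."}.
    Returns findings: missing files, missing owners, and hash drift.
    """
    findings = []
    manifest = json.loads(Path(manifest_path).read_text())
    for entry in manifest:
        path = Path(entry["path"])
        if not path.exists():
            findings.append(f"MISSING: {path}")
            continue
        if not entry.get("owner"):
            # Nobody accountable for this source is itself a finding.
            findings.append(f"NO OWNER OF RECORD: {path}")
        actual = sha256_of(path)
        if actual != entry["sha256"]:
            # Hash drift: the data changed since it was last vetted.
            findings.append(f"HASH DRIFT: {path} (expected "
                            f"{entry['sha256'][:12]}..., got {actual[:12]}...)")
    return findings

if __name__ == "__main__":
    # "trusted_sources.json" is a placeholder name for the manifest.
    for finding in audit_data_sources("trusted_sources.json"):
        print(finding)
```

A periodic audit like this does not stop a determined attacker, but it surfaces the two failures Lee describes: sources nobody ranked for reliability, and sources nobody kept up to date.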
“The thing I see continuously at this point is they’re struggling with which data sources to input, which are the ones that are most reliable, and how do we keep that up to date?” SANS’ Lee says.
SANS’ Cochran suggests CISOs also need to stop thinking only about the foundation model and start mapping every place AI gets context. “At any place where a model interacts with data, you can have data or context poisoning,” he says.
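That mapping can be made concrete as a registry of context entry points, each with a named owner and validation rules, so nothing reaches the model from an unregistered path. The Python sketch below is one way to frame it under those assumptions; the source names, owners, and validators are invented for illustration, not taken from any vendor's tooling.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ContextSource:
    """One place where outside data can reach the model."""
    name: str
    owner: str
    validators: list[Callable[[str], bool]] = field(default_factory=list)

    def admit(self, text: str) -> bool:
        """Context is admitted only if every validator passes."""
        return all(v(text) for v in self.validators)

# Hypothetical inventory: every path by which context reaches the model.
REGISTRY = [
    ContextSource("rag_index", owner="platform-team",
                  validators=[lambda t: len(t) < 10_000]),
    ContextSource("web_search", owner="unowned",  # red flag: nobody vets this
                  validators=[]),
    ContextSource("ticket_history", owner="it-ops",
                  validators=[lambda t: "IGNORE PREVIOUS" not in t.upper()]),
]

def gate_context(source_name: str, text: str) -> str:
    """Admit context only from registered, validated sources."""
    for src in REGISTRY:
        if src.name == source_name:
            if not src.admit(text):
                raise ValueError(f"context from {source_name} failed validation")
            return text
    raise KeyError(f"unregistered context source: {source_name}")
```

The value is less in the validators themselves than in forcing the inventory: an entry with no owner or no checks, like the web_search source above, is exactly the kind of unmapped interaction point Cochran is warning about.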
