You probably think twice before downloading a random app or opening an unfamiliar email attachment. But how often do you stop to consider what happens when your team downloads and loads a machine learning model?
A recent study shows why you should. Researchers from Politecnico di Milano found that loading a shared model can be just as risky as running untrusted code. In their tests, they uncovered six previously unknown flaws in popular machine learning tools. Each one could let an attacker take control of a system the moment a model is loaded.
These findings reveal a new type of supply chain threat, one that hides inside the very models organizations are eager to adopt.
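To see why loading a model can amount to running code, it helps to look at pickle, the Python serialization format that several widely used model formats build on. The short sketch below is illustrative, not taken from the study: a hypothetical class makes deserialization call a harmless print, but the same hook lets a malicious file invoke any function on the machine that loads it.

```python
# A harmless demonstration of why deserializing a pickle-based "model" runs code.
# The class is hypothetical; an attacker would point __reduce__ at something
# far worse than print().
import pickle


class NotReallyAModel:
    def __reduce__(self):
        # Tells pickle: when this object is deserialized, call print(...).
        return (print, ("code executed while 'loading the model'",))


payload = pickle.dumps(NotReallyAModel())   # what an attacker would publish
pickle.loads(payload)                       # what the victim does: just load it
```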
Security controls are uneven
The researchers looked at the tools used to build and save machine learning models and the hubs where they are shared. They found that security controls vary widely. Some platforms scan files for known threats, while others rely on isolated environments or simply trust users to handle risks themselves.
Even when tools include security settings, they may not work as expected. Some formats are promoted as safer because they are based on data rather than code. In practice, the study found these formats can still allow attackers to run harmful code depending on how they are processed.
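For context, the snippet below shows the kinds of options the paragraph refers to, assuming PyTorch and the safetensors library; the file names are placeholders. These settings narrow what a file can do when it is loaded, but as the study notes, they are not a guarantee, because the outcome still depends on how the surrounding code handles the file.

```python
# A sketch of the "safer" loading options discussed above, assuming PyTorch
# and the safetensors library are installed; file names are placeholders.
import torch
from safetensors.torch import load_file

# weights_only=True limits torch.load to tensors and primitive containers
# instead of letting the pickle machinery rebuild arbitrary Python objects.
state_dict = torch.load("model.pt", weights_only=True)

# safetensors is a data-only format: the file carries raw tensors plus
# metadata, so loading it does not deserialize executable objects at all.
tensors = load_file("model.safetensors")
```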
The gaps are especially concerning because adoption of secure versions of these tools is slow. The researchers observed that older versions of machine learning frameworks were downloaded far more often than newer releases that contain security fixes. This mirrors a familiar problem in software security, where legacy systems remain in use long after patches are available.
Perception does not match reality
The team also surveyed 62 machine learning practitioners to understand how they view these risks. Among those who regularly work with models, 73 percent said they feel more comfortable loading models from well-known hubs that promote built-in security scanning.
However, this trust is often misplaced. The study showed that scanning tools sometimes failed to detect malicious models, and in other cases files were labeled safe simply because the scanner did not support their format. This mismatch between perceived and actual protection can breed overconfidence and leave systems exposed.
Steps CISOs can take
CISOs should treat machine learning models like any other piece of code that enters their environment. They should push their teams to:
- Use trusted sources for machine learning models and verify their origins.
- Maintain strict isolation when testing or deploying new models.
- Keep frameworks and related tools up to date to reduce exposure to known flaws.
- Set policies for how models are scanned and approved before use, and review how well those policies are working (one example of such a check is sketched below).
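As one concrete illustration of the last point, the sketch below shows the kind of check an approval gate might run on a pickle-based model file before it is accepted. It uses Python's standard pickletools module to list the imports the file would perform and flags anything outside an allowlist. The allowlist and file handling are assumptions for the example; purpose-built scanners go much further.

```python
# A sketch of one check a model-approval gate could run on a pickle-based file.
# ALLOWED_MODULES is an assumed policy for this example, not from the study.
import pickletools

ALLOWED_MODULES = {"torch", "numpy", "collections"}


def flag_imports(path: str) -> list[str]:
    """List imports a raw pickle stream would perform when loaded."""
    findings = []
    # Note: PyTorch .pt files wrap the pickle stream inside a zip archive,
    # so a real gate would unpack the archive and scan the embedded pickle.
    with open(path, "rb") as stream:
        for opcode, arg, _pos in pickletools.genops(stream):
            if opcode.name == "GLOBAL":
                # Older-protocol import opcode: arg looks like "os system".
                top_level = arg.split()[0].split(".")[0]
                if top_level not in ALLOWED_MODULES:
                    findings.append(f"unexpected import: {arg}")
            elif opcode.name == "STACK_GLOBAL":
                # Newer protocols resolve imports from strings on the stack;
                # a full scanner reconstructs them, here we simply flag it.
                findings.append("STACK_GLOBAL opcode present, review manually")
    return findings


if __name__ == "__main__":
    for finding in flag_imports("downloads/candidate_model.pkl"):
        print(finding)
```

A check like this would run inside the isolated environment recommended above and alongside any scanning the hosting hub provides, with unexpected findings routed to manual review rather than straight into production.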