A newly discovered vulnerability in the AI supply chain—termed Model Namespace Reuse—permits attackers to achieve Remote Code Execution (RCE) across major AI platforms, including Microsoft Azure AI Foundry, Google Vertex AI, and thousands of open-source projects.
By re-registering abandoned or deleted model namespaces on Hugging Face, malicious actors can trick pipelines that fetch models by name into deploying tainted repositories, compromising endpoint environments and granting unauthorized access.
Trusted model names alone are insufficient; organizations must urgently reassess their AI security practices.
Hugging Face hosts AI models as Git repositories identified by an Author/ModelName namespace. When an author account is deleted or a model’s ownership is transferred, those original namespaces return to an available pool.
Malicious models could result in a range of unintended outcomes, from incorrect diagnoses to ongoing unauthorized access by an attacker on affected systems.
Model Namespace Reuse exploits this by allowing anyone to re-register the abandoned namespace and recreate its path. Pipelines that reference models strictly by name will then fetch the attacker’s model instead of the original.
How Developers Pull Models
Developers typically use code like:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("AIOrg/Translator_v1")
```
This two-part convention (author and model name) assumes the namespace remains controlled by the original publisher.
Without lifecycle controls, however, an abandoned namespace can be hijacked, silently replacing the legitimate model in downstream deployments.
Google Vertex AI
Vertex AI’s Model Garden integrates Hugging Face models for one-click deployment. Our team found several “verified” models whose original authors had been deleted on Hugging Face. By re-registering one such namespace and uploading a backdoored model, we embedded a reverse shell payload.

Upon deployment in Vertex AI, we gained shell access to the containerized endpoint. Google now scans daily for orphaned namespaces, marking them “verification unsuccessful” to block deployment.
Azure AI Foundry
Azure AI Foundry’s Model Catalog similarly sources models from Hugging Face. We identified reusable namespaces where authors were removed but models remained deployable.
Registering an unclaimed namespace and uploading a malicious model yielded a reverse shell at the endpoint, giving an initial foothold into the Azure environment. Microsoft has since been notified and is evaluating protective measures.
Open-Source Repositories
Scanning GitHub for code that fetches Hugging Face models uncovered thousands of repositories referencing vulnerable Author/ModelName identifiers.

Popular projects hard-coded default models that attackers could reclaim, turning downstream deployments malicious.
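As a rough illustration of how such references can be audited, the sketch below walks a checkout and flags hard-coded Hugging Face identifiers passed to from_pretrained. The regex and the restriction to .py files are assumptions made for illustration, not an exhaustive detector; a real audit would also cover configuration files, documentation, and default parameters.

```python
import re
from pathlib import Path

# Hypothetical, simplified pattern: matches from_pretrained("Author/ModelName", ...)
# with a literal string argument.
PATTERN = re.compile(r'from_pretrained\(\s*["\']([\w\-.]+/[\w\-.]+)["\']')

def find_model_references(repo_root: str):
    """Yield (file, namespace) pairs for hard-coded Author/ModelName identifiers."""
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for match in PATTERN.finditer(text):
            yield path, match.group(1)

if __name__ == "__main__":
    for file, namespace in find_model_references("."):
        print(f"{file}: {namespace}")
```

Each flagged namespace can then be checked against Hugging Face to see whether its author account still exists.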
Model Registry Supply Chain
Beyond direct Hugging Face pulls, secondary registries—like Kaggle’s Model Catalog and other private model registries—can ingest vulnerable models.
Users fetching from these registries inherit the same risk, never interacting directly with Hugging Face yet still exposed.
| Scenario | Cause | User Experience | HTTP Status Code |
|---|---|---|---|
| Ownership Deletion | Author account deleted | Model returns 404 (downtime) | 404 |
| Ownership Transfer | Model transferred, old author account deleted | Requests redirect (no downtime) | 307 |
In transfer scenarios, redirection masks the risk until an attacker reclaims the old namespace and breaks the redirect.
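Based on the status codes in the table above, a pipeline could run a pre-flight check on a referenced namespace before pulling it. The sketch below is a minimal example that assumes the public https://huggingface.co/api/models/Author/ModelName endpoint returns 404 for deleted repositories and a redirect for transferred ones, matching the table; exact hub behavior should be verified against current Hugging Face documentation.

```python
import requests

HUB = "https://huggingface.co/api/models"

def check_namespace(repo_id: str) -> str:
    """Classify a Hugging Face namespace using the status codes described above."""
    # allow_redirects=False so a transfer-induced redirect (e.g. 307) stays visible
    resp = requests.get(f"{HUB}/{repo_id}", allow_redirects=False, timeout=10)
    if resp.status_code == 404:
        return "missing: author deleted or repo removed (namespace may be re-registerable)"
    if resp.status_code in (301, 302, 307, 308):
        target = resp.headers.get("Location")
        return f"redirected to {target} (ownership transfer; old namespace may be reclaimable)"
    if resp.status_code == 200:
        return "resolves directly"
    return f"unexpected status {resp.status_code}"

print(check_namespace("AIOrg/Translator_v1"))  # example identifier from above
```

A redirected result is exactly the case the table describes: the model still loads today, but only until someone re-registers the old namespace and breaks the redirect.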
Mitigations
To secure AI supply chains, organizations should:
- Version Pinning: Specify a commit hash when calling from_pretrained("Author/ModelName", revision="abcdef1234") to prevent fetching unexpected versions (see the sketch after this list).
- Model Cloning: Mirror trusted models into internal registries or storage after verification, eliminating live dependencies on external sources.
- Comprehensive Scanning: Treat model references as code dependencies by scanning repositories, documentation, default parameters, and docstrings for vulnerable namespaces.
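A minimal sketch of the first two mitigations, assuming the huggingface_hub client library is available: the revision pin fixes the exact commit being fetched, and snapshot_download mirrors the reviewed files into internal storage so later deployments never contact the public hub. The repo ID, commit hash, and mirror path below are placeholders.

```python
from huggingface_hub import snapshot_download
from transformers import AutoModel

REPO_ID = "AIOrg/Translator_v1"          # placeholder namespace from the example above
PINNED_COMMIT = "abcdef1234abcdef1234"   # placeholder commit hash of a reviewed revision
MIRROR_DIR = "/srv/model-mirror/AIOrg/Translator_v1"  # internal storage path (assumption)

# 1. Version pinning: load exactly the reviewed commit, not whatever the branch points to.
model = AutoModel.from_pretrained(REPO_ID, revision=PINNED_COMMIT)

# 2. Model cloning: copy the pinned snapshot into internal storage once,
#    then serve all downstream loads from that mirror instead of the public hub.
snapshot_download(repo_id=REPO_ID, revision=PINNED_COMMIT, local_dir=MIRROR_DIR)
mirrored_model = AutoModel.from_pretrained(MIRROR_DIR)
```

The mirrored copy should be scanned and hash-verified before it is promoted for downstream use, so that later re-registration of the public namespace cannot affect it.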
Model Namespace Reuse is a systemic vulnerability in AI model distribution. Relying solely on namespace identifiers for trust is inadequate. Collaborative action—platform providers hardening namespace lifecycle policies and developers adopting stricter verification practices—is essential to safeguard AI ecosystems against supply chain attacks.