PickleScan 0-Day Vulnerabilities Enable Arbitrary Code Execution via Malicious PyTorch Models

PickleScan 0-Day Vulnerabilities

Multiple critical zero‑day vulnerabilities have been discovered in PickleScan, a popular open‑source tool used to scan machine learning models for malicious code.

PickleScan is widely used in the AI world, including by Hugging Face, to check PyTorch models saved with Python’s pickle format.

Pickle is flexible but dangerous, because loading a pickle file can run arbitrary Python code. That means a model file can secretly include commands to steal data, install backdoors, or take over a system.
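A minimal illustration of why this is dangerous: an object can define __reduce__ so that simply unpickling it runs a command. The command below is a harmless stand-in for a real payload.

```python
import os
import pickle

# __reduce__ lets an object dictate what gets called at load time.
class Payload:
    def __reduce__(self):
        return (os.system, ("echo pickle payload executed",))

blob = pickle.dumps(Payload())

# Simply deserializing the bytes runs the command -- no attribute access
# or method call on the result is needed.
pickle.loads(blob)
```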

Malicious PyTorch Models Trigger Code Execution

JFrog’s team found that attackers could use these flaws to bypass PickleScan’s checks and still run malicious code when the model is loaded in PyTorch.

Official documentation of Python’s pickle module with a user warning

The first bug, CVE‑2025‑10155, lets attackers dodge scanning by simply changing the file extension.

A malicious pickle file renamed to a PyTorch‑style extension like .bin or .pt can confuse PickleScan, causing it to fail to analyze the content. At the same time, PyTorch still loads and runs it.
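PickleScan’s exact dispatch logic is not reproduced here, but the class of bug is easy to sketch: a toy scanner that decides what to inspect based on the file extension never looks inside a renamed file, while the pickle loader itself ignores the name entirely. The file name and the naive check below are illustrative assumptions, not PickleScan’s real code.

```python
import pathlib
import pickle

def naive_scan(path: str) -> bool:
    """Toy scanner -- NOT PickleScan's real logic. It only inspects files
    whose extension looks like a pickle; anything else passes by default."""
    p = pathlib.Path(path)
    if p.suffix not in {".pkl", ".pickle"}:
        return True                          # never inspected: the blind spot
    return b"system" not in p.read_bytes()   # crude stand-in for opcode analysis

# An ordinary pickle saved under a PyTorch-style extension.
with open("model.bin", "wb") as fh:
    pickle.dump({"weights": [0.1, 0.2]}, fh)

print(naive_scan("model.bin"))   # True: the toy scanner never looked inside
with open("model.bin", "rb") as fh:
    print(pickle.load(fh))       # the loader does not care about the file name
```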


CVE ID         | Vulnerability Name         | CVSS Score | Severity
CVE-2025-10155 | File Extension Bypass      | 9.3        | Critical
CVE-2025-10156 | CRC Bypass in ZIP Archives | 9.3        | Critical
CVE-2025-10157 | Unsafe Globals Bypass      | 9.3        | Critical

The second bug, CVE‑2025‑10156, abuses how ZIP archives are handled by corrupting the CRC (integrity check) values inside a ZIP file.

Attackers can cause PickleScan to crash or fail, but PyTorch may still load the model from that same broken archive. This creates a blind spot where malware can hide.
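The general idea can be sketched without JFrog’s actual proof of concept: build a ZIP archive (standing in for a .pt file), overwrite the CRC-32 stored in the central directory, and a strict CRC-checking reader rejects the member even though the bytes are still fully present for a more lenient loader. File names and the payload below are placeholders.

```python
import io
import struct
import zipfile

# Build a tiny ZIP standing in for a PyTorch model archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("archive/data.pkl", b"placeholder payload")

data = bytearray(buf.getvalue())
cd = data.index(b"PK\x01\x02")                          # central directory header
data[cd + 16:cd + 20] = struct.pack("<I", 0xDEADBEEF)   # clobber the CRC-32 field

with zipfile.ZipFile(io.BytesIO(bytes(data))) as zf:
    try:
        zf.read("archive/data.pkl")
    except zipfile.BadZipFile as exc:
        # A CRC-verifying reader (as a scanner might use) rejects the member,
        # while a reader that skips CRC checks could still extract the bytes.
        print("strict reader rejected the member:", exc)
```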

Proof of Concept – how the file extension allows bypassing detection

The third bug, CVE‑2025‑10157, targets PickleScan’s blocklist of “unsafe” modules by using subclasses or internal imports of dangerous modules like asyncio.

A payload can slip past the “Dangerous” label and be marked only as “Suspicious,” even though it can still execute arbitrary commands.
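A simplified sketch of that gap follows. The blocklist and the nested module reference are illustrative assumptions, not PickleScan’s actual rules or a working gadget: a check that matches only exact top-level module names flags a direct os.system import but never flags a reference routed through a nested module path.

```python
import pickletools

BLOCKLIST = {"os", "subprocess", "builtins"}   # assumed, simplified blocklist

def dangerous_globals(pickle_bytes: bytes):
    """Return the GLOBAL references an exact-match blocklist would flag.
    pickletools.genops only parses opcodes; nothing is executed."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(pickle_bytes):
        if opcode.name == "GLOBAL":
            module, _, name = arg.partition(" ")
            if module in BLOCKLIST:
                hits.append((module, name))
    return hits

# Handcrafted protocol-0 pickles: the first imports os.system directly; the
# second routes through a nested module path (an illustrative stand-in).
direct = b"cos\nsystem\n(S'id'\ntR."
nested = b"casyncio.unix_events\nSomeClass\n(tR."

print(dangerous_globals(direct))   # [('os', 'system')]  -> flagged
print(dangerous_globals(nested))   # []                  -> slips past
```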

Because many platforms and companies rely on PickleScan as a main defense layer, these flaws create a serious supply chain risk for AI models.

The catalog provides precise information about the model and the evidence found inside

JFrog’s team reported the flaws to the PickleScan maintainer on June 29, 2025, and they were fixed in version 0.0.31, released on September 2, 2025.

Users are urged to upgrade immediately and, when possible, avoid unsafe pickle‑based models. Use layered defenses such as sandboxes, safer formats like Safetensors, and secure model repositories.
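For example, a model’s tensors can be stored and reloaded with the safetensors package, which holds only raw tensor data and metadata, so loading it cannot trigger code execution the way unpickling can. A minimal sketch, assuming the torch and safetensors packages are installed:

```python
import torch
from safetensors.torch import save_file, load_file

# Plain tensor container -- no pickle, no executable payloads.
weights = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(weights, "model.safetensors")

restored = load_file("model.safetensors")
print(restored["linear.weight"].shape)   # torch.Size([4, 4])
```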
