Key Points
- A TeamPCP-linked forum account claims to be selling internal Mistral AI repositories.
- The post advertises roughly 5GB of files linked to AI training and inference projects.
- As of now, no public evidence confirms the authenticity of the alleged repositories.
- The claims surfaced days after the Mini Shai-Hulud supply chain attack on npm and PyPI.
- TeamPCP has been previously linked to package poisoning attacks targeting AI infrastructure.
Only days after the Mini Shai-Hulud supply chain attack targeted npm and PyPI packages associated with French artificial intelligence company Mistral AI, a threat actor using the TeamPCP identity is now claiming to sell what appear to be internal company repositories and source code on a hacking forum.
The forum post, published a few hours ago under the TeamPCP name, advertises roughly 5GB of alleged internal repositories connected to both “mistralai” and “mistral-solutions.” The actor claims the archive contains around 450 repositories covering training systems, fine-tuning projects, benchmarking tools, dashboards, inference infrastructure, experiments, and future AI projects.
While the claims have not been independently verified, the listing includes dozens of repository names that appear consistent with internal engineering environments and enterprise AI development workflows. Examples shown in the post include “mistral-inference-internal,” “mistral-finetune-internal,” “chatbot-security-evaluation,” “devstral-cloud,” and “pfizer-rfp-2025.”
The threat actor is asking $25,000 for the data and claims the repositories will be leaked publicly within a week if no buyer is found. The post also states that the archive will be sold to only one buyer.
As Hackread.com reported earlier, TeamPCP was recently linked to the Mini Shai-Hulud campaign, a large-scale software supply chain attack that poisoned hundreds of npm and PyPI packages associated with projects including Mistral AI, TanStack, OpenSearch, UiPath, and Guardrails AI.
The attackers abused CI/CD publishing systems and hijacked OpenID Connect tokens to distribute malicious package updates through legitimate release mechanisms. The malware was designed to steal GitHub tokens, cloud credentials, CI/CD secrets, SSH keys, and developer environment data.
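For context, OIDC "trusted publishing" lets a CI/CD job mint a short-lived identity token and push releases without any stored registry credential. The sketch below is a hypothetical GitHub Actions workflow of this kind (the workflow name, trigger, and build steps are illustrative, not taken from the report); it shows why code execution inside such a job is enough to publish through legitimate release mechanisms.

```yaml
# Hypothetical release workflow using OIDC trusted publishing to PyPI —
# the kind of legitimate mechanism the reported campaign is said to have abused.
name: release
on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # allows the job to request a short-lived OIDC token
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Build package
        run: python -m pip install build && python -m build
      - name: Publish to PyPI via OIDC
        uses: pypa/gh-action-pypi-publish@release/v1
        # No long-lived API token is stored in secrets; PyPI trusts the
        # job's OIDC identity. Any code running inside this job (for
        # example, a poisoned dependency pulled in during the build) can
        # therefore publish under the project's name.
```

Because trust is tied to the job's identity rather than a secret, hijacking the pipeline itself is sufficient; there is no token to rotate after the fact.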
That earlier campaign already raised concerns about whether compromised developer credentials or publishing infrastructure could provide access beyond public package repositories. The latest forum claims suggest the attackers may be attempting to monetize alleged internal development assets connected to AI infrastructure and enterprise tooling.
The forum post itself does not include downloadable samples or technical proof confirming access to the repositories. However, it references previous TeamPCP activity involving Lightning AI and instructs buyers to verify the group’s identity through prior attack notes and forum activity.
Sample Repo Names Shared by the Threat Actor
* finance.tar.gz
* typhoon.tar.gz
* turbine.tar.gz
* xformers.tar.gz
* dashboard.tar.gz
* website-v3.tar.gz
* devstral-cloud.tar.gz
* mistral-fabric.tar.gz
* kyc-doc-agent.tar.gz
* mistral-surge.tar.gz
* mistral-solutions.tar.gz
* finetuning-feedback.tar.gz
* surge-validators.tar.gz
* pfizer-rfp-2025.tar.gz
* mistral-common-internal.tar.gz
* mistral-compute-poc.tar.gz
* piper-segmentation.tar.gz
* mistral_finance_agent.tar.gz
* mistral-lawyer-internal.tar.gz
* mistral-finetune-internal.tar.gz
* chatbot-security-evaluation.tar.gz
* mistral-inference-private.tar.gz
* cma-customer-care-internal.tar.gz
* mistral-inference-internal.tar.gz
At the time of writing, Mistral AI has not publicly commented on the claims. There is also no public evidence confirming that the files, if authentic, originated from the company’s internal systems.
Even so, the situation suggests that attacks targeting AI software environments are moving beyond poisoned packages and stolen credentials, with threat actors now appearing to focus on internal development systems, enterprise tooling, and AI infrastructure.
As AI companies continue building cloud-hosted training, inference, and autonomous agent systems, developer credentials and CI/CD environments are becoming increasingly valuable targets for groups seeking access to intellectual property and enterprise infrastructure.
Hackread.com has reached out to Mistral AI for comment and will update this story if a response is received.

