China-Linked AI Pentest Tool ‘Villager’ Raises Concern After 10K Downloads

China-linked AI tool Villager, published on PyPI, automates cyberattacks and has experts worried after more than 10,000 downloads in just two months.

A new penetration testing tool called Villager, released on the Python Package Index (PyPI) by a former Chinese capture-the-flag (CTF) competitor, is drawing scrutiny from security researchers. While marketed as a red teaming tool, experts warn that its automation capabilities and open availability may allow threat actors to use it maliciously.

According to cybersecurity firm Straiker, which first spotted the tool, Villager was published as a public Python package in late July 2025 by a user named stupidfish001, linked to the Chinese group HSCSEC, and now connected with a company known as Cyberspike. In the two months since its release, Villager has been downloaded more than 10,000 times across Linux, macOS and Windows environments.

Straiker's researchers say the pattern closely resembles what happened with Cobalt Strike, a legitimate red teaming solution that was repurposed by cybercriminals and nation-state groups.

Generative AI Features

However, Villager takes this a step further by adding generative AI to the process, allowing attackers to automate reconnaissance, vulnerability exploitation and follow-on tasks through natural language commands.

Straiker’s full technical report details that Cyberspike, the group behind Villager, appears to operate under the name Changchun Anshanyuan Technology Co., Ltd., registered in China as an AI development company. However, the lack of an official website and the presence of remote administration features resembling known malware families such as AsyncRAT raise questions about the company’s true intentions.

Cyberspike’s past products also raise red flags. Analysis of its earlier “Cyberspike Studio” tool revealed it was a modified suite based on AsyncRAT, featuring capabilities like remote desktop access, keylogging, webcam hijacking and Discord token theft. Those same components now appear to be part of Villager’s backend, repackaged with a cleaner interface and AI orchestration.

Dashboard image captured by Straiker

The researchers describe Villager as an “AI-orchestrated” modular framework that integrates multiple components, including containerised Kali Linux environments, browser automation, code execution and a custom AI model dubbed al-1s-20250421.

It allows users to submit high-level objectives such as “scan and exploit example.com” in plain text; the AI breaks the request down into a series of technical steps and carries them out autonomously.

Another concerning feature is its built-in forensic evasion. The framework automatically creates temporary containers, each configured to self-destruct within 24 hours, leaving minimal traces. It also uses randomised SSH ports and task planning to avoid detection and complicate analysis.

DeepSeek Integration

Straiker’s research notes that Villager leverages DeepSeek models and LangChain integrations to support decision-making and exploit generation. A testing script included in the package connects to Cyberspike’s own infrastructure, which appears to host these models behind an OpenAI-compatible API endpoint.
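For readers unfamiliar with the term, an “OpenAI-compatible” endpoint simply means the server accepts the same request format as OpenAI’s chat completions API, so any standard client library can be pointed at it. The short sketch below is a generic illustration of what such a call looks like in Python; it is not code from Villager, and the URL, API key and model name are placeholders.

# Illustrative only: a generic client call to a self-hosted model served
# behind an OpenAI-compatible chat completions API. The base_url, api_key
# and model name are placeholders, not values taken from Villager.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder self-hosted endpoint
    api_key="placeholder-key",            # many self-hosted gateways accept any token
)

response = client.chat.completions.create(
    model="deepseek-chat",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarise the OWASP Top 10 in one sentence."}],
)

print(response.choices[0].message.content)

Hosting models behind this kind of interface is a common and legitimate practice; the concern Straiker raises is the infrastructure the testing script points to and how the model is being used, not the technique itself.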

Download logs show Villager continues to be downloaded at a steady rate of more than 200 times every three days. It is designed to run in real attack workflows, with Docker images hosted on Cyberspike’s private GitLab repository and MCP (Model Context Protocol) clients coordinating operations through FastAPI endpoints.

Villager download stats (Image via Straiker)

Casey Ellis, founder of Bugcrowd, notes that the use of AI by attackers is nothing new. However, the arrival of a Chinese-developed tool like Villager puts a sharper edge on the issue.

“Hackers, both helpful and malicious, have been using AI to improve their effectiveness ever since generative AI became generally available,” Ellis said. “The important takeaway here is that AI-assisted offence is here, has been here for quite some time now, and is here to stay. The availability of increasingly powerful capabilities to a far broader audience is the real concern.”



