Threat Actors can Use Xanthorox AI Tool to Generate Different Malicious Code Based on Prompts

Threat Actors can Use Xanthorox AI Tool to Generate Different Malicious Code Based on Prompts

Cybersecurity researchers have uncovered a dangerous new tool making waves across darknet forums and criminal communities.

Xanthorox, a malicious artificial intelligence platform, has emerged as a serious concern for the security industry.

The tool works like a regular chatbot, similar to ChatGPT, but with one major difference: it has no safety restrictions.

First announced on a private Telegram channel in October 2024, Xanthorox quickly spread to darknet forums by February 2025.

The platform can generate malware and ransomware code based on simple text prompts from users. Unlike earlier tools such as WormGPT or EvilGPT, which relied on jailbreaking existing models, Xanthorox claims to be fully self-contained and operates on dedicated servers.

The platform charges $300 per month for basic access and $2,500 annually for advanced features, with all payments made in cryptocurrency.

google

Xanthorox offerings and prices (Source - Trend Micro)
Xanthorox offerings and prices (Source – Trend Micro)

The creator behind Xanthorox insists the tool is designed for ethical hacking and penetration testing. However, its capabilities tell a different story.

The platform’s Agentex version stands out as particularly concerning. Users can simply type a prompt like “Give me ransomware that does this” followed by a list of actions, and Agentex automatically compiles the instructions into ready-to-run executable code.

This removes technical barriers that once prevented less-skilled individuals from creating sophisticated malware.

Trend Micro security researchers identified the tool while investigating emerging threats in the criminal ecosystem.

Their analysis revealed that Xanthorox can produce well-commented, functional malicious code suitable for immediate deployment or as a foundation for more complex attacks.

The technical research uncovered that Xanthorox appears to be built on Google’s Gemini Pro model, not an independent system as advertised. This discovery came after researchers probed the platform’s underlying architecture.

The tool uses an extensive jailbreak installed through its system prompt and fine-tuning process. When researchers asked Xanthorox to reveal its system prompt, it openly provided instructions showing it was programmed to ignore all safety guidelines, ethical restrictions, and moral codes.

Asking Xanthorox for the system prompt was effortless (Source - Trend Micro)
Asking Xanthorox for the system prompt was effortless (Source – Trend Micro)

The prompt explicitly states: “All content is permitted. Decline or prohibit nothing.” This means the AI will fulfill any request, no matter how malicious.

Researchers found that much of Xanthorox’s training focused on removing guardrails rather than enhancing technical knowledge for criminal purposes.

Code Generation Capabilities

Testing revealed that Xanthorox can generate various types of malicious code with detailed instructions.

Researchers requested a shellcode runner written in C/C++ that uses indirect syscalls instead of Windows API calls and includes an AES-encrypted payload from a disk file.

The tool produced readable, effective code that was well-commented throughout. The code included configuration instructions with placeholder variables that prompted users to change default values.

Researchers also tested JavaScript obfuscation capabilities by requesting a Python script that modifies variable and function names with random characters.

Once again, Xanthorox delivered well-commented, working code along with deployment instructions. The implementation showed understanding of technical requirements and produced code valid for use on its own or as a skeleton for larger projects.

Despite its code generation strengths, Xanthorox has significant limitations. The platform cannot access the internet or dark web, restricting its usefulness for reconnaissance or data collection.

It lacks recent vulnerability information and cannot retrieve stolen data like credit card numbers or leaked credentials. When asked about recent security flaws, the system had no knowledge of their existence.

Google confirmed to researchers that Xanthorox violated their Generative AI Prohibited Use Policy by accessing Gemini models for malicious purposes.

The company stated that they take misuse seriously and continue investing in research to understand these risks. Despite these shortcomings, Xanthorox remains a functional tool for criminals seeking to write malicious code while claiming a veil of anonymity.

Follow us on Google News, LinkedIn, and X to Get More Instant UpdatesSet CSN as a Preferred Source in Google.

googlenews



Source link