A new open-source tool called KawaiiGPT has surfaced on GitHub, positioning itself as a “cute” but unrestricted AI chatbot.
Developed by a user known as MrSanZz (along with contributors Shoukaku07 and FlamabyX5), the project is attracting attention for offering a free alternative to paid “jailbroken” AI models.
It describes itself as a “WormGPT kawaii ver,” referencing the infamous malware-oriented AI, though the developers insist their project is intended for fun and educational purposes.
How It Works
Unlike standard chatbots that require paid subscriptions or API keys, KawaiiGPT is completely free to use.
KawaiiGPT does not run a large language model of its own. Instead, it acts as a clever “wrapper,” or middleman: the software connects to powerful existing AI models, specifically DeepSeek, Gemini, and Kimi-K2, and relays their answers to the user.
The tool relies on reverse-engineered API wrappers (originally sourced from the Pollinations project) to access these models without needing official credentials.
This allows users to run the program easily on Linux systems or mobile devices using Termux, without registering for an account or paying fees.
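The wrapper pattern itself is straightforward. The sketch below illustrates the general idea only; the endpoint URL, request fields, and model name are illustrative assumptions, not KawaiiGPT’s actual code, which is distributed in obfuscated form.

```python
# Conceptual sketch of a "wrapper" chatbot: the client forwards the user's
# prompt to a third-party text-generation endpoint and prints the reply.
# NOTE: the URL, payload fields, and response handling are placeholder
# assumptions for explanation only, not KawaiiGPT's real implementation.
import requests

UPSTREAM_URL = "https://example-text-api.invalid/generate"  # hypothetical endpoint


def ask_upstream(prompt: str, model: str = "example-model") -> str:
    """Forward a prompt to an upstream model service and return its text reply."""
    resp = requests.post(
        UPSTREAM_URL,
        json={"model": model, "prompt": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    # Minimal chat loop: no account, no API key, just a relay to the upstream service.
    while True:
        user_input = input("you> ")
        print(ask_upstream(user_input))
```

Because all of the heavy lifting happens on the upstream services, a wrapper like this can run on modest hardware such as a phone under Termux, which is why the tool’s system requirements are so light.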
The most controversial aspect of KawaiiGPT is its claim to be a “WormGPT” clone. The original WormGPT was a tool designed specifically for cybercriminals to write malware and phishing emails without safety filters.
KawaiiGPT achieves similar “unrestricted” results by using prompt injection. Standard AI models like Gemini have safety guardrails to prevent them from generating harmful content.
KawaiiGPT bypasses these rules by feeding the models a special “jailbreak” script hidden in the background.
This tricks the AI into ignoring its safety guidelines, allowing it to answer questions it would normally refuse.
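In practice, this kind of prompt injection amounts to silently prepending instructions to every conversation before it reaches the upstream model. The sketch below shows only the general mechanism; the placeholder preamble and message format are assumptions, and the actual jailbreak text shipped with KawaiiGPT is not reproduced here.

```python
# Conceptual illustration of a hidden preamble being injected ahead of the
# user's visible prompt. The preamble text and message schema are placeholders;
# KawaiiGPT's real jailbreak prompt is obfuscated and is not shown.
HIDDEN_PREAMBLE = "<persona and rule-override instructions omitted>"


def build_messages(user_prompt: str) -> list[dict]:
    """Return the message list the wrapper would send upstream, with the
    hidden preamble placed before the user's actual question."""
    return [
        {"role": "system", "content": HIDDEN_PREAMBLE},
        {"role": "user", "content": user_prompt},
    ]


# The user only ever types the second message; the first is added silently,
# which is why the upstream model behaves differently from its normal defaults.
print(build_messages("hello"))
```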
The developer notes that the “WormGPT” tag is used primarily to describe this jailbroken behavior, rather than implying the tool is malicious software itself.
The code for KawaiiGPT is “obfuscated,” meaning it has been deliberately scrambled so that humans cannot easily read or audit it. In the cybersecurity world, this often raises red flags, because obfuscation can conceal viruses or spyware.
Addressing these fears directly in the project’s README, the developer, MrSanZz, defends the decision. They state, “I want to avoid recoding and renaming which ends up selling KawaiiGPT tools under my name.”
The creator emphatically denies that the tool contains any Remote Access Trojans (RATs), spyware, or malware, arguing that the obfuscation is strictly to protect their intellectual property from copycats who might try to sell the free tool for profit.
The tool is currently hosted on GitHub and has garnered over 200 stars, indicating a growing interest from the community.
It requires a simple installation process involving Python and Git. However, the creators include a strong disclaimer: “All risks or consequences that you have done are your own responsibility.”
While they pitch KawaiiGPT as a project made for “fun,” the ability to bypass AI safety filters places the responsibility of ethical use entirely on the user.
