DeepSeek’s sudden fame this week has come with a downside, as security and AI researchers have wasted no time probing the AI model for flaws and security weaknesses.
Claims that DeepSeek can be easily jailbroken appeared within hours of the AI startup’s rise to the center of the AI world, followed by reports of misinformation and inaccuracies found in the would-be rival to ChatGPT and other large language models (LLMs). Scammers wasted no time piling on, as Cyble detected a surge in fraud and phishing attempts aimed at exploiting DeepSeek’s sudden popularity.
The latest DeepSeek security issue involves an exposed database discovered by Wiz Research, which added to concerns about the AI startup’s security and privacy controls.
“The rapid adoption of AI services without corresponding security is inherently risky,” the Wiz researchers wrote. “This exposure underscores the fact that the immediate security risks for AI applications stem from the infrastructure and tools supporting them.”
One downside to the security and misinformation issues surrounding DeepSeek is that they threaten to detract from what appears to be a genuine breakthrough in efficiency, one that has attracted the attention of tech luminaries like Snowflake CEO Sridhar Ramaswamy.
Database Leak Underscores DeepSeek Security Concerns
The Wiz researchers said they discovered a publicly accessible ClickHouse database belonging to DeepSeek that allowed full control over database operations, including the ability to access internal data.
The exposure includes more than “a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information,” the researchers wrote. They immediately disclosed the issue to DeepSeek, which promptly secured the database.
The researchers said they began investigating DeepSeek’s security posture for any vulnerabilities following the AI startup’s sudden fame. It didn’t take long to find significant issues.
“Within minutes, we found a publicly accessible ClickHouse database linked to DeepSeek, completely open and unauthenticated, exposing sensitive data,” they said.
The unsecured instance allowed for “full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world,” the researchers added.
The data appeared to be recent, with logs dating from January 6, 2025. It included references to internal DeepSeek API endpoints and exposed plaintext logs that included chat history, API keys, backend details, and operational metadata.
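An unauthenticated ClickHouse instance of the kind described above can be queried over ClickHouse’s standard HTTP interface, where a single request carrying a SQL string is enough to enumerate tables or read logs. The sketch below (with a hypothetical hostname; the default HTTP port 8123 is ClickHouse’s documented default) shows how simple such a probe is when no authentication is configured:

```python
# Minimal sketch of probing ClickHouse's HTTP interface. The hostname is
# hypothetical; 8123 is ClickHouse's default HTTP port. ClickHouse accepts
# plain SQL in the `query` URL parameter, so with no authentication
# configured, a single GET request can run arbitrary read queries.
from urllib.parse import urlencode

def clickhouse_probe_url(host: str, query: str, port: int = 8123) -> str:
    """Build the URL that would run `query` against ClickHouse's HTTP endpoint."""
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

# On an open instance, fetching this URL would list every table in the database.
print(clickhouse_probe_url("example-db.internal", "SHOW TABLES"))
```

This is why an exposed instance amounts to full database access: the same mechanism that lists tables will happily run `SELECT` queries against log tables, and, depending on server settings, far more destructive statements.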
“This level of access posed a critical risk to DeepSeek’s own security and for its end-users,” the researchers said. “Not only could an attacker retrieve sensitive logs and actual plain-text chat messages, but they could also potentially exfiltrate plaintext passwords and local files, along with proprietary information, directly from the server.”
An AI Breakthrough Clouded By Security and Misinformation Issues
An unfortunate side effect of the widespread focus on DeepSeek’s security and accuracy issues is that the controversy threatens to obscure the fact that DeepSeek may well be the cost and efficiency breakthrough that the company claims to be.
In a market full of hugely expensive, energy-inefficient GenAI models, a model that can compete while using 90% to 98% less power is very good news indeed. And DeepSeek has even open-sourced one of its models, giving others a chance to work with it.
It remains to be seen whether DeepSeek’s security and misinformation issues will limit its adoption, but the window for getting it right may not stay open long, as rivals like Alibaba are quickly following with their own claims of GenAI breakthroughs.
And perhaps there’s a lesson here for other startups, whether they’re focused on AI or other technologies: Don’t let cybersecurity issues detract from your biggest breakthroughs.