Are your employees using Chinese GenAI tools at work?
Nearly one in 12 employees is using Chinese-developed generative AI tools at work, exposing sensitive data in the process.
That’s according to new research from Harmonic Security, which analyzed the behavior of roughly 14,000 end users in the U.S. and U.K. over a 30-day period. The report found that 7.95% of users accessed at least one Chinese GenAI application during that time.
Of the 1,059 users who interacted with these tools, Harmonic identified 535 incidents of sensitive data exposure. Most of those involved DeepSeek, which accounted for about 85% of the incidents, followed by Moonshot Kimi, Qwen, Baidu Chat, and Manus.
Code and development artifacts, including proprietary code, access keys, and internal logic, represented the largest category of exposed data at 32.8% of the total. This was followed by mergers and acquisitions data (18.2%), personally identifiable information (PII) (17.8%), financial information (14.4%), customer data (12.0%), and legal documents (4.9%).
Engineering-heavy organizations were found to be particularly exposed, as developers increasingly turn to GenAI for coding assistance, often without realizing the implications of submitting internal source code, API keys, or system architecture details to foreign-hosted models.
“All data submitted to these platforms should be considered property of the Chinese Communist Party, given a total lack of transparency around data retention, input reuse, and model training policies, exposing organizations to potentially serious legal and compliance liabilities. But these apps are extremely powerful, with many outperforming their U.S. counterparts, depending on the task. This is why employees will continue to use them, but they’re effectively blind spots for most enterprise security teams,” said Alastair Paterson, CEO of Harmonic Security.
“Blocking alone is rarely effective and often misaligned with business priorities. Even in companies willing to take a hardline stance, users frequently circumvent controls. A more effective approach is to focus on education and train employees on the risks of using unsanctioned GenAI tools, especially Chinese-hosted platforms. We also recommend providing alternatives via approved GenAI tools that meet developer and business needs. Finally, enforce policies that prevent sensitive data, particularly source code, from being uploaded to unauthorized apps. Organizations that avoid blanket blocking and instead implement light-touch guardrails and nudges see up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%,” Paterson concluded.
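To make the idea concrete, here is a minimal sketch of what a light-touch guardrail of the kind Paterson describes could look like. It is purely illustrative, not Harmonic's tooling: it scans an outbound prompt for a few common secret patterns, such as AWS-style access key IDs and private-key headers, and nudges the user with a warning instead of hard-blocking the request. The pattern names and the example prompt are hypothetical.

```python
import re

# Hypothetical patterns for common secrets; real DLP tooling would use far
# broader rule sets (entropy checks, customer-data classifiers, etc.).
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in an outbound prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = 'debug this: client = Client(api_key="sk_live_abcdefghij0123456789")'
    findings = check_prompt(prompt)
    if findings:
        # The "nudge": warn the user rather than blocking the request outright.
        print("Warning: possible sensitive data detected:", ", ".join(findings))
    else:
        print("No obvious secrets found.")
```

In practice such a check would sit in a browser extension or egress proxy and draw on much richer classifiers, but even simple pattern matching shows how a nudge can intercept the proprietary code and access keys that dominated Harmonic's findings.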