
Recent analysis of enterprise data suggests that generative AI tools developed in China are widely used by US and UK employees without security team monitoring or approval. The study, conducted by Harmonic Security, also identifies hundreds of instances in which sensitive data was uploaded to platforms hosted in China, raising concerns about compliance, data residency, and commercial confidentiality.
Over the course of 30 days, Harmonic examined the activity of a sample of 14,000 employees across various companies. It found that they use China-based GenAI tools such as DeepSeek, Kimi Moonshot, Baidu Chat, Qwen (from Alibaba), and Manus. These applications are powerful and easily accessible, but typically provide little information on how uploaded data is processed, stored, or reused.
The findings highlight the growing gap between AI adoption and governance, particularly in developer-heavy organizations where development speed often outpaces policy compliance.
If you are looking for a way to enforce your AI usage policy with granular control, contact Harmonic Security.
Large data leaks
In total, over 17 megabytes of content was uploaded to these platforms by 1,059 users. Harmonic identified 535 individual incidents involving sensitive information. Almost a third of that material consisted of source code or engineering documents. The remainder included documents relating to mergers and acquisitions, financial reports, personally identifiable information, legal contracts, and customer records.
Harmonic's analysis identified DeepSeek as the most common tool, associated with 85% of recorded incidents, with Kimi Moonshot and Qwen following behind. Collectively, these services are reshaping how GenAI enters the corporate network: not through sanctioned platforms, but through quiet, user-driven adoption.
Chinese GenAI services frequently operate under permissive or opaque data policies. In some cases, platform terms allow uploaded content to be used for further model training. This is particularly relevant for companies operating in regulated sectors or handling proprietary software and internal business plans.
Policy enforcement through technical controls
Harmonic Security has developed a tool to help businesses control how GenAI is used in the workplace. The platform monitors AI activity in real time and enforces policies at the moment of use.
Companies gain granular controls to block access to specific applications based on headquarters location, restrict the upload of certain types of data, and educate users through contextual prompts.
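To illustrate the kind of controls described above, the sketch below shows a minimal, hypothetical policy check combining a domain blocklist with simple pattern matching on outbound uploads. This is an illustrative assumption, not Harmonic Security's actual implementation; the domain names, patterns, and function names are invented for the example.

```python
import re

# Hypothetical policy sketch: NOT Harmonic Security's implementation.
# Example blocklist entries (illustrative domains only).
BLOCKED_DOMAINS = {"chat.deepseek.example", "kimi.example"}

# Simple illustrative patterns for sensitive content.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def evaluate_upload(domain: str, payload: str) -> dict:
    """Return a policy decision for an outbound upload attempt."""
    # Block outright if the destination is not approved.
    if domain in BLOCKED_DOMAINS:
        return {"action": "block", "reason": f"domain {domain} is not approved"}
    # Otherwise, scan the payload for sensitive patterns.
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(payload)]
    if findings:
        # Educate rather than silently drop: surface a contextual warning.
        return {"action": "warn",
                "reason": "possible sensitive data: " + ", ".join(findings)}
    return {"action": "allow", "reason": "no policy violation detected"}
```

A real enforcement layer would sit inline on network or browser traffic rather than as a library call, but the decision logic (block by destination, warn by content, allow otherwise) follows the same shape.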

Governance as a strategic imperative
The rise of unauthorized GenAI use within the enterprise is no longer hypothetical. Harmonic's data shows that nearly one in 12 employees already interacts with Chinese GenAI platforms, in many cases unaware of the data retention and jurisdictional exposure risks.
The findings suggest that awareness alone is not sufficient. Companies need proactive, enforceable governance to enable GenAI adoption without compromising compliance or security. As the technology matures, the ability to manage its use may prove just as important as the performance of the models themselves.
Harmonic lets you embrace the benefits of GenAI without putting your business at unnecessary risk.
Learn more about how Harmonic enforces AI policies and helps you protect sensitive data with Harmonic.security.