
For years, security leaders have treated artificial intelligence as an "emerging" technology. The new Enterprise AI and SaaS Data Security Report from AI and browser security company LayerX shows how obsolete that thinking has become. Far from being a future concern, AI is already the largest uncontrolled channel for corporate data exfiltration, ahead of shadow SaaS and unmanaged file sharing.
Findings drawn from real-world enterprise browsing telemetry reveal a counterintuitive truth: enterprise AI risk is not a problem for tomorrow but part of today's daily workflows. Sensitive data is already flowing to ChatGPT, Claude, and Copilot at remarkable rates, primarily through unmanaged accounts and invisible copy/paste channels. Traditional DLP tools, built for sanctioned, file-based environments, are not even looking in the right direction.
From "emerging" to essential in record time
In just two years, AI tools reached adoption levels that took earlier technologies decades to achieve. Nearly one in two enterprise employees (45%) already uses generative AI tools, and ChatGPT alone has hit 43% penetration. Measured against other SaaS categories, AI accounts for 11% of all enterprise application activity, comparable to file sharing and office productivity apps.
The twist? This explosive growth has arrived without governance. The majority of AI sessions take place outside enterprise control: 67% of AI usage occurs through unmanaged personal accounts, leaving CISOs blind to who is using what and where data flows.

Sensitive data is everywhere, and it flows the wrong way
Perhaps the most surprising discovery is how much sensitive data is already flowing into AI platforms. 40% of files uploaded to GenAI tools contain PII or PCI data, and employees use personal accounts for nearly 4 in 10 of those uploads.
More revealing: files are only part of the problem. The real leakage channel is copy/paste. 77% of employees paste data into GenAI tools, and 82% of that activity comes from unmanaged accounts. On average, employees perform 14 pastes per day via personal accounts, at least 3 of which contain sensitive data.

This makes copy/paste into GenAI the #1 vector for data leaving the company. It is not just a technical blind spot; it is a cultural one. Security programs designed to scan attachments and block unauthorized uploads completely miss the fastest-growing threat.
The identity mirage: corporate ≠ secure
Security leaders often assume that "corporate" accounts equate to secure access. The data proves otherwise. Even when employees use corporate credentials for high-risk platforms such as CRM and ERP, SSO is bypassed on a massive scale: 71% of CRM logins and 83% of ERP logins skip federation entirely.
This makes corporate logins functionally indistinguishable from personal ones. Whether an employee signs into Salesforce with a Gmail address or with a password-based corporate account, the result is the same: no federation, no visibility, no control.

The instant messaging blind spot
While AI is the fastest-growing channel for data leaks, instant messaging is the quietest. 87% of enterprise chat usage occurs through unmanaged accounts, and 62% of users paste PII/PCI into them. The convergence of shadow AI and shadow chat creates a double blind spot where sensitive data leaks into entirely unmonitored environments.
Together, these findings paint a harsh picture: security teams are focused on the wrong battlefield. The war for data security is not being fought on file servers or in sanctioned SaaS. It is in the browser, where employees blend personal and corporate accounts, move between sanctioned and shadow tools, and carry sensitive data across both.
Rethinking enterprise security in the age of AI
The recommendations in the report are clear and unconventional.
Treat AI security as a core enterprise category. Governance strategies must put AI on par with email and file sharing, monitoring uploads, prompts, and copy/paste flows.

Shift from file-centric to action-centric DLP. Data leaves the enterprise not only through file uploads but also through fileless paths such as copy/paste, chat, and prompt injection. Policy must reflect that reality.

Restrict unmanaged accounts and enforce federation everywhere. Personal accounts and non-federated logins are functionally the same: invisible. Restricting their use, whether by blocking them outright or applying strict, context-aware data control policies, is the only way to restore visibility.

Prioritize the high-risk categories: AI, chat, and file storage. Not all SaaS apps are equal. These categories require the most stringent controls because they are both heavily used and data-sensitive.
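To make the shift from file-centric to action-centric DLP concrete, here is a minimal, hypothetical sketch of the kind of check such a policy implies: classifying the text of a paste event (as a browser-based control might surface it) instead of scanning an uploaded file. The regexes and category labels are illustrative assumptions, not part of the LayerX report.

```python
import re

# Illustrative action-centric DLP check: inspect the payload of a paste event
# rather than a file attachment. Patterns below are simplified examples only.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")       # crude PII: email address
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")              # candidate PCI: card-like digit run

def luhn_valid(number: str) -> bool:
    """Luhn checksum, to filter random digit runs from plausible card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify_paste(text: str) -> list[str]:
    """Return the sensitive-data categories detected in a pasted payload."""
    findings = []
    if EMAIL_RE.search(text):
        findings.append("PII:email")
    for match in CARD_RE.finditer(text):
        if luhn_valid(match.group()):
            findings.append("PCI:card")
            break
    return findings
```

A policy engine could then decide, per paste, whether to allow, warn, or block based on the returned categories and whether the destination account is managed; real DLP products use far richer detectors, but the unit of enforcement is the same: the action, not the file.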
The bottom line for CISOs
The surprising truth the data reveals is this: AI is not just a productivity revolution, it is a governance breakdown. The tools employees love most are the least controlled, and the gap between adoption and oversight is growing every day.
For security leaders, the implication is urgent. Continuing to treat AI as merely "emerging" is no longer an option. It is already embedded in workflows, already carrying sensitive data, and already serving as the main vector of corporate data loss.
The enterprise boundary has shifted again, and this time it is the browser. If CISOs do not adapt, AI will not only shape the future of work but also determine the future of data breaches.
LayerX's new research report lays out the full scope of these findings, giving CISOs and security teams unprecedented visibility into how AI and SaaS are actually used within the enterprise. Drawing on real-world browser telemetry, the report details where sensitive data is leaking, which blind spots carry the greatest risk, and the practical steps leaders can take to secure AI-driven workflows. For organizations looking to understand their true exposure and how to protect against it, the report provides the clarity and guidance needed to act with confidence.