
When generative AI tools became widely available in the second half of 2022, it was not just engineers who paid attention. Employees across industries quickly recognized generative AI's potential to increase productivity, streamline communication, and accelerate work. Like many waves of consumer-first IT innovation before it, including file sharing, cloud storage, and collaboration platforms, AI landed in businesses through the hands of employees rather than through official channels.
Facing the risk of sensitive data being fed into public AI interfaces, many organizations responded with urgency and force: they blocked access. Understandable as a first line of defense, blocking public AI apps is not a long-term strategy. It is a stopgap, and in most cases not even an effective one.
Shadow AI: Invisible risks
The Zscaler ThreatLabz team tracks AI and machine learning (ML) traffic across enterprises, and the numbers tell a compelling story. In 2024 alone, ThreatLabz analyzed 36 times more AI and ML traffic than in the previous year, identifying more than 800 AI applications in use.
Blocking does not stop employees from using AI. They email files to their personal accounts, use their phones or home devices, or take screenshots and feed them into AI systems. These workarounds move sensitive interactions out of reach of enterprise monitoring and protection. The result? A growing blind spot known as shadow AI.
Blocking an unapproved AI app may drive its usage to zero on a dashboard report, but in reality your organization is not protected. It is simply blind to what is actually happening.
Lessons learned from SaaS adoption
We have been here before. When early software-as-a-service (SaaS) tools emerged, teams scrambled to control unauthorized use of cloud-based file storage applications. The answer was not to ban file sharing. Rather, it was to offer secure, seamless, single sign-on alternatives that met employees' expectations for convenience, ease of use, and speed.
This time, however, the stakes are even higher. With SaaS, data leakage often meant a misplaced file. With AI, once data is submitted there may be no way to delete or retrieve it, and it may even end up training public models on your intellectual property. There is no “undo” button for a large language model’s memory.
Visibility first, then policy
Before organizations can intelligently manage AI usage, they need to understand what is actually happening. Blocking traffic without visibility is like building a fence without knowing where the property line is.
We have solved this problem before. Zscaler’s position in the traffic flow gives it an unparalleled vantage point: you can see which apps are being accessed, by whom, and how often. This real-time visibility is essential for assessing risk, shaping policy, and enabling smarter, safer AI adoption.
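The kind of visibility described above boils down to answering two questions from traffic logs: which AI apps are in use, and who is using them. A minimal sketch, using a hypothetical log format rather than Zscaler's actual log schema:

```python
from collections import Counter, defaultdict

# Hypothetical (user, app) pairs extracted from proxy traffic logs.
# The field names and record shape here are illustrative assumptions.
records = [
    ("alice", "chatgpt.com"),
    ("bob", "chatgpt.com"),
    ("alice", "claude.ai"),
    ("carol", "chatgpt.com"),
]

# Which AI apps are in use, and how often?
app_counts = Counter(app for _, app in records)

# Who is using each app?
users_by_app = defaultdict(set)
for user, app in records:
    users_by_app[app].add(user)

print(app_counts.most_common())  # apps ranked by request volume
print(sorted(users_by_app["chatgpt.com"]))
```

Even this crude aggregation surfaces the shadow-AI blind spot: apps nobody sanctioned start appearing in the ranking, along with the users and volumes involved.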
Next comes an evolved approach to policy. Many providers offer only black-and-white options: “allow” or “block.” A better approach is context-aware, policy-driven governance aligned with zero trust principles, which assume no implicit trust and require continuous, contextual evaluation. Not all AI use presents the same level of risk, and policy should reflect that.
For example, users can be granted access to an AI application with a caution prompt, or transactions can be allowed only in browser isolation mode so that users cannot paste potentially sensitive data into the app. Another approach that works well is redirecting users from an unsanctioned app to an enterprise-approved alternative, such as one hosted on-premises. Employees keep the productivity benefits without risking data exposure. If users have a secure, fast, authorized way to use AI, they have no reason to go around you.
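The graduated actions described above can be sketched as a small decision function. This is an illustrative assumption of how such a policy might be structured, not Zscaler's actual policy engine; the risk tiers and action names are invented for the example:

```python
# A minimal sketch of context-aware AI access policy.
# Risk tiers ("low"/"medium"/"high") and actions are illustrative assumptions.

def decide(app_risk: str, has_approved_alternative: bool) -> str:
    """Map an AI app's risk tier and context to an access decision.

    Returns one of: "allow", "redirect", "isolate", "block".
    """
    if app_risk == "low":
        return "allow"      # sanctioned, low-risk app: full access
    if has_approved_alternative:
        return "redirect"   # steer users to the enterprise-approved app
    if app_risk == "medium":
        return "isolate"    # allow only in browser isolation, so
                            # sensitive data cannot be pasted in
    return "block"          # high risk, no safe alternative

print(decide("medium", has_approved_alternative=False))  # isolate
```

The point of the sketch is the shape of the decision: four graduated outcomes instead of a binary allow/block, with "block" reserved for the case where no safer option exists.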
Finally, Zscaler’s data protection tools let employees use certain public AI apps while preventing them from accidentally sending sensitive information. Our research found more than 4 million data loss prevention (DLP) violations in the Zscaler cloud: cases where sensitive enterprise data, such as financial data, personally identifiable information, source code, and medical records, was sent to an AI application and the transaction was blocked by Zscaler policy. Without Zscaler’s DLP enforcement, these would have been actual data loss incidents.
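At its simplest, this kind of DLP enforcement means inspecting an outbound prompt for sensitive patterns before the transaction is allowed. A minimal sketch, assuming two illustrative regex detectors (a US SSN format and a 16-digit card number); real DLP engines use far richer detection, such as dictionaries, exact data matching, and ML classifiers:

```python
import re

# Illustrative DLP patterns; these are assumptions for the sketch,
# not Zscaler's actual detection rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def violations(prompt: str) -> list[str]:
    """Return the names of DLP patterns matched in the prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Allow the transaction only if no sensitive pattern is found."""
    return not violations(prompt)
```

A harmless prompt like "summarize this meeting" passes, while one containing a card number or SSN is blocked before it ever reaches the public AI app, which is exactly the enforcement point the DLP statistics above describe.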
Balancing enablement and protection
The goal is not to stop AI adoption, but to shape it responsibly. Security and productivity do not have to be at odds. With the right tools and mindset, organizations can both empower users and secure their data.
For more information, please visit zscaler.com/security