Empower Users and Protect Against GenAI Data Loss


June 6, 2025Hacker NewsArtificial Intelligence/Zero Trust


When generative AI tools became widely available in the second half of 2022, it wasn't just engineers who took notice. Employees across industries quickly recognized generative AI's potential to increase productivity, streamline communication, and accelerate work. Like so many waves of consumer-first IT innovation before it, including file sharing, cloud storage, and collaboration platforms, AI landed in the enterprise through the hands of employees rather than through official channels.

Faced with the risk of sensitive data being fed into public AI interfaces, many organizations responded with urgency and force: they blocked access. While that is understandable as a first line of defense, blocking public AI apps is not a long-term strategy. It is a stopgap. And in most cases, it isn't even effective.

Shadow AI: Invisible risks

The Zscaler ThreatLabz team tracks AI and machine learning (ML) traffic across the enterprise, and the numbers tell a compelling story. In 2024 alone, ThreatLabz analyzed 36 times more AI and ML traffic than the previous year and identified over 800 distinct AI applications in use.

Blocking does not stop employees from using AI. They email files to their personal accounts, use their phones or home devices, or capture screenshots and paste them into AI systems. These workarounds move sensitive interactions out of reach of enterprise monitoring and protection. The result? A growing blind spot known as Shadow AI.

Blocking an unapproved AI app may make usage appear to drop to zero in dashboard reports, but in reality your organization is not protected. It is simply blind to what is actually happening.

Lessons learned from SaaS adoption

We have been here before. When early software-as-a-service tools emerged, IT teams scrambled to control the unsanctioned use of cloud-based file-storage applications. The answer was not to ban file sharing. Rather, it was to offer a secure, seamless, single-sign-on alternative that matched employees' expectations for convenience, ease of use, and speed.

This time, however, the stakes are even higher. With SaaS, data leakage often meant a misplaced file. With AI, it can mean that intellectual property is absorbed into a public model's training data, with no way to delete or retrieve it once it is gone. There is no "undo" button for a large language model's memory.

Visibility first, then policy

Before organizations can intelligently manage their AI usage, they need to understand what is actually happening. Blocking traffic without visibility is like building a fence without knowing where the property lines are.

We have solved this kind of problem before. Zscaler's position in the traffic flow gives it an unparalleled vantage point: it can see which apps are being accessed, by whom, and how often. This real-time visibility is essential for assessing risk, shaping policy, and enabling smarter, safer AI adoption.
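To make the visibility idea concrete, here is a minimal sketch of aggregating gateway logs to see which AI apps are in use, by whom, and which domains fall outside a sanctioned catalog. The log format, domain names, and catalog are illustrative assumptions, not any vendor's actual data model.

```python
from collections import Counter, defaultdict

# Hypothetical proxy log records: (user, destination app, bytes sent)
LOGS = [
    ("alice", "chatgpt.com", 1200),
    ("bob", "claude.ai", 300),
    ("alice", "chatgpt.com", 5400),
    ("carol", "translate-ai.example", 900),
]

# Assumed catalog of AI domains the organization already knows about
AI_APP_CATALOG = {"chatgpt.com", "claude.ai"}

def summarize(logs):
    """Aggregate AI traffic per app and flag domains missing from the catalog."""
    bytes_by_app = Counter()
    users_by_app = defaultdict(set)
    unknown_apps = set()
    for user, app, sent in logs:
        bytes_by_app[app] += sent
        users_by_app[app].add(user)
        if app not in AI_APP_CATALOG:
            unknown_apps.add(app)  # candidate shadow-AI domain
    return bytes_by_app, users_by_app, unknown_apps

bytes_by_app, users_by_app, unknown_apps = summarize(LOGS)
print(bytes_by_app["chatgpt.com"])  # total bytes sent to chatgpt.com
print(sorted(unknown_apps))         # domains worth investigating
```

Even a toy aggregation like this shows the point of the section: you cannot write sensible AI policy until the "unknown apps" set is visible.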

Next, the way policies are handled has to evolve. Many providers offer only the black-and-white options of "allow" or "block." A better approach is context-aware, policy-driven governance aligned with zero-trust principles: assume no implicit trust and require continuous contextual evaluation. Not every use of AI presents the same level of risk, and policies must reflect that.

For example, access to an AI application can be allowed with a caution prompt, or transactions can be permitted only in browser isolation mode, which prevents users from pasting potentially sensitive data into the app. Another approach that works well is redirecting users from an unsanctioned app to an enterprise-approved alternative managed on-premises. This lets employees enjoy the productivity benefits without risking data exposure. If users have a secure, fast, sanctioned way to use AI, they won't feel the need to go around you.
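The verdicts described above (allow, caution, isolate, redirect, block) can be sketched as a single context-aware decision function. This is an illustrative model only; the app lists, user groups, and verdict names are assumptions, not any product's policy engine.

```python
from dataclasses import dataclass

@dataclass
class Request:
    app: str                 # destination AI application
    user_group: str          # e.g. "engineering", "sales"
    contains_sensitive: bool # result of an upstream DLP scan

SANCTIONED = {"enterprise-gpt.internal"}   # assumed on-prem approved app
TOLERATED = {"chatgpt.com"}                # allowed only with guardrails

def decide(req: Request) -> str:
    """Return a policy verdict richer than binary allow/block."""
    if req.app in SANCTIONED:
        return "allow"
    if req.contains_sensitive:
        return "block"       # sensitive data never leaves, regardless of app
    if req.app in TOLERATED:
        if req.user_group == "engineering":
            return "isolate" # browser isolation: view the app, no pasting
        return "caution"     # warn the user, then allow
    # Unknown app: steer the user to the approved alternative
    return "redirect:enterprise-gpt.internal"
```

The key design choice is that sensitivity of the data, not just the identity of the app, drives the outcome, which is what distinguishes this from a simple allow/block list.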

Finally, Zscaler's data protection tools allow employees to use certain public AI apps while preventing them from inadvertently sending sensitive information. Our research shows more than 4 million data loss prevention (DLP) violations in the Zscaler cloud, representing instances where sensitive enterprise data, such as financial records, personally identifiable information, source code, and medical data, was sent to an AI application and the transaction was blocked by Zscaler policy. Without Zscaler's DLP enforcement, each of these would have been a real data loss.
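The core mechanic behind a DLP check on an AI prompt can be illustrated with a few pattern rules. This is a deliberately minimal sketch; production DLP engines use far richer detection (exact data matching, fingerprinting, ML classifiers), and the patterns below are assumptions for demonstration.

```python
import re

# Minimal illustrative DLP rules; real engines go far beyond regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US SSN shape
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"), # 16-digit PAN shape
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),           # AWS access key ID shape
}

def scan(prompt: str) -> list[str]:
    """Return the names of the DLP rules a prompt would violate."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

print(scan("My SSN is 123-45-6789"))  # flags the 'ssn' rule
print(scan("hello world"))            # clean prompt, nothing flagged
```

A gateway would run a check like this inline, before the request reaches the AI app, and block or redact the transaction when `scan` returns any matches.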

Balancing enablement and protection

The goal is not to stop AI adoption, but to shape it responsibly. Security and productivity don't have to be at odds. With the right tools and mindset, organizations can empower users and secure their data at the same time.

For more information, visit zscaler.com/security.

Did you find this article interesting? This article is a contribution from one of our valued partners. Follow us on Twitter and LinkedIn to read more of the exclusive content we post.
