Can Your Security Stack See ChatGPT? Why Network Visibility Is Important


August 29, 2025Hacker NewsEnterprise Security/Artificial Intelligence

Generative AI platforms such as ChatGPT, Gemini, Copilot, and Claude are becoming increasingly common in organizations. These tools improve overall task efficiency, but they also present new data-leak-prevention challenges. Sensitive information can be shared through chat prompts, files uploaded for AI-driven summaries, or browser plugins that bypass familiar security controls. Standard DLP products often fail to register these events.

Solutions such as Fidelis Network® Detection and Response (NDR) introduce network-based data loss prevention that covers AI activity. This allows teams to monitor GenAI use, enforce policies, and audit it as part of a broader data-loss-prevention strategy.

Why Data Loss Prevention Must Evolve for GenAI

Data loss prevention for generative AI requires shifting focus from endpoints and siloed channels to visibility across the entire traffic path. Unlike earlier tools that rely on scanning email and storage shares, NDR technologies such as Fidelis identify threats as they traverse the network, analyzing traffic patterns even when content is encrypted.

The key concern is not just who created the data, but how and when it leaves the organization's control, whether through direct uploads, conversational queries, or AI capabilities integrated into business systems.

Monitoring Generative AI Usage Effectively

Organizations can implement network-based GenAI DLP through three complementary approaches.

URL-based indicators and real-time alerts

Administrators can define indicators for specific GenAI platforms such as ChatGPT. These rules can apply to multiple services and can be tailored to the relevant departments or user groups. Monitoring can be performed via web, email, or other sensors.

Process:

When a user accesses a GenAI endpoint, Fidelis NDR generates an alert if a DLP policy is triggered. The web and email sensors can record full packet captures for later analysis, and the platform can automate actions such as redirecting user traffic or quarantining suspicious messages.

Advantages:

Real-time notifications enable rapid security response and support comprehensive forensic analysis where necessary.

Considerations:

Rules must be kept up to date as AI endpoints and plugins change.

Audit and Metadata-Only Monitoring for Low-Noise Environments

Not every organization needs immediate alerts for all GenAI activity. Network-based data-loss-prevention policies can instead record activity as metadata, creating a searchable audit trail with minimal disruption.

Alerts are suppressed while all associated session metadata is retained: source and destination IP, protocol, port, device, and timestamp. Security teams can then review GenAI interactions historically by host, group, or time frame.
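The metadata-only idea can be sketched as follows: each session is logged with only the network metadata fields listed above, with no payload and no alert, and can later be queried per host. The schema and helper names are illustrative assumptions, not a real NDR product's API.

```python
# Sketch of metadata-only audit logging for GenAI sessions.
# The SessionRecord schema is an assumed example, not a vendor format.
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class SessionRecord:
    src_ip: str
    dst_ip: str
    protocol: str
    port: int
    device: str
    timestamp: str

def log_session(audit_trail: list[dict], src_ip: str, dst_ip: str,
                protocol: str, port: int, device: str) -> None:
    """Record session metadata only; no alert is raised, no payload stored."""
    rec = SessionRecord(src_ip, dst_ip, protocol, port, device,
                        datetime.now(timezone.utc).isoformat())
    audit_trail.append(asdict(rec))

def query_by_device(audit_trail: list[dict], device: str) -> list[dict]:
    """Historical review: all recorded GenAI sessions for one host."""
    return [r for r in audit_trail if r["device"] == device]

trail: list[dict] = []
log_session(trail, "10.0.0.5", "104.18.2.161", "tls", 443, "host-42")
log_session(trail, "10.0.0.7", "104.18.2.161", "tls", 443, "host-99")
```

In practice the audit trail would live in a searchable store rather than a list, but the trade-off is the same: cheap, quiet collection now, investigation on demand later.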

Advantages:

Fewer false positives and less operational fatigue for SOC teams; support for long-term trend analysis and audit or compliance reporting.

Limitations:

Important events may go unnoticed if logs are not reviewed regularly, and full packet capture is available only when specific alerts escalate.

In practice, many organizations use this approach as a baseline, adding active monitoring only for high-risk departments and activities.

Detect and prevent dangerous file uploads

Uploading files to GenAI platforms introduces higher risk, especially when they contain PII, PHI, or proprietary data. Fidelis NDR can detect when such uploads occur. Effective AI security and data protection means scrutinizing these movements closely.

Process:

The system recognizes when a file is uploaded to a GenAI endpoint, and DLP policies automatically inspect the file content when rules match. The full session context is captured even without a user login, and device attributes provide accountability.
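A toy version of this inspection step might look like the following: uploaded content is scanned against simple PII patterns, and attribution falls back to the device when no user identity is available. The patterns, field names, and verdict format are invented for illustration and are far simpler than a production DLP ruleset.

```python
# Sketch of DLP content inspection for a file upload to a GenAI endpoint.
# PII_PATTERNS and the verdict dict are illustrative assumptions.
import re

# Simple example patterns: US SSN and credit-card-like digit sequences.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_upload(file_bytes: bytes, dst_host: str, device: str) -> dict:
    """Inspect uploaded content and return a verdict with session context.
    Attribution is at the device level when no user login is present."""
    text = file_bytes.decode("utf-8", errors="ignore")
    matches = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    return {
        "dst_host": dst_host,
        "device": device,          # asset-level accountability
        "violations": matches,
        "action": "block" if matches else "allow",
    }

verdict = inspect_upload(b"Employee SSN: 123-45-6789", "chat.openai.com", "host-42")
```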

Advantages:

Detects and halts unauthorized data-exfiltration events.

Considerations:

Monitoring applies only to uploads that traverse managed network paths, and attribution is at the asset or device level unless user authentication is present.

Weighing Your Options: Which Approach Is Best?

Real-time URL Alerts

Pros: enables rapid intervention and forensic investigation, supporting incident triage and automated response. Cons: can increase noise and workload in high-activity environments, and requires routine rule maintenance as endpoints evolve.

Metadata only mode

Pros: low operational overhead, strong support for audits and post-event reviews, and security attention stays focused on true anomalies. Cons: not suited to stopping immediate threats.

File upload monitoring

Pros: targets real data-exfiltration events and provides detailed records for compliance and forensics. Cons: attribution is only at the asset level when no login is present, and off-network or unmonitored channels remain blind spots.

Building comprehensive AI data protection

A comprehensive GenAI DLP program includes:

Maintaining a live list of GenAI endpoints and periodically updating monitoring rules; assigning monitoring modes (alerts, metadata, or both) according to risk and business needs; and working with compliance and privacy leaders.

Organizations should periodically review policy logs and update their systems to address new GenAI services, plugins, and emerging AI-driven business uses.

Best Practices for Implementation

A successful deployment requires:

Clear platform inventory management and regular policy updates; continuous monitoring and adaptation to evolving AI technologies; user education programs that promote responsible AI use; a risk-based monitoring approach tailored to organizational needs; and integration with existing SOC workflows and compliance frameworks.

Key Takeaways

As Fidelis NDR shows, modern network-based data-loss-prevention solutions help businesses balance strong AI security with the adoption of generative AI. By combining alert-based, metadata, and file-upload controls, organizations can create a flexible monitoring environment where productivity and compliance coexist. Security teams retain the context and reach they need to handle new AI risks, while users continue to benefit from GenAI technology.

Did you find this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter, and LinkedIn to read more of the exclusive content we post.
