Securing AI to Profit from AI

October 21, 2025

Artificial intelligence (AI) holds great promise for improving cyber defenses and making security practitioners' lives easier. It can help teams eliminate alert fatigue, identify patterns faster, and achieve levels of scale that human analysts alone cannot match. But realizing that potential depends on protecting the systems that make it possible.

All organizations experimenting with AI in their security operations are expanding their attack surface, whether consciously or not. Without clear governance, strong identity management, and visibility into how AI makes decisions, even well-intentioned deployments can create risks faster than they can be mitigated. To truly benefit from AI, defenders must protect it with the same rigor they apply to other critical systems. That means establishing trust in the data it learns from, accountability for the actions it performs, and monitoring of the results it produces. Properly secured, AI can augment rather than replace human capabilities, enabling practitioners to work smarter, respond faster, and defend more effectively.

Establishing trust in agentic AI systems

As organizations begin to integrate AI into their defense workflows, identity security becomes the foundation of trust. Every model, script, or autonomous agent running in production represents a new identity that can access data, issue commands, and influence defense outcomes. If these identities are not properly managed, tools meant to increase security can quietly become a source of risk.

The advent of agentic AI systems makes this especially important. These systems do more than analyze: they may act without human intervention, triaging alerts, enriching context, or triggering response playbooks under delegated authority from human operators. Each action is effectively a transaction of trust, and that trust must be bound to an identity, authenticated through policy, and auditable end to end.

The same principles that protect people and services should be applied to AI agents:

  • Scoped credentials and least privilege ensure that every model or agent can access only the data and functionality needed for its task.
  • Strong authentication and key rotation prevent impersonation and credential leaks.
  • Activity provenance and audit logs allow every AI-initiated action to be tracked, verified, and, where necessary, reverted.
  • Segmentation and isolation prevent cross-agent access, ensuring that one compromised process cannot affect others.

In practice, this means treating every agentic AI system as a first-class identity within the IAM framework. Like users and service accounts, each requires a defined owner, a lifecycle policy, and a monitoring scope. Capabilities often change faster than designs anticipate, so defense teams must continually validate what agents can actually do, not just what they are intended to do. Once a foundation of identity is established, defenders can turn their attention to broader system security.
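The identity principles above can be illustrated with a minimal sketch. The names here (`AgentIdentity`, `POLICY`, `audit_log`, the agent and scope labels) are illustrative assumptions for this article, not part of any specific IAM product:

```python
"""Sketch: treating AI agents as first-class IAM identities with
least-privilege scopes and an auditable action trail."""
from dataclasses import dataclass
from datetime import datetime, timezone

# Least-privilege policy: each agent identity gets only the scopes its task needs.
POLICY = {
    "triage-agent": {"alerts:read", "tickets:write"},
    "enrichment-agent": {"alerts:read", "threatintel:read"},
}

audit_log = []  # in production: an append-only, tamper-evident store

@dataclass
class AgentIdentity:
    name: str
    owner: str  # every agent requires a defined human owner

    def act(self, scope: str, action: str) -> bool:
        """Authorize an action against policy and record it either way."""
        allowed = scope in POLICY.get(self.name, set())
        # Activity provenance: log every attempt, permitted or denied.
        audit_log.append({
            "agent": self.name,
            "owner": self.owner,
            "scope": scope,
            "action": action,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return allowed

triage = AgentIdentity("triage-agent", owner="soc-lead@example.com")
triage.act("alerts:read", "fetch new alerts")    # within scope: permitted
triage.act("threatintel:read", "pull IOC feed")  # out of scope: denied, but logged
```

The point of the sketch is the pairing: the same call that enforces least privilege also produces the provenance record, so no agent action can occur outside the audit trail.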

Securing AI: Best practices for success

Securing AI starts with securing the systems that make it possible: the models, data pipelines, and integrations built into day-to-day security operations. Just as networks and endpoints are protected, AI systems must be treated as mission-critical infrastructure requiring layered, continuous defense.

The SANS Secure AI Blueprint outlines a Protect AI track that provides a clear starting point. Built on the SANS Critical AI Security Guidelines, this blueprint defines six control domains that directly translate into practice.

  • Access control: Enforce least privilege and strong authentication for all models, datasets, and APIs, and continually record and review access to prevent unauthorized use.
  • Data control: Validate, sanitize, and classify all data used for training, augmentation, or inference; secure storage and lineage tracking reduce the risk of model poisoning and data leakage.
  • Deployment strategy: Harden AI pipelines and environments with sandboxing, CI/CD gating, and red teaming before release, treating deployments as controlled, auditable events rather than experiments.
  • Inference security: Protect models from prompt injection and misuse by enforcing input/output validation, guardrails, and escalation paths for high-impact actions.
  • Monitoring: Continuously observe model behavior and output for drift, anomalies, and signs of compromise; effective telemetry lets defenders detect compromises before they spread.
  • Model security: Verify model versions, signatures, and integrity throughout the lifecycle to ensure reliability and prevent unauthorized swaps or retraining.
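As one concrete instance of the model-security domain, an artifact's digest can be pinned at release time and checked before loading. This is a minimal sketch; the manifest structure and file names are assumptions, and real deployments would typically use signed manifests rather than a bare dictionary:

```python
"""Sketch: refuse to load a model artifact whose digest does not match
the value recorded at release, blocking unauthorized swaps or retraining."""
import hashlib
from pathlib import Path

# Digests recorded at release time (hypothetical registry entry).
MANIFEST = {
    "detector-v3.bin": "<sha256 recorded at release>",
}

def sha256_of(path: Path) -> str:
    """Stream the file so large model weights don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path) -> bool:
    """True only for a known artifact whose current digest matches the manifest."""
    expected = MANIFEST.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

Wiring `verify_model` into the deployment gate (rather than calling it ad hoc) turns the integrity check into the "controlled, auditable event" the deployment-strategy domain calls for.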

These controls align directly with NIST's AI Risk Management Framework and the OWASP Top 10 for LLM Applications, which highlight the most common and critical vulnerabilities in AI systems, from prompt injection and insecure plugin integration to model poisoning and data leakage. Applying the frameworks' mitigations within these six domains translates guidance into operational defenses. With these foundations in place, your team can focus on using AI responsibly: knowing when to trust automation and when to keep humans involved.

Balancing augmentation and automation

AI systems can assist human practitioners like interns who never sleep. Security teams, however, must differentiate between what to automate and what to keep under human oversight. Some tasks benefit from full automation, especially those that are reproducible, measurable, and low-risk when errors occur. Others demand direct human oversight because context, intuition, or ethics matter more than speed.

Threat-intelligence enrichment, log parsing, and alert deduplication are prime candidates for automation: data-intensive, pattern-driven processes that favor consistency over creativity. In contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here AI should assist by surfacing indicators, suggesting next steps, and summarizing findings, while practitioners retain decision-making authority.

Finding that balance requires process-design maturity. Security teams must categorize workflows by their margin for error and the cost of automation failure. If the risk of false positives or missed nuance is high, keep humans in the loop. If accuracy can be measured objectively, let AI accelerate the work.
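The categorization rule above can be sketched as a simple decision function. The field names, thresholds, and mode labels are illustrative assumptions, not SANS guidance:

```python
"""Sketch: route a security workflow to an automation mode based on the two
criteria in the text: measurable accuracy and cost of automation failure."""
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    accuracy_measurable: bool  # can the AI's output be scored objectively?
    failure_cost: str          # "low", "medium", or "high"

def automation_mode(w: Workflow) -> str:
    if w.failure_cost == "high":
        return "human-decides"      # AI summarizes; a human retains authority
    if w.accuracy_measurable and w.failure_cost == "low":
        return "full-automation"    # reproducible, measurable, low-risk
    return "human-in-the-loop"      # AI proposes; a human approves

print(automation_mode(Workflow("alert deduplication", True, "low")))   # full-automation
print(automation_mode(Workflow("incident scoping", False, "high")))    # human-decides
```

Checking failure cost before measurability encodes the article's priority: when the cost of a miss is high, even a well-measured model stays advisory.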

Join us at SANS Surge 2026!

We’ll delve deeper into this topic in our keynote at SANS Surge 2026 (February 23-28, 2026), exploring how security teams can ensure they can safely rely on their AI systems. If your organization is rapidly advancing its AI adoption, this event can help you make that transition more securely. Join us to connect with peers, learn from experts, and see what secure AI really looks like.

Register for SANS Surge 2026 here.

Note: This article was contributed by Frank Kim, SANS Institute Fellow.
