
Agentic AI is already running in production environments in many organizations today. It performs tasks, consumes data, and takes actions, perhaps without any meaningful involvement from your security team. The industry debate has largely framed this as a policy issue: to allow, restrict, or monitor. But that framework misses the point.
The more pressing question is whether security professionals actually understand what they are dealing with. In most organizations, this is not the case at this time. And that gap grows with each passing week.
We cannot guarantee what we do not understand.
The basic principles of information security have not changed. In other words, to meaningfully defend a technology, you need to truly understand it fluently.
Think about firewalls. If you don’t understand your network, you can’t configure it properly. When cloud computing came along, organizations that skipped the basics were left with an environment they could never understand, buying tools and writing policies that they had no real control over. Today, we have cloud security as a discipline of its own because technology required practitioners to become deeply familiar with cloud security before security followed suit.
The same dynamics are playing out in AI, at a faster pace and with higher stakes.
The practical impact of delays to agent AI goes beyond technical issues. Security teams that cannot speak the language of AI engineering—those who cannot challenge design decisions, suggest actionable controls, or ask informed questions—will be ignored. Businesses operate without security teams, not out of malicious intent, but because a security team that can’t really engage with technology is not a useful partner in technology decisions. This has happened with every major technology change over the past 20 to 30 years. AI is no exception.
The starting point is engagement. Try building an agent. Try tools that developers are already using. This practical familiarity is the beginning of real understanding, and real understanding makes everything else possible.
Three categories of agents, three categories of risks
The agent AI landscape is wide-ranging, and risk profiles vary widely across it. It is worth having a clear understanding of the three categories.
The first is general-purpose coding and productivity agents, tools like Claude Code and GitHub Copilot. These are already integrated into developer and engineering workflows across your organization. It is used whether or not it is officially approved. What data you can access, how you can interact with your codebase, and what actions you can take are currently baseline security knowledge.
The second is a vendor-built agent that leverages the Model Context Protocol (MCP). MCP is an integration layer that allows agents to connect to external services and act on their behalf. Almost every major vendor has or is actively building MCP servers in production. In practice, this means that an agent that manages a user’s calendar, email, or internal ticketing system can receive input from these channels and act on it. Malicious calendar invites with hidden instructions in the event description are a real attack vector. The agent reads it, interprets the embedded prompts, and executes them. This is a real attack surface that requires deliberate configuration and security review.
The third category is custom agents built by individual users, and this is where the dynamics are particularly interesting. For years, there has been a real barrier between security professionals who understand the risks and the code running in their environments. Most security professionals are not programmers. Building custom tools required development skills that were not widely distributed across the security team.
That barrier is gone.
Agent AI allows anyone in your organization to build functional tools such as automation, workflows, and agents that give you access to real-world systems without writing traditional code. For security teams, this is really valuable. Incident investigation, forensic triage, and threat hunting workflows – these can be accelerated if practitioners can build the tools they actually need. But the same functionality extends to all other teams as well. Marketing, finance, operations — now anyone can build an agent. Many people will. Most of these agents do not undergo security reviews before they go into production. This is another form of supply chain problem.
Fees for being late
There is a consistent pattern when security teams are not keeping up with key technology migrations.
First, the rest of the organization works without any security input. Deployed by developers, adopted by business departments, with security considered either only formally or not at all. Second, exposure is compounded. The more powerful agents an organization deploys, the more access those agents need. A wide range of privileges make agents useful, including access to calendars, communication platforms, file systems, code repositories, and internal APIs. This access also increases the explosion radius if something goes wrong.
Agents with access to both the terminal and the email inbox can operate through either channel and work through the other channel. This is the lateral movement path that an attacker looks for. To reason about this, we need to understand how the agent was constructed. Such understanding can only come from genuine engagement with technology.
skills that are important now
Building agent-based AI security capabilities requires two different layers of knowledge.
The first is to understand how AI applications are designed from a practitioner’s perspective rather than a data scientist’s perspective. What are the components of an AI application? How do agents consume input, concatenate tools, and produce output? What does a session with an agent connected to an MCP actually look like from an access control perspective? This is the foundation that makes everything else possible.
The second layer is currency. The tools and threat landscape surrounding AI is rapidly changing. Vendors are building security controls for AI systems, but most are still in the maturity stage. Open source frameworks are emerging. OWASP and others publish threat taxonomies that evolve weekly. Once the foundational layer is in place, staying up to date becomes an ongoing discipline. That means you’ll know which tools are worth evaluating, which frameworks are gaining traction, and what questions to ask when vendors offer solutions.
This second point is more important than you might think. Security teams are already being approached by vendors selling AI security products. Without a basic knowledge of how these applications are built, it’s almost impossible to have a successful conversation. You can’t tell the difference between a well-designed control and a marketing wrapper unless you understand what you’re trying to control.
Configuration as a security control
Many agent AI deployments come with risks. This is not because the underlying tooling is fundamentally broken, but because it was introduced without security-conscious configuration.
Consider a self-hosted AI assistant connected to a popular Telegram-like communication channel. Without proper controls, agents can respond to anyone who sends them a message. It’s a wide open entry point. A simple configuration change (pairing the agent with a single trusted account) eliminates most of that risk. Making one decision early can have important security consequences.
The broader principle is scope. Agents built to manage calendars cannot access the device. Agents that process incoming requests must not have write access to the code repository. Limiting the range of the agent to its intended function limits the blast radius and reduces the attack surface for exploitation.
This tension is real. Powerful agents require broad access to be useful. It’s a trade-off that organizations resist. To find the right balance, security needs to be involved early in the design process, before the architecture is set and before permissions are already set.
Stay ahead with SANSFIRE 2026
Organizations that build true AI security fluency will be in a position to shape how these systems are deployed. Those who arrive late find themselves once again applying controls to an architecture that was already determined without them.
This July, I will be teaching SEC545: GenAI and LLM Application Security at SANSFIRE 2026. This course covers how AI applications are actually built, how agent systems actually work, the real attack surfaces that security teams need to understand, and the tools and controls available to address them. This includes hands-on work with techniques such as model scanning to detect compromised models before they run in your environment. For practitioners who want to approach AI systems from a true foundation of understanding, this is the place to start.
Click here to register for SANSFIRE 2026.
Note: This article was professionally written and contributed by Ahmed Abugharbia, a SANS Certified Instructor.
Source link
