
Most people know the story of Paul Bunyan: the giant lumberjack, his trusty axe, and the steam-powered saw that promised to outdo him. Paul strained and swung harder, the way he always had, and still lost by a quarter of an inch. His mistake was not losing the contest. His mistake was believing he could beat a new kind of tool through effort alone.
Security professionals face a similar moment. AI is the modern steam-powered saw: fast in some areas, unfamiliar in others, and disruptive to long-held habits. The instinct is to protect what we know rather than learn what the new tool can actually do. But if we follow Paul’s approach, we will end up on the wrong side of changes that are already underway. The better path is to learn the tool, understand its capabilities, and use it to deliver results that make the job easier.
The role of AI in everyday cybersecurity operations
AI is now embedded in nearly every security product we touch. Endpoint protection platforms, email filtering systems, SIEMs, vulnerability scanners, intrusion detection tools, ticketing systems, and even patch management platforms tout some form of “intelligent” decision-making. The challenge is that most of this intelligence lives behind the curtain. Vendors protect their models as proprietary IP, so security teams see only the output.
This means a model is silently making risk decisions in an environment where humans are nominally still in charge. Those decisions rest on statistical inference rather than an understanding of the organization, its people, or its operational priorities. You cannot inspect an opaque model, and you cannot rely on it to capture nuance or intent.
As a result, security professionals must build or adapt their own AI-powered workflows. The goal is not to rebuild commercial tools. The goal is to compensate for blind spots by building capabilities that you can control. When you design a small-scale AI utility, you decide what data to learn from, what to consider a risk, and how to operate. You regain influence over the logic that shapes your environment.
Remove friction and increase speed
Much of security work is translational. If you’ve ever written a complex jq filter, SQL query, or regular expression just to pull a small piece of information from a log, you know how time-consuming that transformation step can be. These steps slow down an investigation not because they are difficult, but because they interrupt the flow of thought.
AI can take away much of that translation burden. For example, I’ve been building small tools that put AI on the front end and a query language on the back end. Instead of writing a query yourself, you can ask what you want in plain English and the AI will generate the correct syntax to extract it. This becomes a human-to-computer translator, allowing you to focus on what you’re trying to investigate, rather than how the query language works.
In practice, this allows you to:
- Retrieve logs associated with a specific incident without writing the jq yourself
- Extract the data you need using AI-generated SQL or regular-expression syntax
- Build small AI-assisted utilities that automate these repetitive query steps
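The "AI front end, query language back end" pattern described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not a production tool: the `ask_model` callable is a placeholder for whatever LLM client you actually use, and the `auth_logs` schema is invented for the example. Injecting the model as a parameter keeps the flow testable offline.

```python
# Minimal sketch of an AI front end over a query-language back end.
# `ask_model` is a placeholder for your real LLM client (hosted API,
# local model, etc.); the schema below is a made-up example.

SCHEMA = "auth_logs(timestamp TEXT, user TEXT, src_ip TEXT, action TEXT)"

def build_prompt(question: str, schema: str = SCHEMA) -> str:
    """Wrap a plain-English question in a prompt that asks for SQL only."""
    return (
        "You translate analyst questions into SQLite SQL.\n"
        f"Table schema: {schema}\n"
        f"Question: {question}\n"
        "Respond with a single SQL statement and nothing else."
    )

def generate_query(question: str, ask_model) -> str:
    """Send the prompt to the model and lightly sanity-check the reply."""
    sql = ask_model(build_prompt(question)).strip().rstrip(";")
    if not sql.lower().startswith("select"):
        raise ValueError(f"Model did not return a SELECT statement: {sql!r}")
    return sql

# A canned 'model' response so the sketch runs offline:
fake_model = lambda prompt: (
    "SELECT user, src_ip FROM auth_logs WHERE action = 'failed_login';"
)
query = generate_query("Which users had failed logins, and from where?", fake_model)
```

The sanity check is deliberately simple; in practice you would also validate the generated query against your schema (or run it read-only) before trusting it.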
Once AI handles the iterative transformation and filtering steps, security teams can focus their attention on higher-order inference, the part of the job that actually advances the investigation.
While AI can hold more information than any human, effective security is not about knowing everything. It is about applying what matters in the context of an organization’s mission and risk tolerance. AI will make decisions that are mathematically correct but contextually wrong. It can approximate nuance, but it cannot truly understand it. It can simulate ethics, but it cannot feel responsible for the results. Statistical inference is not, and never will be, moral inference.
Our value across offensive, defensive, and investigative roles is not memorizing information. It is applying judgment, understanding nuance, and pointing the tools toward the right outcome. AI will amplify our actions, but the decisions remain ours.
How security professionals can get started: Skills to develop now
Much of today’s AI work happens in Python, which has traditionally been a barrier for many security professionals. AI changes that dynamic. You can express your intent in plain English and let the model generate most of the code. Your job is to fill the remaining gaps with judgment and technical literacy.
That still requires a baseline level of fluency. You need enough Python to read and improve what the model produces, and enough understanding of how the system interprets your input to recognize when its logic is off. Even if you never build a model yourself, a hands-on grasp of core machine learning concepts helps you understand what your tools are doing beneath the surface.
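To make "enough Python to read and improve what the model produces" concrete, here is the kind of bug that review catches. A commonly generated IPv4 regex looks plausible but accepts impossible octets like 999; a few extra lines fix it. Both the naive pattern and the corrected check are illustrative examples, not drawn from any specific tool.

```python
import re

# A plausible-looking, commonly generated IPv4 pattern. It matches the
# right shape, but \d{1,3} happily accepts octets up to 999.
naive = re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$")

def is_ipv4(addr: str) -> bool:
    """Stricter check: four dot-separated integers, each 0-255."""
    parts = addr.split(".")
    return len(parts) == 4 and all(p.isdigit() and int(p) <= 255 for p in parts)

print(bool(naive.match("999.999.999.999")))  # True  -- the naive pattern accepts it
print(is_ipv4("999.999.999.999"))            # False
print(is_ipv4("203.0.113.7"))                # True
```

Spotting this gap takes only basic fluency, but it is exactly the judgment the model cannot supply on its own.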
With that foundation, AI becomes a force multiplier. You can build targeted utilities to analyze internal data, use language models to compress information that would take hours to process manually, and automate the mundane steps that slow down investigations, penetration tests, and forensic workflows.
Here are some specific ways to start developing these skills:
- Audit your tools: Map where AI is already running in your environment and understand what decisions it makes by default.
- Engage actively with AI systems: Don’t treat the output as final. Feed the model better data, question its results, and adjust its behavior where possible.
- Automate one task every week: Choose a recurring workflow and streamline part of it with Python and AI models. Small victories create momentum.
- Build light ML literacy: Learn the basics of how models interpret instructions, where they break down, and how to steer them back on course.
- Join community learning: Share what you’ve built, compare approaches, and learn from others going through the same transition.
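The "automate one task every week" habit can start very small. The sketch below is one illustrative example of a weekly win: pulling IPv4-looking strings out of raw log text and counting them, the sort of repetitive triage step worth scripting once. The sample log lines are invented for the demonstration.

```python
import re
from collections import Counter

# One small weekly automation: extract IPv4-looking strings from raw log
# text and count how often each appears. Sample data is illustrative.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def top_ips(log_text: str, n: int = 3):
    """Return the n most frequent IPv4-looking strings in the log text."""
    return Counter(IPV4.findall(log_text)).most_common(n)

sample = """\
Failed login for admin from 203.0.113.7
Failed login for root from 203.0.113.7
Accepted login for alice from 198.51.100.20
"""
print(top_ips(sample))  # [('203.0.113.7', 2), ('198.51.100.20', 1)]
```

A model can generate a first draft of a utility like this in seconds; your job is to review it, wire it to your real log source, and keep the result somewhere the whole team can reuse it.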
These habits compound over time. They transform AI from an opaque feature in someone else’s product into a capability you can understand, direct, and use with confidence.
Dig deeper with SANS 2026
AI is changing the way security professionals work, but it will not diminish the need for human judgment, creativity, and strategic thinking. Understanding your tools and guiding them with intention amplifies your abilities; it does not replace them.
This topic will be covered in more depth in the SANS 2026 keynote. If you want practical, actionable guidance for building AI fluency across defensive, offensive, and investigative domains, join us there.
Click here to register for SANS 2026.
Note: This article was written by SANS Fellow Mark Baggett.
