How to integrate AI into modern SOC workflows

December 30, 2025

Artificial intelligence (AI) is rapidly being introduced into security operations, yet many practitioners still struggle to turn early experiments into consistent operational value. The root cause is that SOCs are implementing AI without a deliberate approach to operational integration: some teams treat it as a shortcut around broken processes, while others try to apply machine learning to problems that are not well defined.

The results of the 2025 SANS SOC survey reinforce that disconnect. While a significant portion of organizations are already experimenting with AI, 40% of SOCs use AI or ML tools without defining them as part of their operations, and 42% rely on “out-of-the-box” AI/ML tools without any customization. The result is a common pattern: AI exists within the SOC, but it is not operationalized. Analysts use it informally, often with mixed confidence, while leadership has yet to establish a consistent model for where AI belongs, how its output is validated, and which workflows are mature enough to benefit from expansion.

AI can realistically improve SOC capabilities, maturity, process repeatability, and even staff competency and satisfaction. This only works if the team narrows down the scope of the problem, validates the logic, and treats the output with the same rigor one would expect from engineering work. The opportunity lies not in creating new categories of work, but in improving existing categories and enabling testing, development, and experimentation to extend existing functionality. When AI is applied to specific, well-defined tasks and combined with a clear review process, its impact becomes more predictable and more useful.

Here are five areas where AI can provide reliable support to SOCs.

1. Detection engineering

Detection engineering is essentially building high-quality alerts that can be placed into your SIEM, MDR pipeline, or another operational system. For detection logic to run in production, it must be developed, tested, refined, and operated with a high level of confidence, leaving little room for ambiguity. It is also where AI tends to be applied ineffectively.

Don’t assume that AI will fix DevSecOps flaws or solve problems in your alert pipeline unless that is the specific outcome you have scoped it to deliver. AI is useful when applied to well-defined problems that can support ongoing operational validation and adjustment. One clear example from the SANS SEC595: Applied Data Science and AI/ML for Cybersecurity course is a machine learning exercise that examines the first 8 bytes of a packet stream to determine whether the traffic can be reassembled as DNS. If the reconstruction does not match what was previously seen for DNS, the system generates a high-fidelity alert. Its value comes from the precision of the task and the quality of the training process, rather than from widespread automation. The expected implementation is to inspect all flows over UDP/53 (and TCP/53) and evaluate the reconstruction loss from a tuned machine learning autoencoder. Streams whose loss exceeds the threshold are flagged as anomalies.

This detailed example shows a form of AI-driven detection engineering that can actually be implemented. Examining the first 8 bytes of a packet stream and testing whether they can be reassembled as DNS, based on patterns learned from historical traffic, creates a clear and testable classification problem. If those bytes do not match what is normal for DNS, the system raises an alert. AI can help here because the scope is narrow and the evaluation criteria are objective. The approach can be more effective than heuristic, rule-driven detection because the model learns to encode and decode what it has seen; anything unfamiliar (in this case, traffic that is not genuine DNS) cannot be encoded and decoded accurately. What AI cannot do is solve vaguely defined alerting problems or fill in missing engineering work.
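For readers who want to see the shape of such a detector, below is a minimal sketch of the byte-level reconstruction idea, not the SEC595 exercise itself. The tiny scikit-learn autoencoder, the 8-byte feature encoding, and the threshold handling are illustrative assumptions; a production model would be trained and tuned on far more traffic.

```python
# Minimal sketch of byte-level DNS anomaly detection via reconstruction loss.
# Assumption: dns_payloads is a collection of raw UDP/53 payloads (bytes)
# captured from known-good DNS traffic; model size and threshold are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

def first_bytes_vector(payload: bytes, n: int = 8) -> np.ndarray:
    """Take the first n bytes of a payload, zero-padded, scaled to [0, 1]."""
    padded = payload[:n].ljust(n, b"\x00")
    return np.frombuffer(padded, dtype=np.uint8).astype(np.float32) / 255.0

def train_autoencoder(dns_payloads) -> MLPRegressor:
    """Fit a small autoencoder (input == target) on known-good DNS payloads."""
    X = np.stack([first_bytes_vector(p) for p in dns_payloads])
    model = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
    model.fit(X, X)  # learn to reconstruct normal DNS byte patterns
    return model

def reconstruction_loss(model: MLPRegressor, payload: bytes) -> float:
    """Mean squared error between a flow's first bytes and their reconstruction."""
    x = first_bytes_vector(payload).reshape(1, -1)
    return float(np.mean((model.predict(x) - x) ** 2))

def is_anomalous(model: MLPRegressor, payload: bytes, threshold: float) -> bool:
    """Flag port-53 flows whose reconstruction loss exceeds the tuned threshold."""
    return reconstruction_loss(model, payload) > threshold
```

The design choice to alert on reconstruction loss rather than on signatures is what lets a detector like this flag traffic it has never seen before.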

2. Threat hunting

Threat hunting is often portrayed as a place where AI automatically “discovers” threats, but that misses the purpose of the workflow. Hunting is not production detection engineering. This should be the research and development function of the SOC, where analysts explore ideas, test hypotheses, and evaluate signals that are not strong enough to operationalize detection. This is necessary because the vulnerability and threat landscape is rapidly changing, and security operations must constantly adapt to the volatility and uncertainty of the information assurance world.

This job is exploratory, so AI is a good fit here. Analysts can use it to try approaches, compare patterns, and see if a hypothesis is worth investigating. It speeds up the initial stages of analysis, but it does not determine what is important. The model is a useful tool, not the final authority.

Hunting also has a direct impact on detection engineering. AI can help generate candidate logic or highlight anomalous patterns, but it is still up to the analyst to interpret the environment and decide what the signals mean. If you cannot evaluate the AI’s output or explain why something is important, exploration may not yield anything useful. The advantage of AI here is not certainty or judgment, but speed and breadth of exploration. One caution: practice operational security (OpSec) and information protection, and provide hunt-related information only to authorized systems, AI tools, and people.

3. Software development and analysis

The modern SOC runs on code. Analysts write Python to automate investigations, build PowerShell tools for host interrogation, and create SIEM queries tailored to their environments. This constant need for programming makes AI a natural fit for software development and analysis. It can draft code, improve existing snippets, and accelerate the construction of logic that analysts previously built by hand.

But AI does not understand the underlying problem. Analysts must interpret and validate everything the model produces. If the analyst does not have a deep understanding of a particular domain, the AI output may sound right even when it is wrong, and the analyst may not be able to tell the difference. This poses a distinct risk: analysts may ship or rely on code they do not fully understand and have not properly tested.

AI is most effective here when it reduces mechanical overhead, helping teams reach a usable starting point faster, whether the code is written in Python, PowerShell, or a SIEM query language. However, the responsibility for accuracy rests with the people who understand the systems, the data, and the operational implications of running that code in a production environment.

The author suggests that teams create sound style guidelines for their code and use only approved (that is, tested and vetted) libraries and packages. Include the guidelines and dependency requirements as part of every prompt, or use AI/ML development tools that allow these specifications to be configured.
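One way to enforce that suggestion mechanically is to check AI-generated drafts against an approved-library allowlist before human review. The sketch below is a hedged example; the allowlist contents and the draft snippet are hypothetical.

```python
# Minimal sketch: flag AI-drafted code that imports unapproved libraries.
# APPROVED_MODULES and the draft snippet are hypothetical examples.
import ast

APPROVED_MODULES = {"json", "re", "datetime", "requests"}

def unapproved_imports(source: str) -> set:
    """Return top-level modules imported by the snippet that are not approved."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - APPROVED_MODULES

draft = "import requests\nimport paramiko\n"  # e.g. a model-generated helper
print(unapproved_imports(draft))  # {'paramiko'} -> send back for human review
```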

4. Automation and orchestration

Automation has long been a part of SOC operations, but AI is reshaping the way teams design these workflows. Instead of manually piecing together action sequences or converting runbooks into automation logic, analysts can now use AI to draft scaffolding. AI can also outline steps, suggest branching logic, and translate plain language descriptions into the structured format required by orchestration platforms.

However, AI cannot decide when automation should run. The core question in orchestration remains the same: should an automated action be taken immediately, or should the information be presented for an analyst to review first? The answer depends on your organization’s risk tolerance, the sensitivity of your environment, and the specific action you are considering.

Regardless of whether the platform is SOAR, MCP, or another orchestration system, the responsibility for initiating actions should rest with the human, not the model. AI can help build and improve workflows, but it should not have the authority to enable them. Clear boundaries keep automation predictable, explainable, and aligned with the SOC’s risk posture.

An organization’s comfort level with automation sets the threshold at which rapid, automated action is allowed. That comfort comes from extensive testing and from people responding in a timely manner to the actions automated systems take.
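To make that boundary concrete, here is a minimal sketch of a workflow step that defaults to requiring analyst approval before any action executes. The step name, action, and approval mechanism are hypothetical and not tied to any particular SOAR or orchestration platform.

```python
# Minimal sketch of a human-approval gate for automated actions.
# The isolate-host action and approval model are illustrative only.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class WorkflowStep:
    name: str
    action: Callable[[], None]
    requires_approval: bool = True  # default to human-in-the-loop

def run_step(step: WorkflowStep, approved_by: Optional[str] = None) -> None:
    """Execute a step only if an analyst has explicitly approved it."""
    if step.requires_approval and approved_by is None:
        print(f"[pending] '{step.name}' queued for analyst review")
        return
    print(f"[run] '{step.name}' approved by {approved_by or 'policy'}")
    step.action()

isolate_host = WorkflowStep("isolate-host", lambda: print("host isolated"))
run_step(isolate_host)                                  # queued, nothing runs
run_step(isolate_host, approved_by="analyst_on_call")   # executes after approval
```

Keeping the approval flag on by default means any new automation inherits the review gate until someone deliberately relaxes it.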

5. Reporting and communication

Reporting is one of the most persistent challenges in security operations, not because teams lack technical skill, but because it is difficult to translate that skill into clear, actionable communication. The 2025 SANS SOC survey highlights how far behind the sector remains: 69% of SOCs still rely on manual or mostly manual processes for metrics reporting. This gap matters. When reporting is inconsistent, leadership loses visibility, context is diluted, and operational decisions are delayed.

AI offers an immediate and low-risk way to improve a SOC’s reporting. It can standardize structure, improve clarity, and help analysts move from raw notes to well-organized summaries, smoothing out the mechanical parts of reporting. Rather than having each analyst write in a different style or bury the reader in technical detail, AI helps produce consistent, readable output that the reader can quickly interpret. Emphasizing the SOC’s overall consistency, including moving averages and standard-deviation bounds on key metrics, is a story worth telling to management.

The value is not in making reports more sophisticated; it is in making them consistent and comparable. When every incident summary, weekly rollup, or metrics report follows a predictable structure, leaders can recognize trends sooner and prioritize more effectively. It also gives analysts back time they would otherwise spend on wording, formatting, or repetitive explanations.
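As a small illustration of the moving averages and standard-deviation bounds mentioned above, the sketch below computes rolling bounds over a weekly metric. The sample values and window size are assumptions for demonstration only.

```python
# Minimal sketch: rolling average and 2-sigma bounds for a weekly SOC metric.
# The incident counts and 4-week window are illustrative assumptions.
import pandas as pd

weekly_closed = pd.Series([42, 38, 51, 47, 60, 55, 49, 44],
                          name="incidents_closed")
rolling = weekly_closed.rolling(window=4)
summary = pd.DataFrame({
    "value": weekly_closed,
    "moving_avg": rolling.mean(),
    "upper_bound": rolling.mean() + 2 * rolling.std(),
    "lower_bound": rolling.mean() - 2 * rolling.std(),
})
print(summary.round(1))  # a consistent, comparable view for leadership reporting
```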

Are you a taker, shaper or maker? Let’s talk at SANS Security Central 2026

As teams begin experimenting with AI across these workflows, it is important to recognize that there is no single path to adoption. SOC AI adoption can be described in three useful categories: takers use AI tools as provided; shapers adjust or customize those tools to suit their workflows; and makers build something new, such as the tightly scoped machine learning detection example discussed earlier.

Each of these use cases can fit into one or more categories. You may be both a taker and a maker in detection engineering, implementing AI rules from your SIEM vendor as well as writing your own detections. In reporting, most teams are takers (using out-of-the-box ticketing system reports) as well as manual authors. You may be an automation shaper, customizing parts of vendor-provided SOAR runbooks. At the very least, I hope you are running the IOC-driven hunts provided by your vendor; that is what every SOC needs to do. Aspiring to self-directed hunting moves you into the maker category.

What matters is that each workflow has clear expectations about where AI is used, how its output is validated, how it is updated over time, and that analysts ultimately remain responsible for protecting information systems.

We will explore these topics in more detail during the keynote session at SANS Security Central 2026 in New Orleans, including how to assess your current SOC landscape and design an AI adoption model that enhances your team’s expertise. Thank you!

Register for SANS Security Central 2026 here.

Note: This article was professionally written and contributed by SANS Senior Instructor Christopher Crowley.
