
Last week, Anthropic restricted its Mythos preview model following the autonomous discovery and exploitation of zero-day vulnerabilities across all major operating systems and browsers. Palo Alto Networks' Wendy Whitmore warned that it could take only weeks or months for similar capabilities to become widespread. CrowdStrike's 2026 Global Threat Report puts the average eCrime breakout time at 29 minutes; Mandiant's M-Trends 2026 reports adversary handoff times down to 22 seconds.
Attacks are getting faster. The question is where exactly defenders are slow, because it is not where most SOC dashboards suggest.
Detection tooling has improved significantly. EDR, cloud security, email security, identity, and SIEM platforms ship with built-in detection logic that pushes MTTD for known techniques toward zero. That is real progress, and the result of years of industry-wide investment in detection engineering.
But when attackers operate on timelines measured in seconds or minutes, the question is not whether detection happens fast enough. It is what happens between the moment the alert fires and the moment someone actually picks it up.
The gap after the alert
The clock keeps running after an alert fires. An analyst has to see it, pick it up, assemble context from across the stack, investigate, make a decision, and begin responding. In most SOC environments, this sequence is where the majority of an attacker's operational window actually sits.
The analyst is working on something else. The alert sits in a queue. The context spans four or five tools. The investigation itself means querying the SIEM, reviewing identity logs, pulling endpoint telemetry, and correlating timelines. A thorough investigation, one that produces a defensible judgment rather than something closer to intuition, takes 20 to 40 minutes of hands-on work, and that assumes the analyst starts immediately, which they rarely can.
Against a 29-minute breakout window, the investigation often has not even started by the time the attacker moves laterally. Against a 22-second handoff, the alert may still be sitting in the queue.
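To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 29-minute and 22-second figures are the ones cited above; the per-stage durations are illustrative assumptions, not measured data.

```python
# Back-of-the-envelope model of the post-alert window using the figures
# cited above; the per-stage breakdown is illustrative, not measured data.

post_alert_stages_min = {
    "queue_wait": 10,              # alert sits unclaimed while analysts work other cases
    "context_assembly": 15,        # pivoting across SIEM, EDR, identity, and email tools
    "hands_on_investigation": 30,  # midpoint of the 20-40 minute range above
}

BREAKOUT_MIN = 29   # CrowdStrike 2026: average eCrime breakout time
HANDOFF_SEC = 22    # Mandiant M-Trends 2026: adversary handoff time

defender_total = sum(post_alert_stages_min.values())
print(f"Post-alert window: {defender_total} min")                    # 55 min
print(f"Deficit vs. breakout: {defender_total - BREAKOUT_MIN} min")  # 26 min behind
print(f"Adversary handoff: {HANDOFF_SEC} s")
# Note that MTTD never appears in this arithmetic: even with instant
# detection, the attacker finishes moving laterally before the
# investigation begins.
```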
MTTD does not capture any of this. It measures how quickly detection occurs, and the industry is making real progress there. But the metric stops at the alert. It says nothing about how long the post-alert window actually was, how many alerts received a full investigation versus a cursory one, or how many were bulk-closed without meaningful analysis. MTTD reports on the part of the problem the industry is already tackling in earnest. The downstream exposure, the post-alert investigation gap, shows up nowhere.
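The distinction is easy to state as code. A minimal sketch, assuming hypothetical per-incident timestamps; the field names are illustrative, not any standard schema:

```python
from statistics import mean

def mttd_minutes(incidents: list[dict]) -> float:
    """Mean time to detect: compromise to alert. This is where the metric stops."""
    return mean(i["alerted_min"] - i["compromised_min"] for i in incidents)

def post_alert_gap_minutes(incidents: list[dict]) -> float:
    """Alert to concluded investigation: the window MTTD never reports."""
    return mean(i["concluded_min"] - i["alerted_min"] for i in incidents)

# Toy data: detection is near-instant, yet the unmeasured gap dominates.
incidents = [
    {"compromised_min": 0, "alerted_min": 1, "concluded_min": 48},
    {"compromised_min": 0, "alerted_min": 2, "concluded_min": 75},
]
print(mttd_minutes(incidents))            # 1.5  -- looks excellent on a dashboard
print(post_alert_gap_minutes(incidents))  # 60.0 -- invisible on the same dashboard
```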
What changes when AI handles the investigation?
AI-powered investigation does not improve detection speed; MTTD is and remains a detection engineering metric. What AI compresses is the post-alert timeline, which is exactly where the real exposure lies.

The queue disappears. Every alert, regardless of severity or time of day, is investigated on arrival. Assembling context takes seconds rather than the 15 minutes an analyst spends pivoting between tabs. The investigation itself, reasoning over evidence, pivoting on findings, and reaching a conclusion, completes in minutes instead of an hour.
This is what we built Prophet AI for: investigating every alert with the depth and reasoning of a senior analyst, at machine speed. That means dynamically planning each investigation and querying the relevant data sources to reach transparent, evidence-backed conclusions. There are no queues and no wait times in this model, so there is no post-alert gap. For teams working toward this benchmark, we've published practical steps to reduce investigation time to under 2 minutes.
The same structural constraint applies to MDR. MDR analysts face the same post-alert bottleneck because they are bound by human investigation capacity. Moving from outsourced human investigation to AI investigation removes that ceiling entirely, and changes what can be measured about a SOC's actual performance.
The metrics that matter now
Once the post-alert window collapses, traditional velocity metrics stop being the most informative ones. An MTTI of 2 minutes is meaningful in the first reporting quarter; after that it is table stakes. The question shifts from "How fast are we?" to "How much stronger has our security posture become over time?"
Four metrics capture this.
Investigation coverage rate. What percentage of alerts receive a full investigation, with a complete set of investigative questions answered and backed by evidence? In a traditional SOC this number is typically 5-15%; the rest are skimmed, bulk-closed, or ignored. In an AI-driven SOC it should be 100%. This is the single most important metric for understanding whether your SOC actually knows what is happening in its environment.

Detection surface coverage. MITRE ATT&CK technique coverage mapped against the detection library, with gaps identified and tracked over time. This means continuously mapping the detection surface, identifying techniques with weak or no coverage, and flagging places where a single detection rule is all that stands between the organization and complete blindness to a technique. Detection engineering in an AI-driven SOC requires rethinking how this surface is maintained.

False positive feedback rate. How quickly are investigation findings fed back into detection tuning? In most SOCs this loop runs on human memory and quarterly review cycles. The goal state is continuous: findings translate directly into detection refinements, noise suppression, and signal improvement without waiting for a scheduled review.

Hunt-driven detection creation rate. How many persistent detections were created from proactive hunting findings rather than from incident response? This measures whether the hunting program is expanding detection coverage or simply generating reports. The strongest implementations tie hunting directly to detection gaps, run hypothesis-driven hunts against the techniques with the weakest coverage, and convert confirmed findings into persistent detection rules.
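A minimal sketch of how these four metrics might be computed from an alert export. The record fields and function names are assumptions for illustration, not any specific platform's schema:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    fully_investigated: bool   # full question set answered, with evidence
    false_positive: bool       # confirmed false positive after investigation
    fed_back_to_tuning: bool   # finding translated into a detection change

def investigation_coverage_rate(alerts: list[Alert]) -> float:
    """Share of alerts that received a full, evidence-backed investigation."""
    return sum(a.fully_investigated for a in alerts) / len(alerts)

def detection_surface(rules_per_technique: dict[str, int]) -> tuple[float, list[str]]:
    """rules_per_technique maps an ATT&CK technique ID to its active rule count.
    Returns the covered fraction plus techniques resting on a single rule."""
    covered = sum(1 for n in rules_per_technique.values() if n > 0)
    single_points = [t for t, n in rules_per_technique.items() if n == 1]
    return covered / len(rules_per_technique), single_points

def false_positive_feedback_rate(alerts: list[Alert]) -> float:
    """Share of confirmed false positives fed back into detection tuning."""
    fps = [a for a in alerts if a.false_positive]
    return sum(a.fed_back_to_tuning for a in fps) / len(fps) if fps else 0.0

def hunt_driven_detection_rate(new_detections: int, from_hunts: int) -> float:
    """Share of new persistent detections that originated in proactive hunts."""
    return from_hunts / new_detections if new_detections else 0.0
```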
These metrics only become meaningful once AI does the actual investigative work, but they represent a fundamentally different view of SOC performance, one that emphasizes security outcomes over operational throughput.
The Mythos disclosure crystallized something the security industry already knew but had not fully internalized: AI is accelerating attacks to a pace that human-speed investigation cannot match. The answer is not panic about AI-generated exploits. It is closing the window where defenders are actually slow, the post-alert investigation gap, and starting to measure whether that gap is closing.
Teams that move from reporting detection velocity to reporting coverage and detection improvement will have a far clearer picture of their actual risk posture. That clarity matters when attackers put AI to work.
Prophet Security's Agentic AI SOC platform investigates every alert with the depth of a senior analyst, continuously optimizes detections, and performs directed threat hunting against coverage gaps. Visit Prophet Security to see how it works.
