
The dark secret of corporate security operations is that defenders have quietly institutionalized the habit of not looking. This is not anecdote: it is backed up by a recent report that examined more than 25 million security alerts, including informational and low-severity ones, across real-world enterprise environments.
The dataset behind these findings includes telemetry from 10 million monitored endpoints and identities, 82,000 forensic endpoint investigations including live memory scans, 180 million analyzed files, 7 million IP addresses, 3 million domains and URLs, and over 550,000 phishing emails.
The patterns that emerge from this data tell a consistent story. Threat actors are systematically exploiting the predictable gaps created by limited severity-based security operations. To understand where these gaps actually exist, you need to look at the big picture of alerts, starting with the categories that most teams have been conditioned to ignore.
The 1% problem: roughly one missed threat per week
In this analysis of 25 million alerts, nearly 1% of confirmed incidents came from alerts that were initially classified as low severity or informational. For endpoints in particular, that number rose to nearly 2%.
At enterprise scale, such percentages are not noise. The average organization generates approximately 450,000 alerts per year. Hidden among them are real threats, approximately 54 per year, roughly one per week, that are never investigated under traditional SOC or MDR models. Detection did not fail. The economics of triage simply made investigation impossible.
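The capacity gap behind this is simple arithmetic. A minimal sketch of the triage math, where the alert volume comes from the article but the analyst headcount and per-alert investigation time are illustrative assumptions:

```python
# Back-of-the-envelope SOC capacity math. The alert volume is the article's
# figure; headcount and per-alert time are hypothetical assumptions.
alerts_per_year = 450_000             # average enterprise alert volume (from the article)
analysts = 5                          # hypothetical SOC headcount
minutes_per_investigation = 20        # hypothetical deep-triage time per alert
work_minutes_per_year = 220 * 8 * 60  # ~220 working days of 8 hours per analyst

capacity = analysts * work_minutes_per_year // minutes_per_investigation
coverage = capacity / alerts_per_year
print(f"Investigable alerts/year: {capacity:,} ({coverage:.0%} of volume)")
```

Under these assumptions the team can deeply investigate only a few percent of the volume, which is exactly why everything else ends up closed on the strength of a severity label alone.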
These are not theoretical risks at the end of an attacker’s wish list. These are real compromises hidden in categories of alerts that operations teams are trained to deprioritize.

EDR “mitigation” does not mean clean
The report’s endpoint findings deserve special attention because they challenge the basic assumption of most security programs that EDR remediation can be trusted at face value.
Of the 82,000 alerts that underwent live forensic memory scanning, 2,600 had active infections. Of the compromised endpoints, 51% were already marked as “mitigated” by the source EDR vendor.
In more than half of the endpoint compromises confirmed through forensic analysis, the EDR had closed the ticket and declared the threat resolved. Without memory-level forensics, these infections remain invisible: the tools most organizations rely on as their endpoint safety net report clean on machines that are not.
Malware families found running in memory during these scans include Mimikatz, Cobalt Strike, Meterpreter, and StrelaStealer. These are workhorses of active criminal and state operations, rather than obscure proof-of-concept tools.
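To illustrate why memory-level visibility matters, here is a deliberately naive sketch of scanning a process memory dump for byte patterns associated with these tools. The indicator strings and matching logic are simplified assumptions for illustration, not the report's forensic methodology:

```python
# Deliberately naive in-memory indicator scan. These byte patterns are
# illustrative stand-ins for real forensic signatures, which are far richer.
INDICATORS = {
    b"sekurlsa::logonpasswords": "Mimikatz",
    b"ReflectiveLoader": "reflective DLL loader (Cobalt Strike / Meterpreter style)",
    b"stdapi_fs_file": "Meterpreter",
}

def scan_memory(dump: bytes) -> list[str]:
    """Return names of tools whose byte patterns appear in the memory dump."""
    return [name for pattern, name in INDICATORS.items() if pattern in dump]

# A fake dump containing one Mimikatz command string.
fake_dump = b"\x00" * 64 + b"sekurlsa::logonpasswords" + b"\x00" * 64
print(scan_memory(fake_dump))
```

The point of the sketch is the vantage point, not the signatures: a tool that was "mitigated" on disk can still be found resident in memory, which is what the file-centric EDR verdict misses.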
Phishing has left the email gateway behind
The phishing data in the report reflects fundamental changes in attacker techniques that most email security architectures are not designed to capture.
Fewer than 6% of confirmed malicious phishing emails contained attachments; most relied instead on links and carefully crafted language. More importantly, attackers are moving their infrastructure onto platforms that are trusted by default, such as Vercel, CodePen, OneDrive, and even PayPal’s own billing system.
One campaign described in the report uses PayPal’s legitimate payment-request infrastructure to send fraudulent payment notifications, embedding callback phone numbers and Unicode homoglyphs in the payment notes to defeat signature-based detection. Because the email genuinely comes from PayPal, the sending domain passes every standard authentication check.
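Homoglyph abuse of this kind can be surfaced with a simple character-level check. A minimal sketch that flags Cyrillic or Greek letters masquerading inside otherwise-Latin text; the heuristic is a rough stand-in for Unicode's full confusables data:

```python
import unicodedata

def suspicious_homoglyphs(text: str) -> list[str]:
    """Flag non-ASCII characters likely standing in for ASCII letters.
    Heuristic: a character is suspicious if it is non-ASCII and its Unicode
    name marks it as a Cyrillic or Greek letter, a crude stand-in for the
    full Unicode confusables table."""
    flagged = []
    for ch in text:
        if ord(ch) < 128:
            continue
        name = unicodedata.name(ch, "")
        if "LETTER" in name and any(s in name for s in ("CYRILLIC", "GREEK")):
            flagged.append(ch)
    return flagged

# "PayPal" spelled with a Cyrillic small a (U+0430) in place of the Latin "a".
print(suspicious_homoglyphs("P\u0430yPal"))
```

A production filter would use Unicode's published confusables mapping and mixed-script detection, but even this toy check catches the lookalike character that a byte-for-byte signature misses.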
Cloudflare Turnstile CAPTCHAs have become a reliable signal of malicious intent: sites using Turnstile were consistently more likely to be phishing pages, while Google reCAPTCHA correlated with legitimate infrastructure. Attackers are turning mechanisms built to stop bots against automated security scanners instead.
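As a weak signal, the presence of Turnstile can be scored during URL analysis. A toy sketch that inspects fetched page HTML for widget markers; the marker strings are assumptions based on the vendors' documented embed snippets, and the result is one heuristic input, not a verdict:

```python
def captcha_signal(html: str) -> str:
    """Classify which CAPTCHA a page embeds, as one weak phishing signal.
    Marker strings are assumptions drawn from the vendors' embed snippets."""
    h = html.lower()
    if "challenges.cloudflare.com/turnstile" in h or "cf-turnstile" in h:
        return "turnstile"   # correlated with phishing in the report's data
    if "google.com/recaptcha" in h or "g-recaptcha" in h:
        return "recaptcha"   # correlated with legitimate infrastructure
    return "none"

page = '<script src="https://challenges.cloudflare.com/turnstile/v0/api.js"></script>'
print(captcha_signal(page))
```

A scanner would feed this label into a broader score alongside domain age, hosting platform, and content analysis rather than blocking on it alone.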
The data also identified four new gateway-bypass techniques: Base64 payloads hidden inside SVG image files, links embedded in PDF annotation metadata where surface-level scanners do not look, phishing pages dynamically loaded through legitimate OneDrive shares, and DOCX files concealing archived HTML content that carries QR codes. None of this is exotic. These are operational techniques used at scale.
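The SVG technique, for example, hides a Base64 blob inside what mail filters treat as an inert image. A minimal detection sketch that scans SVG text for long Base64-looking runs and decodes them to look for HTML or script content; the length threshold and content markers are illustrative choices, not a vetted ruleset:

```python
import base64
import re

# Long Base64-looking runs; the 40-character threshold is an arbitrary choice.
B64_RUN = re.compile(r"[A-Za-z0-9+/=]{40,}")

def svg_hidden_payloads(svg_text: str) -> list[str]:
    """Decode Base64 runs in SVG markup and return any that look like HTML/script."""
    hits = []
    for run in B64_RUN.findall(svg_text):
        try:
            decoded = base64.b64decode(run, validate=True).decode("utf-8", "replace")
        except Exception:
            continue  # not valid Base64, ignore
        if any(marker in decoded.lower() for marker in ("<script", "<html", "http")):
            hits.append(decoded)
    return hits

# A toy malicious SVG: a script redirect encoded into a <desc> element.
payload = base64.b64encode(b"<script>location='http://phish.example'</script>").decode()
svg = f'<svg xmlns="http://www.w3.org/2000/svg"><desc>{payload}</desc></svg>'
print(svg_hidden_payloads(svg))
```

The broader lesson is that "image" formats which are really XML documents deserve the same content inspection as attachments that scanners already distrust.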
Cloud telemetry reveals attackers playing the long game
The report’s cloud alert data shows a heavy concentration of defense evasion and persistence tactics, with relatively few high-impact behaviors such as lateral movement or privilege escalation appearing in the signals.
Attackers are cautious and patient. The dominant pattern is long-term access: manipulating tokens, abusing legitimate cloud features, and obfuscating activity to avoid triggering higher-severity detections. The goal is not to make noise but to remain present and undetected.
AWS misconfigurations silently exacerbate this risk. S3 accounts for approximately 70% of all cloud control violations in the dataset, with the most common issues centered on access management, server logging, and cross-account restrictions. These findings rarely trigger alerts; most are classified as low severity. And once attackers gain a foothold, the same misconfigurations can be exploited repeatedly, dramatically accelerating their next move.
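These control violations are cheap to check for once you look. A hedged sketch that evaluates a bucket configuration, represented here as a plain dict rather than a live AWS API response, against the three issue classes the data highlights; all field names are invented for illustration:

```python
# Hypothetical bucket-config dict; field names are illustrative, not the AWS API schema.
def s3_violations(cfg: dict) -> list[str]:
    """Return findings for the three misconfiguration classes in the article."""
    findings = []
    if cfg.get("public_access_block") is not True:
        findings.append("access management: public access block disabled")
    if not cfg.get("server_logging_target"):
        findings.append("server logging: access logging not configured")
    if cfg.get("allows_cross_account_write"):
        findings.append("cross-account: external accounts can write")
    return findings

bucket = {
    "public_access_block": False,
    "server_logging_target": None,
    "allows_cross_account_write": True,
}
print(s3_violations(bucket))  # three findings, each typically filed as low severity
```

In practice these inputs would come from the AWS APIs (for example via boto3), but the point stands: each finding is a one-line predicate, and each is routinely labeled too unimportant to alert on.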
Why traditional SOC and MDR can’t close this gap
This is an operational and capacity problem that, until recently, could not be solved by technology alone.
Human analysts do not scale with alert volume. As telemetry expands across endpoint, cloud, identity, network, and SaaS, every SOC eventually hits the same limit. Severity-based triage is the only way to stay within budget: automate most closures, investigate only what is deemed important, and trust that severity labels reflect reality. The 2026 data shows that trust is badly misplaced.
MDR providers face the same constraints. At human scale, roughly 60% of alerts, whether processed in-house or outsourced, go unreviewed. Adding analysts moves the limit but does not remove it. SOAR platforms automate workflows, but they require teams to design every playbook and are no substitute for actually performing the investigation.
An even more serious problem is a feedback loop that never closes. If low-severity alerts are never investigated, missed threats never surface, and detection rules that fail to catch real attacks are never fixed. The system cannot self-improve because the inputs required for improvement are never examined.
What changes when you investigate everything?
Investigating all 25 million alerts in the report required removing the constraint that has always made complete coverage impossible: human analyst capacity. In this dataset, Intezer’s AI SOC handled triage and investigation, escalating fewer than 2% of alerts to human analysts, with 98% decision accuracy and a median triage time of under one minute across all volumes.
The impact of full investigation is measurable. When every alert, regardless of severity, undergoes forensic-grade analysis, triage verdicts rest on evidence rather than on assumptions about what a low-severity label means. Early-stage threats, whose initial signals are weak, surface before they can progress. Detection engineering benefits directly as well: every investigation generates feedback that can be looped back into rule tuning at the source.
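That detection-engineering feedback loop can be pictured as a simple aggregation: every investigated alert yields a verdict, and verdicts grouped by detection rule reveal which rules surface real threats at the wrong severity. A schematic sketch with an invented data model:

```python
from collections import Counter

# Invented alert records: (rule_id, initial_severity, verdict_after_investigation).
alerts = [
    ("rule-17", "low", "malicious"),
    ("rule-17", "low", "benign"),
    ("rule-17", "low", "malicious"),
    ("rule-42", "high", "benign"),
]

# Count confirmed threats that arrived with a deprioritized severity label.
confirmed = Counter(
    rule for rule, sev, verdict in alerts
    if verdict == "malicious" and sev in ("low", "informational")
)

# Rules that repeatedly catch real threats at low severity are candidates
# for a severity bump or detection-logic change at the source.
for rule, hits in confirmed.items():
    if hits >= 2:
        print(f"{rule}: {hits} confirmed threats surfaced at low severity, retune")
```

This loop only produces data if low-severity alerts are investigated in the first place, which is the article's core argument.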
For human analysts, the practical consequence is a shift in where they spend their time. Escalations become rarer and more reliable, so analysts engage at the point of decision-making rather than burning hours on discovery and initial classification.
For the broader organization, it means a security posture that continually improves as the threat landscape changes, rather than one that merely holds steady.
To explore the full report and findings, see Intezer’s 2026 AI SOC Report for CISOs.
