
Project Glasswing proved that AI can find bugs. Who will fix it?

April 23, 2026

Last week, Anthropic announced Project Glasswing, an AI model so effective at finding vulnerabilities in software that the company took the unusual step of delaying its public release. Instead, it granted access to Apple, Microsoft, Google, Amazon, and other allied companies so they could find and patch bugs before adversaries did.

Mythos Preview, the model behind Project Glasswing, found vulnerabilities in every major operating system and browser. Some of these bugs had survived decades of human auditing, aggressive fuzzing, and open source scrutiny. One of them had sat in OpenBSD, generally considered one of the most secure operating systems in the world, for 27 years.

It’s tempting to file this as “AI lab says AI is too dangerous”, the same playbook OpenAI ran with GPT-2.

Not so fast. This time there are material differences.

Mythos didn’t just find individual CVEs. It:

  • chained four separate bugs into an exploit sequence that bypassed both the browser renderer and the OS sandbox
  • performed local privilege escalation on Linux through a race condition
  • built a 20-gadget ROP chain targeting FreeBSD’s NFS server and distributed it across packets
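One of those bug classes, the race condition, is worth a concrete illustration. The sketch below is a toy TOCTOU (time-of-check-to-time-of-use) pattern in Python; it is a generic example of the vulnerability class, not the actual Linux bug Mythos found, and `privileged_copy` is a hypothetical name.

```python
import os

# Toy TOCTOU (time-of-check-to-time-of-use) race: a privileged process
# checks a path on behalf of a less-privileged caller, then uses it.
# An attacker who swaps the path between the two steps can make the
# privileged process read a file the caller was never allowed to see.

def privileged_copy(path, dest):
    # CHECK: is the (unprivileged) caller allowed to read this file?
    if not os.access(path, os.R_OK):
        raise PermissionError(path)
    # ...race window: before the USE below, an attacker can replace
    # `path` with a symlink to a protected file, and a privileged
    # process running this code will happily copy it for them...
    # USE: open and copy, possibly reading the swapped-in target.
    with open(path, "rb") as src, open(dest, "wb") as dst:
        data = src.read()
        dst.write(data)
    return data
```

The fix is to eliminate the window, for example by opening the file first and checking permissions on the open file descriptor rather than on the path.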

Anthropic’s previous frontier model, Claude Opus 4.6, was a near-total failure at autonomous exploit development. Mythos achieved a 72.4% success rate on the Firefox JS shell task.

This is not theoretical, and it is not a prediction three to five years out. It is about to become a real-world engineering reality.

Why Project Glasswing reveals real gaps in cybersecurity

Here is the number that keeps security leaders up at night: less than 1% of the vulnerabilities discovered by Mythos have been patched.

Let’s sit with that for a moment.

The most powerful vulnerability discovery engine ever built ran against the world’s most critical software, and the ecosystem couldn’t absorb its output.

Glasswing solved the search problem.

No one solved the fix problem.

Why defenders can’t keep up: Calendar speed and car speed

This is a long-standing structural problem in the cybersecurity industry. AI has made it impossible to ignore.

Defenders operate at calendar speed. They:

  • gather intelligence
  • build campaigns
  • simulate threats
  • reduce repetition

This cycle takes approximately four days on a good day. Attackers, especially those now leveraging LLMs at every stage of their operations, move at machine speed.

Relatedly, Atlassian CISO David B. Cross will speak at the Autonomous Validation Summit on May 12 about what this looks like from the inside, why periodic testing can’t keep up with autonomous adversaries, and what defenders should do instead.

AI-powered attacks are already autonomous

Earlier this year, attackers deployed a custom MCP server hosting an LLM as part of an attack chain against FortiGate appliances.

AI handled everything:

  • automated backdoor creation
  • mapping of internal infrastructure fed directly into the model
  • autonomous vulnerability assessment
  • AI-driven execution of offensive tools to gain domain administrator access

The result? 2,516 organizations in 106 countries were compromised simultaneously. The entire chain, from initial access through credential dumping to data exfiltration, was autonomous. The only human involvement was reviewing the results afterwards.

AI-based vulnerability discovery is outpacing remediation

The difference between attacker speed and defender speed is nothing new.

What is new is that the small but alarming gap has become a gorge.

Autonomous systems like AISLE discovered 13 of 14 OpenSSL CVEs in recent releases, bugs that had withstood years of human review. XBOW surpassed all human participants in 2025 to become the top-ranked hacker on HackerOne. The median time from publication to weaponization fell from 771 days in 2018 to single-digit hours in 2024. By 2025, the majority of exploits were weaponized before they were even made public.

Now add Mythos-class discovery to this picture.

You do not automatically get a safer world. You get a tsunami of legitimate findings landing on remediation pipelines that haven’t fundamentally changed in a decade and still require human validation, organizational process, business continuity considerations, and patch cycles.

How to build a Mythos-enabled security program

The first question everyone instinctively asks after Glasswing is, “How can I find more bugs?”

That’s actually the wrong question.

The right question is, “If thousands of exploitable vulnerabilities land on your desk tomorrow morning, can your program actually handle them?”

For most organizations, the honest answer is no. The reason is not a lack of tools or talent. It’s a structural dependence on periodic, human-initiated processes designed for a world where vulnerabilities trickle in rather than flood in.

You cannot fix every vulnerability. You cannot apply every hardening option.

That’s not defeatism; it’s the practical starting point for a security program that actually works. The important question is not “How severe is this CVE?” but “Given the controls I have deployed, is this vulnerability exploitable in my environment right now?”

A Mythos-enabled security program requires three basic parts.

First: Signal-driven validation over scheduled tests

When new threats emerge, assets change, or configurations drift, defenses must be tested against those specific changes in the moment. Not at your next quarterly penetration test, whenever someone can find an open slot on your calendar.

The whole concept of “scheduled validation” assumes a stable threat landscape, and today that assumption is dead on arrival.
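The shift from scheduled to signal-driven testing can be sketched as an event handler: each change event immediately queues a validation run scoped to that change. This is an illustrative sketch under assumed names (`ChangeEvent`, `SignalDrivenValidator`), not a real product API.

```python
from dataclasses import dataclass
from typing import Callable

# Signal-driven validation sketch: instead of waiting for a quarterly
# test window, every change event (new threat intel, asset change,
# config drift) triggers a validation run scoped to that change.

@dataclass
class ChangeEvent:
    kind: str        # e.g. "threat_intel", "asset_change", "config_drift"
    target: str      # the asset or control affected

class SignalDrivenValidator:
    def __init__(self, run_validation: Callable[[ChangeEvent], bool]):
        # run_validation returns True if defenses held for this change
        self.run_validation = run_validation
        self.results = []

    def on_event(self, event: ChangeEvent) -> bool:
        # Validate the specific change now, not at the next pen test.
        passed = self.run_validation(event)
        self.results.append((event, passed))
        return passed
```

The key design point is that the trigger is the change itself, so validation coverage tracks the environment rather than the calendar.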

Second: Environment-specific context over generic CVSS scores

Glasswing generates an avalanche of CVEs.

However, most vulnerability management programs still prioritize by CVSS score, a context-free metric that tells you how bad a bug is in theory, not whether it is exploitable on your infrastructure given your controls and business risks.

Context-free prioritization doesn’t just slow you down when the volume of findings suddenly jumps from hundreds to thousands. It breaks the process entirely.
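The difference between a base score and a contextual one can be made concrete with a toy scoring function. The fields and weights below are illustrative assumptions, not any standard or product formula; the point is only that environmental signals can demote a theoretically severe bug and promote a reachable one.

```python
# Toy context-aware prioritization: the CVSS base score is only a
# starting point; exploitability in *this* environment (exposure,
# compensating controls, asset criticality) decides queue order.
# Weights are illustrative, not a standard.

def contextual_priority(cvss_base: float, internet_exposed: bool,
                        control_blocks_exploit: bool,
                        asset_criticality: float) -> float:
    score = cvss_base / 10.0           # normalize base score to 0..1
    if control_blocks_exploit:
        score *= 0.1                   # a proven working control demotes hard
    if internet_exposed:
        score *= 1.5                   # reachable from outside promotes
    return round(score * asset_criticality, 3)
```

Under these assumptions, a 9.8 CVE blocked by a validated control ranks below a 6.5 CVE that is exposed with no compensating control, which is exactly the reordering a CVSS-only queue cannot express.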

Third: Closed-loop remediation without manual handoffs

The current model cannot survive in a world where attackers exploit CVEs within hours of disclosure. You know the drill:

A scanner finds a bug, an analyst prioritizes it, a ticket goes to another team, someone patches it weeks later, and no one re-verifies it.

This chain of manual handoffs is exactly where the system breaks down. If the discover-fix-revalidate cycle can only run as fast as humans shuttling tickets between queues, it cannot run anywhere near machine speed.
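Closing that loop means every finding is automatically driven to a terminal state, either revalidated or still open, with no ticket queue in between. The sketch below is a minimal illustration under assumed names (`closed_loop`, `apply_fix`, `still_exploitable`), not a description of any specific product.

```python
# Minimal closed discover->fix->revalidate loop: remediation is
# triggered and then immediately re-verified for every finding,
# so nothing exits the loop as "patched but never re-checked".

def closed_loop(findings, apply_fix, still_exploitable):
    states = {}
    for finding in findings:
        apply_fix(finding)             # remediation triggered, not ticketed
        # Revalidate right away instead of "no one re-verifies it":
        if still_exploitable(finding):
            states[finding] = "open"         # fix failed, loop again
        else:
            states[finding] = "revalidated"  # verified closed
    return states
```

In practice `apply_fix` would open tickets or push configuration changes and `still_exploitable` would re-run the original attack safely; the structural point is that verification is part of the same loop as remediation.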

This is not about buying more tools. It is about defenders exploiting their one asymmetric advantage: you know your organization’s topology, and the attacker does not.

That is a real advantage, but only if you can act on it at machine speed.

How autonomous exposure verification fills the gap — and where Picus comes in

This is the part where I really want to be transparent about who is writing this.

At Picus Security, we are building a platform for Autonomous Exposure Validation. To be completely clear, I have an inherently biased perspective here. Please take it accordingly.

What Glasswing has made clear to us, and to many CISOs we’ve spoken with, is that the validation step within any exposure management program is the critical bottleneck.

Although finding vulnerabilities is becoming dramatically easier and more efficient, patching them remains extremely slow.

The only middle ground is knowing which ones actually matter in your environment. That’s validation.

From 4 days to 3 minutes: How agentic workflows change the cycle

We built Picus Swarm, which powers autonomous, real-time validation by compressing the traditional four-day cycle into minutes.

It’s a set of AI agents that work together to perform tasks that previously required handoff between four separate teams.

  • Researcher agents ingest and scrutinize threat intelligence.
  • Red Teamer agents map it to your environment and generate a safety-checked attacker playbook.
  • Simulator agents run it across real endpoints and clouds, collecting telemetry and attestation data.
  • Coordinator agents bridge findings to remediation: opening tickets, triggering SOAR playbooks, pushing attack indicators to EDR, and revalidating after a fix is applied.

All actions are traceable and auditable, and all agents operate within the guardrails you define.
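The four-agent handoff described above can be sketched as a simple function pipeline. This is a toy model of the researcher, red teamer, simulator, and coordinator roles, with made-up data shapes; it is not the Picus Swarm implementation.

```python
# Toy pipeline mirroring the four-agent handoff:
# researcher -> red teamer -> simulator -> coordinator.

def researcher(intel):
    # Triage raw threat intelligence down to relevant items.
    return [t for t in intel if t.get("relevant")]

def red_teamer(threats, environment):
    # Map each relevant threat onto each asset as a playbook step.
    return [{"threat": t["name"], "target": asset}
            for t in threats for asset in environment]

def simulator(playbook):
    # "Run" each step and record whether a control blocked it
    # (here, a fake rule: EDR-covered assets block the attack).
    return [{**step, "blocked": step["target"].startswith("edr:")}
            for step in playbook]

def coordinator(results):
    # Forward only the gaps (unblocked steps) to remediation.
    return [r for r in results if not r["blocked"]]

def run_pipeline(intel, environment):
    return coordinator(simulator(red_teamer(researcher(intel), environment)))
```

Each stage consumes the previous stage's output directly, which is the point: there is no ticket queue or human handoff between the four roles.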

The entire chain from a new CISA alert to a verified, remediation-ready result takes approximately 3 minutes.

When a Mythos-class model drops thousands of findings on your organization, you need something that can instantly tell you which of them are exploitable in your environment, which controls are holding, which are failing, and what the vendor-specific fixes are.

The unpleasant truth

Project Glasswing will be measured by one metric: the number of vulnerabilities patched before they can be exploited. Not how many are found, and not how impressive the exploit chains are, but whether the ecosystem can digest what the AI produces.

Visibility alone is never enough; 83% of cybersecurity programs still fail to show measurable results. What changes the equation is closing the gap between finding a potential vulnerability and proving whether it actually compromises the environment.

That’s validation.

And in a post-Glasswing world, that is all that stands between the flood of discovery and the flood of breaches.

On May 12 and 14, we will host the Autonomous Validation Summit in collaboration with Frost & Sullivan. The summit will feature practitioners from Kraft Heinz and Grow Financial Services, along with our CTO Volkan Erturk, to dig deeper into this exact issue.

>> Click here to register.

Note: This article was written by Sıla Özeren Hacıoğlu, Security Research Engineer at Picus Security.

