Fyself News
Hackers use AI to develop first known zero-day 2FA bypass for large-scale exploitation

May 11, 2026

Zero-day 2FA bypass for large-scale exploitation

Google said on Monday that it had identified an unknown attacker using a zero-day exploit that was likely developed with an artificial intelligence (AI) system, marking the first known instance of the technology being used maliciously for vulnerability discovery and exploit generation.

The activity is said to be the work of cybercriminal threat actors, who appear to have worked together to plan what the tech giant describes as a “massive vulnerability exploitation operation.”

“Analysis of exploits associated with this campaign identified a zero-day vulnerability implemented in a Python script that allows users to bypass two-factor authentication (2FA) in a popular open-source web-based systems management tool,” Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News.

The tech giant said it worked with affected vendors to responsibly disclose and fix the flaw in order to disrupt the activity. The name of the tool was not disclosed.

While there is no evidence to suggest that Google’s Gemini AI tool was used to assist the attackers, GTIG assessed with high confidence that AI models were weaponized to facilitate discovery of the flaw and generation of the exploit, via Python scripts bearing all the characteristics typically associated with large language model (LLM)-generated code.

“For example, the script is rich in educational documentation strings containing hallucinated CVSS scores and uses the structured, textbook-like Python formatting that is so characteristic of LLM training data (e.g., detailed help menus and clean ANSI color classes),” GTIG added.
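As an illustration, the stylistic fingerprints GTIG describes (verbose docstrings with invented CVSS metadata, textbook help menus, a tidy ANSI color class) tend to look like the following sketch. All names and values here are hypothetical and contain no exploit logic; this is not the actual script:

```python
"""Hypothetical script skeleton showing the stylistic fingerprints GTIG
attributes to LLM-generated exploit code. No exploit logic is included.

CVSS: 9.8 (Critical)  # a plausible-looking but invented score, the kind
                      # of "hallucinated" metadata GTIG flags
"""
import argparse


class Colors:
    """Clean, textbook-style ANSI color class, a common LLM pattern."""
    RED = "\033[91m"
    GREEN = "\033[92m"
    RESET = "\033[0m"


def build_parser() -> argparse.ArgumentParser:
    """Detailed, self-documenting help menu, another common fingerprint."""
    parser = argparse.ArgumentParser(
        description="Illustrative skeleton only; performs no network activity."
    )
    parser.add_argument("--target", help="Target host (unused placeholder).")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"{Colors.GREEN}parsed: target={args.target}{Colors.RESET}")
```

None of these traits is malicious on its own; it is their combination, in an exploit context, that GTIG treats as a telltale of LLM authorship.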

The vulnerability is described as a 2FA bypass and requires valid user credentials to exploit. It stems from flaws in high-level semantic logic arising from hard-coded trust assumptions, a class of bug that LLMs are good at discovering.
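To make that flaw class concrete, here is a minimal hypothetical sketch (invented names, not the affected tool's code) of a hard-coded trust assumption that silently disables a 2FA check:

```python
def verify_login(password_ok: bool, otp_ok: bool, client_ip: str) -> bool:
    """Hypothetical login check exhibiting the flaw class described above.

    The password check is sound, but a hard-coded trust assumption
    (treating the 10.0.0.0/8 range as "internal") lets any request
    presenting such an address skip the 2FA step entirely.
    """
    if not password_ok:
        return False
    if client_ip.startswith("10."):  # hard-coded trust: 2FA silently skipped
        return True
    return otp_ok
```

Nothing here is syntactically wrong, so signature-based scanners see nothing; spotting the bug requires reasoning about what the `client_ip` branch means semantically, which is exactly the kind of high-level logic review LLMs handle well.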

“AI is already accelerating vulnerability discovery and reducing the effort required to identify, verify, and weaponize flaws,” Ryan Dewhurst, head of threat intelligence at watchTowr, told The Hacker News in a statement. “This is the reality today. Discovery, weaponization, and exploitation are happening faster. We are not moving towards compressed timelines. We have seen timelines compressed for years. There is no mercy for attackers and no opt-out for defenders.”

The development underscores that AI not only acts as a force multiplier for vulnerability discovery and exploitation, but also enables attackers to develop polymorphic malware and conduct autonomous malware operations, as observed in the case of PromptSpy, an Android malware that leverages Gemini to analyze the current screen and provide instructions, such as pinning malicious apps to the recent apps list.

Further investigation of the backdoor revealed a broader feature set that allows the malware to navigate the Android user interface, autonomously monitor and interpret real-time user activity, and determine its next course of action using an autonomous agent module.

PromptSpy also has the ability to capture a victim’s biometric data and replay authentication gestures, such as a lock screen PIN or pattern, to regain access to a compromised device. It can also prevent its own uninstallation via an “AppProtectionDetector” module, which identifies the on-screen coordinates of the “Uninstall” button and places an invisible overlay directly above it, blocking the victim’s touch events and giving the impression that the button is unresponsive.

“Although PromptSpy initializes with hard-coded default infrastructure and credentials, the malware is designed with high operational resilience, allowing attackers to rotate critical components at runtime without having to redeploy the PromptSpy payload,” Google said.

“In particular, the malware’s command and control (C2) infrastructure, including Gemini API keys and VNC relay servers, can be updated dynamically over the C2 channel. This configuration model shows that the developers designed the backdoor to anticipate defensive countermeasures and maintain their presence even if a particular infrastructure endpoint is identified and blocked by a defender.”

Google said it took action against PromptSpy by disabling all assets related to the malicious activity. No apps containing the malware were found in the Play Store. The report also documents other instances of Gemini abuse:

  • A suspected China-linked cyber espionage group tracked as UNC2814 asked Gemini to assume the role of a network security expert in order to trigger persona-driven jailbreaks and support vulnerability probes of embedded device targets, including TP-Link firmware and Odette File Transfer Protocol (OFTP) implementations.
  • The North Korean threat actor known as APT45 (aka Andariel and Onyx Sleet) sent “thousands of repeated prompts” to recursively analyze various CVEs and validate proof-of-concept (PoC) exploits.
  • The Chinese hacking group known as APT27 used Gemini to accelerate the development of a fleet management application for administering operational relay box (ORB) networks.
  • A series of Russia-aligned intrusions targeted organizations in Ukraine to deliver AI-enabled malware dubbed CANFAIL and LONGSTREAM, both of which use LLM-generated decoy code to hide their malicious functionality.

GTIG also found threat actors experimenting with a specialized GitHub repository named “wooyun-legacy,” designed as a Claude Code Skills plugin and featuring over 5,000 real-world vulnerability cases collected by the Chinese vulnerability disclosure platform WooYun between 2010 and 2016.

“Preparing the model with vulnerability data facilitates contextual learning, which guides the model to approach code analysis like a seasoned expert and helps identify logic flaws that the base model might not prioritize,” Google explained.

Elsewhere, threat actors believed to be aligned with China are said to have deployed agent tools such as Hexstrike AI and Strix in attacks that targeted Japanese technology companies and East Asia’s leading cybersecurity platforms, performing automated detection with minimal human oversight.

Google also said it continues to see information operations (IO) actors in Russia, Iran, China, and Saudi Arabia using AI for common productivity tasks such as research, content creation, and localization, while also flagging China-linked threat activity from UNC6201 that involves the use of publicly available Python scripts to automatically register, and immediately cancel, premium LLM accounts.

“This process highlights the techniques attackers use to procure high-level AI capabilities at scale while insulating malicious activity from account bans,” GTIG noted.

“Threat actors are now seeking de-identified, premium-tier access to models through specialized middleware and auto-enrollment pipelines to fraudulently circumvent usage restrictions. This infrastructure enables large-scale abuse of the service while subsidizing operations through trial abuse and programmatic account rotation.”

Another China-related activity reported by Google comes from UNC5673 (also known as TEMP.Hex), which appears to utilize a variety of publicly available commercial tools and GitHub projects to facilitate scalable LLM exploitation.

This finding overlaps with recent reports of a thriving gray market of API relay platforms that allow local developers in China to gain unauthorized access to Anthropic Claude and Gemini. These relay or transfer stations route access to AI models through proxy servers hosted outside of mainland China. The service is advertised on Chinese online marketplaces Taobao and Xianyu.

In a study published in March 2026, scholars at the CISPA Helmholtz Center for Information Security discovered 17 shadow APIs that claim to provide access to official model services without geo-restrictions through indirect access. Performance evaluations of these services revealed evidence of model substitution, exposing AI applications to unintended safety risks.

“On high-risk medical benchmarks like MedQA, the accuracy of the Gemini-2.5 flash model dropped sharply from 83.82% for the official API to approximately 37.00% across all shadow APIs investigated,” the researchers said in their paper.
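A hedged sketch of the comparison the researchers' methodology implies: benchmark the same model name through the official endpoint and through a suspect relay, and treat a sharp accuracy drop as evidence of model substitution. The 0.15 threshold below is an illustrative assumption, not a figure from the paper:

```python
def substitution_suspected(official_acc: float, proxy_acc: float,
                           threshold: float = 0.15) -> bool:
    """Flag a shadow API whose benchmark accuracy falls far below the
    official endpoint's, suggesting a cheaper model behind the proxy.
    The 0.15 threshold is an illustrative assumption."""
    return (official_acc - proxy_acc) > threshold


# The MedQA figures reported for Gemini-2.5 Flash: 83.82% via the
# official API vs. roughly 37.00% across the shadow APIs investigated.
print(substitution_suspected(0.8382, 0.3700))  # True
```

A 47-point gap on a fixed benchmark is hard to explain by routing latency or sampling noise alone, which is why the study reads it as a different model answering behind the proxy.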

Additionally, proxy services can capture every prompt and response that passes through a server, giving operators unauthorized access to a treasure trove of data that can be used to fine-tune models or distill illegal knowledge.

In recent months, AI environments have also become a target for adversaries like TeamPCP (aka UNC6780), exposing developers to supply chain attacks and allowing attackers to penetrate deeper into compromised networks for subsequent exploits.

“For example, a threat actor with access to an organization’s AI systems could leverage internal models and tools to identify, collect, and exfiltrate sensitive information at scale, or perform reconnaissance tasks to move deep within networks,” Google said. “While the level of access and specific use is highly dependent on the organization and the specific compromised dependencies, this case study illustrates the pervasiveness of software supply chain threats to AI systems.”
