Google reports state-sponsored hackers are using Gemini AI to support reconnaissance and attacks

Ravi Lakshmanan | February 12, 2026 | Cyber Espionage / Artificial Intelligence

Google said Thursday that it has observed a North Korea-linked threat actor known as UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on targets. The disclosure comes as hacking groups of various stripes continue to weaponize the tool to accelerate stages of the cyberattack lifecycle, enable information manipulation, and even mount model extraction attacks.

“The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance,” Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. “The attacker’s target profiling included searching for information about major cybersecurity and defense companies and mapping specific technical job roles and salary information.”

The tech giant’s threat intelligence team characterized the activity as blurring the line between routine professional research and malicious reconnaissance, allowing state-sponsored attackers to craft tailored phishing personas and identify soft targets for initial access.

UNC2970 is the moniker assigned to a North Korean hacking group that overlaps with clusters tracked as Lazarus Group, Diamond Sleet, and Hidden Cobra. The group is best known for orchestrating a long-running campaign codenamed Operation Dream Job, which targeted the aerospace, defense, and energy sectors with malware, approaching victims under the pretext of job offers.

GTIG said UNC2970 has “consistently” focused on impersonating corporate recruiters and on defense-sector targets across its campaigns.

UNC2970 is not the only threat actor to exploit Gemini to enhance its capabilities and move from initial reconnaissance to active targeting at a faster clip. Some of the other hacking teams that have integrated this tool into their workflows include:

  • UNC6418 (unattributed), which conducts targeted information collection, specifically seeking out sensitive account credentials and email addresses
  • Temp.HEX, aka Mustang Panda (China), which builds dossiers on specific individuals, including targets in Pakistan, and collects operational and structural data on separatist organizations in various countries
  • APT31, aka Judgment Panda (China), which poses as a security researcher to automate vulnerability analysis and create targeted test plans
  • APT41 (China), which extracts instructions from open-source tools’ README.md pages to troubleshoot and debug exploit code
  • UNC795 (China), which troubleshoots code, conducts research, and develops web shells and scanners for PHP web servers
  • APT42 (Iran), which facilitates reconnaissance and targeted social engineering by creating personas designed to drive engagement from targets, as well as developing a Python-based Google Maps scraper, building a SIM card management system in Rust, and researching a proof-of-concept (PoC) for the WinRAR flaw (CVE-2025-8088)

Google also announced that it detected malware called HONESTCUE, which leverages Gemini’s API to outsource next-stage feature generation, and an AI-generated phishing kit (codenamed COINBAIT) built using Lovable AI and disguised as a cryptocurrency exchange for credential harvesting. Some of the COINBAIT-related activity is believed to be attributable to a financially motivated threat cluster known as UNC5356.

“HONESTCUE is a downloader and launcher framework that sends prompts via Google Gemini’s API and receives C# source code in response,” the company said. “However, rather than using the LLM to update itself, HONESTCUE calls the Gemini API to generate code that powers a ‘stage 2’ functionality that downloads and executes additional malware.”

HONESTCUE’s fileless second stage takes the generated C# source code received from the Gemini API and uses the canonical .NET CSharpCodeProvider framework to compile and execute the payload directly in memory, leaving no artifacts on disk.
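HONESTCUE itself is C# and its real stage-2 source comes back from the Gemini API, but the general compile-and-execute-in-memory pattern can be illustrated with a harmless Python analogue. This is a rough sketch only, not HONESTCUE code; the hardcoded `stage2_source` string stands in for the LLM-generated payload source:

```python
# Illustrative analogue only: compiling and running source code received
# as a string entirely in memory, with nothing written to disk.
# Here the stage-2 source is a benign hardcoded string; HONESTCUE
# instead receives C# source from the Gemini API.
stage2_source = """
def run_stage2():
    # In the malware this step would fetch and launch a further payload;
    # here it just returns a marker value.
    return "stage2 executed"
"""

# compile() turns the source string into a code object in memory...
code_obj = compile(stage2_source, "<in-memory>", "exec")

# ...and exec() runs it into a fresh namespace, never touching disk.
namespace = {}
exec(code_obj, namespace)
result = namespace["run_stage2"]()
print(result)  # stage2 executed
```

The key property defenders care about is that no artifact of the generated code ever appears on the filesystem, which is exactly what the `CSharpCodeProvider` approach achieves in .NET.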

Google is also calling attention to a series of recent ClickFix campaigns that abuse the public sharing features of generative AI services to host realistic-looking instructions for fixing common computer problems, ultimately distributing information-stealing malware. The activity was previously reported by Huntress in December 2025.

Finally, the company said it identified and thwarted a model extraction attack that aimed to systematically query proprietary machine learning models in order to extract information and build a substitute model mirroring the target’s behavior. In one such large-scale attempt, Gemini was hit with more than 100,000 prompts posing a series of questions aimed at replicating the model’s reasoning abilities across a wide range of tasks in languages other than English.

Last month, Praetorian devised a PoC extraction attack in which a replica model reached an accuracy rate of 80.1% simply by sending 1,000 queries to the victim’s API, recording the outputs, and training on them for 20 epochs.
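The mechanics Praetorian describes can be sketched in miniature. The snippet below is a toy illustration, not Praetorian's PoC: the "victim" is a hidden linear model exposed only through a query function, and the attacker fits a replica purely from 1,000 recorded query-response pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Victim" model: secret weights, exposed only through a query API.
secret_w = np.array([2.0, -1.0, 0.5])

def victim_api(x):
    return x @ secret_w

# Attacker: send 1,000 queries and record the responses.
queries = rng.normal(size=(1000, 3))
responses = victim_api(queries)

# Fit a replica using nothing but the query-response pairs.
replica_w, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# The replica now mimics the victim on unseen inputs.
probe = rng.normal(size=(5, 3))
agreement = np.allclose(victim_api(probe), probe @ replica_w)
print(agreement)  # True: behavior recovered without ever seeing the weights
```

A real LLM extraction is vastly harder than this linear case, but the principle Shafiq describes below is the same: every response leaks a little of the model's behavior, and enough pairs let an attacker train a substitute.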

“Many organizations believe that keeping model weights private is sufficient protection,” said security researcher Farida Shafiq. “But this creates a false sense of security. In reality, the behavior is the model. Every query-response pair is a training sample for a replica. The model’s behavior is exposed through every API response.”

