
Google announced Thursday that it observed a North Korea-linked threat actor known as UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on targets. The disclosure comes as hacker groups continue to weaponize the tool to accelerate various stages of the cyberattack lifecycle, enable information manipulation, and even mount model extraction attacks.
“The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance,” Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. “The attacker’s target profiling included searching for information about major cybersecurity and defense companies and mapping specific technical job roles and salary information.”
The tech giant’s threat intelligence team characterized this activity as blurring the line between routine professional investigation and malicious reconnaissance, allowing state-sponsored attackers to create customized phishing personas and identify soft targets for initial breaches.
UNC2970 is the nickname assigned to a North Korean hacking group that overlaps with clusters tracked as Lazarus Group, Diamond Sleet, and Hidden Cobra. The group is best known for orchestrating a long-running campaign codenamed Operation Dream Job, which targeted the aerospace, defense, and energy sectors with malware by approaching victims under the pretext of job offers.
GTIG said UNC2970 has “consistently” focused on recruiter impersonation and defense-sector targeting in its campaigns, with the target profiling described above feeding directly into those lures.

UNC2970 is not the only threat actor to exploit Gemini to enhance its capabilities and move from initial reconnaissance to active targeting at a faster clip. Some of the other hacking teams that have integrated this tool into their workflows include:
- UNC6418 (no attribution), which conducts targeted information collection, specifically seeking out sensitive account credentials and email addresses
- Temp.HEX (aka Mustang Panda, China), which creates documents on specific individuals, including targets in Pakistan, and collects operational and structural data about separatist organizations in various countries
- APT31 (aka Judgment Panda, China), which poses as a security researcher to automate vulnerability analysis and create targeted test plans
- APT41 (China), which extracts instructions from open-source tools’ README.md pages to troubleshoot and debug exploit code
- UNC795 (China), which troubleshoots code, conducts research, and develops web shells and scanners for PHP web servers
- APT42 (Iran), which facilitates reconnaissance and targeted social engineering by creating personas designed to drive engagement from targets, as well as developing a Python-based Google Maps scraper, building a SIM card management system in Rust, and researching a proof-of-concept (PoC) exploit for the WinRAR flaw (CVE-2025-8088)
Google also said it detected malware called HONESTCUE, which leverages Gemini’s API to outsource the generation of next-stage functionality, as well as an AI-generated phishing kit (codenamed COINBAIT) that was built using Lovable AI and disguised as a cryptocurrency exchange for credential harvesting. Some of the COINBAIT-related activity is believed to be attributable to a financially motivated threat cluster known as UNC5356.

“HONESTCUE is a downloader and launcher framework that sends prompts via Google Gemini’s API and receives C# source code in response,” the company said. “However, rather than leveraging the LLM to update itself, HONESTCUE calls the Gemini API to generate code that powers a ‘stage 2’ functionality that downloads and executes additional malware.”
HONESTCUE’s fileless second stage takes the C# source code received from the Gemini API and uses the native .NET CSharpCodeProvider class to compile and execute the payload directly in memory, leaving no artifacts on disk.
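The pattern is straightforward to illustrate. Below is a minimal, benign Python sketch of the same “receive source, compile in memory, execute” flow; the real implant works with C# returned by the Gemini API and compiles it with CSharpCodeProvider, so the hard-coded source string and function names here are purely illustrative stand-ins:

```python
# Benign sketch of the in-memory "generate, compile, execute" pattern GTIG
# attributes to HONESTCUE, translated to Python for illustration. In the
# actual malware the source is C# returned by the Gemini API; here a
# hard-coded string stands in for that API response.

def fetch_generated_source() -> str:
    # Stand-in for an API call that returns source code as plain text.
    return (
        "def stage_two():\n"
        "    print('second-stage logic would run here')\n"
    )

def run_in_memory(source: str) -> None:
    # Compile the received text to a code object and execute it without
    # ever writing the source to disk -- the "fileless" property.
    code_obj = compile(source, filename="<generated>", mode="exec")
    namespace = {}
    exec(code_obj, namespace)
    namespace["stage_two"]()

run_in_memory(fetch_generated_source())
```

Because the source only ever exists as a string in memory, file-based scanning never sees the second-stage logic, which is what makes the technique attractive to malware authors.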
Google is also calling attention to a series of recent ClickFix campaigns that abuse the public sharing capabilities of generative AI services to host realistic-looking instructions for fixing common computer problems, ultimately distributing information-stealing malware. The activity was previously documented by Huntress in December 2025.
Finally, the company said it identified and thwarted a model extraction attack that systematically queried its proprietary machine learning models in an attempt to build an alternative model mirroring the target’s behavior. In this large-scale attack, Gemini was bombarded with over 100,000 prompts posing questions designed to replicate the model’s reasoning abilities across a wide range of tasks in languages other than English.
Last month, Praetorian demonstrated a PoC extraction attack in which a replica model achieved an accuracy rate of 80.1% simply by sending 1,000 queries to the victim’s API, recording the outputs, and training on them for 20 epochs.
“Many organizations believe that keeping model weights private is sufficient protection,” said security researcher Farida Shafiq. “But this creates a false sense of security. In reality, the behavior is the model. Every query-response pair is a training sample for a replica. The model’s behavior is exposed through every API response.”
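At its core, such an extraction attack is just a distillation loop: query the victim, log the input-output pairs, and fit a student model to them. The Python sketch below shows the shape of that loop under stated assumptions; the `query_victim` helper, the stand-in victim model, and the architecture and hyperparameters are illustrative placeholders rather than Praetorian’s actual setup (only the query count and epoch figures come from the PoC):

```python
# Sketch of query-based model extraction (distillation): harvest
# input/output pairs from a victim model's API, then train a local replica
# on them. The stand-in victim, architecture, and hyperparameters are
# illustrative assumptions, not Praetorian's actual configuration.
import torch
from torch.utils.data import DataLoader, TensorDataset

_victim = torch.nn.Linear(64, 10)  # stand-in for the remote proprietary model

def query_victim(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for an API call: the attacker observes only outputs,
    # never the victim's weights.
    with torch.no_grad():
        return _victim(x)

# 1. Harvest query/response pairs (the PoC used ~1,000 queries).
queries = torch.randn(1000, 64)      # attacker-chosen inputs
labels = query_victim(queries)       # victim responses become training labels

# 2. Fit a replica to the recorded pairs (the PoC trained for 20 epochs).
replica = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)
optimizer = torch.optim.Adam(replica.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(queries, labels), batch_size=32)

for epoch in range(20):
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(replica(x), y)
        loss.backward()
        optimizer.step()
```

This is exactly the dynamic Shafiq describes: every response the API returns doubles as a labeled training example, which is why rate limiting and query monitoring matter as much as keeping the weights themselves private.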
