
The China National Computer Network Emergency Response Technology Team (CNCERT) has issued a warning about security stemming from the use of OpenClaw (formerly Clawdbot and Moltbot), an open source, self-hosted autonomous artificial intelligence (AI) agent.
In a post shared on WeChat, CNCERT noted that the platform’s “inherently weak default security configuration” coupled with privileged access to the system to facilitate autonomous task execution capabilities could be explored by malicious parties to seize control of endpoints.
This includes risks arising from prompt injection. With prompt injection, malicious instructions embedded within a web page can potentially expose sensitive information if an agent is tricked into accessing and consuming the content.
This attack is also known as Indirect Prompt Injection (IDPI) or Cross-Domain Prompt Injection (XPIA). Instead of directly interacting with large-scale language models (LLMs), attackers are armed with benign AI capabilities such as web page summarization and content analysis to execute manipulated instructions. This ranges from circumventing AI-based ad review systems and influencing hiring decisions to generating biased responses through search engine optimization (SEO) poisoning and suppressing negative reviews.
OpenAI said in a blog post published earlier this week that prompt injection-style attacks are evolving to include elements of social engineering beyond simply placing instructions in external content.
“AI agents are increasingly able to browse the web, obtain information, and perform actions on behalf of users,” the report said. “While these features are useful, they also create new ways for attackers to try to manipulate the system.”
The risk of prompt injection in OpenClaw is not hypothetical. Last month, PromptArmor researchers discovered that the link preview feature in messaging apps like Telegram and Discord could be turned into a data leakage vector when communicating with OpenClaw via indirect prompt injection.
At a high level, an AI agent can be tricked into generating an attacker-controlled URL that, when displayed as a link preview in a messaging app, automatically sends sensitive data to that domain without the need for the link to be clicked.
“This means that agent systems with link previews do not require users to click on malicious links, and data exfiltration can occur as soon as the AI agent responds to the user,” the AI security firm said. “In this attack, the agent is manipulated to construct a URL that uses the attacker’s domain, appended with dynamically generated query parameters containing sensitive data that the model knows about the user.”

Besides the fraudulent prompts, CNCERT also highlights three other concerns.
Potential for OpenClaw to accidentally and irrevocably delete sensitive information due to a misinterpretation of your instructions. Threat actors can upload malicious skills to repositories such as ClawHub that, once installed, can execute arbitrary commands or deploy malware. Attackers could exploit recently disclosed security vulnerabilities in OpenClaw to compromise systems and leak sensitive data.
“For critical sectors such as finance and energy, such breaches can lead to the leakage of core business data, trade secrets, code repositories, or even complete paralysis of entire business systems, causing untold losses,” CNCERT added.
To combat these risks, we recommend that users and organizations tighten network controls, prevent OpenClaw’s default management ports from being exposed to the internet, isolate services into containers, avoid storing credentials in clear text, download skills only from trusted channels, disable automatic skill updates, and keep agents up to date.
The development comes as Chinese authorities move to restrict state-run companies and government agencies from running the OpenClaw AI app on office computers to contain security risks, Bloomberg reported. The ban is said to extend to military families as well.
The viral popularity of OpenClaw has also led threat actors to take advantage of this phenomenon by distributing malicious GitHub repositories masquerading as OpenClaw installers to deploy information stealers such as Atomic and Vidar Stealer, as well as Golang-based proxy malware known as GhostSocks using ClickFix-style instructions.
“This campaign did not target any specific industry, but rather broadly targeted users attempting to install OpenClaw using a malicious repository containing download instructions for both Windows and macOS environments,” Huntress said. “What made this successful is that the malware was hosted on GitHub, and this malicious repository became the top-rated candidate in Bing’s AI search results for OpenClaw Windows.”
Source link
