
Anthropic on Wednesday revealed that in July 2025 it disrupted a sophisticated operation that weaponized its AI-powered chatbot Claude.
“The actors targeted at least 17 different organizations, including healthcare, emergency services, government, and religious institutions,” the company said. “Rather than encrypt the stolen information with traditional ransomware, the actor threatened to publicly expose the data in an attempt to extort victims into paying ransoms in excess of $500,000.”
“The actor used Claude Code on Kali Linux as a comprehensive attack platform, embedding operational instructions in a CLAUDE.md file that provides persistent context for every interaction.”
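For context, CLAUDE.md is a standard Claude Code convention rather than something the actor invented: a markdown file in the working directory that the tool reads at the start of every session, so whatever is written there carries standing authority over the whole engagement. A benign illustration of the format follows; the contents are hypothetical and not drawn from the actor's file.

```markdown
# CLAUDE.md (read automatically by Claude Code at session start)

## Project context
Internal inventory API written in Flask; tests live under tests/.

## Standing instructions
- Run pytest before proposing any change.
- Keep diffs small and reviewable.
- Never push directly to the main branch.
```

Because the file is reloaded for every session, instructions placed in it persist across conversations, which is what made it a convenient container for the operation's playbook.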
The unknown threat actor is said to have used AI to an “unprecedented degree,” leveraging Claude Code, Anthropic's agentic coding tool, to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration.
The reconnaissance efforts included scanning thousands of VPN endpoints to flag susceptible systems, which were then used to obtain initial access, followed by user enumeration and network discovery steps, credential extraction, and the establishment of persistence on the hosts.
Additionally, the attackers used Claude Code to create bespoke versions of the Chisel tunneling utility in an effort to sidestep detection, disguising malicious executables as legitimate Microsoft tools.
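On the defensive side, masquerading of this kind is commonly caught by asking whether a binary with a Microsoft-sounding name actually carries a valid Microsoft Authenticode signature. The sketch below is illustrative rather than anything from Anthropic's report; it assumes a Windows host with PowerShell available, and the file path is hypothetical.

```python
# Illustrative triage check: wrap PowerShell's Get-AuthenticodeSignature
# cmdlet to see whether a Microsoft-looking binary is actually signed
# by Microsoft. The path below is a hypothetical example.
import json
import subprocess

def signature_info(path: str) -> dict:
    """Return the file's Authenticode status and signer subject."""
    ps = (
        f"Get-AuthenticodeSignature -FilePath '{path}' | "
        "Select-Object Status, @{n='Signer';e={$_.SignerCertificate.Subject}} | "
        "ConvertTo-Json"
    )
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# Status 0 corresponds to 'Valid' in PowerShell's SignatureStatus enum
# as serialized by ConvertTo-Json; anything else on a file named after
# a Microsoft binary deserves a closer look.
info = signature_info(r"C:\Users\Public\msupdate.exe")
if info["Status"] != 0 or "Microsoft" not in (info.get("Signer") or ""):
    print("Suspicious: Microsoft-style name, but the signature does not verify")
```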

The activity, codenamed GTG-2002, is notable for using Claude to make “tactical and strategic decisions,” allowing it to determine which data should be exfiltrated from victim networks and to craft targeted extortion demands by analyzing financial data to arrive at ransom amounts ranging from $75,000 to $500,000 in Bitcoin.
Per Anthropic, Claude Code was used to organize the stolen data for monetization, pulling thousands of individual records, including personal identifiers, addresses, financial information, and medical records, from multiple victims. The actor then used the tool to create a multi-tiered extortion strategy based on customized ransom notes and analysis of the exfiltrated data.
“Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators,” Anthropic said. “This makes defense and enforcement more difficult, as these tools can adapt in real time to defensive measures such as malware detection systems.”
To mitigate such “vibe hacking” threats in the future, the company said it has developed custom classifiers to screen for similar behavior and shared technical indicators with “key partners.”
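Anthropic has not published the classifiers themselves, but the general shape of such a screen is well understood: score conversation text for intrusion-workflow language and route high scorers to human review. Below is a minimal sketch of the idea; the training data, features, and threshold are entirely illustrative stand-ins, not Anthropic's.

```python
# Minimal sketch of a misuse-screening classifier: TF-IDF features
# feeding a logistic regression scorer. All examples are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled transcripts: 1 = misuse-like, 0 = benign.
texts = [
    "enumerate vpn endpoints, harvest credentials, set persistence on hosts",
    "draft a ransom note demanding bitcoin based on their revenue figures",
    "help me write unit tests for my flask inventory api",
    "explain how tls certificate pinning works",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an incoming conversation turn; above a tuned threshold it would
# be routed to human review rather than auto-blocked outright.
turn = ["scan these hosts, then stage the stolen records for exfiltration"]
score = model.predict_proba(turn)[0, 1]
print(f"misuse score: {score:.2f}")
```

A production system would train on far more data and combine many signals, but the routing pattern, a classifier score first and human judgment second, is the part that scales.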
Below is a list of other documented misuses of Claude –
- Use of Claude by North Korean operatives in connection with fraudulent remote IT worker schemes
- Use of Claude by a U.K.-based cybercriminal, codenamed GTG-5004, to develop ransomware with evasion capabilities, encryption, and anti-recovery mechanisms, which was then sold on darknet forums such as Dread and CryptBB to other threat actors for $400 to $1,200
- Use of Claude by Chinese threat actors to enhance cyber operations targeting critical infrastructure in Vietnam, including agricultural systems
- Use of Claude by Russian-speaking developers to create malware with advanced evasion capabilities
- Use of the Model Context Protocol (MCP) and Claude by a threat actor operating on the XSS[.]is cybercrime forum to analyze stealer logs and build detailed victim profiles
- Use of Claude Code by a Spanish-speaking actor to maintain and improve a web service engaged in validating and reselling stolen credit cards
- Use of Claude to launch an operational synthetic identity service, operating under the name “Card Checker,” that rotates between three card verification services

The company also said it hampered attempts by North Korean threat actors linked to the Contagious Interview campaign to create accounts on the platform in order to enhance their malware toolset, craft phishing lures, and generate npm packages.
The case studies add to growing evidence that AI systems are being abused despite the various guardrails baked into them, enabling sophisticated schemes to be carried out at speed and scale.

“Criminals with few technical skills are using AI to conduct complex operations, such as the development of ransomware, that previously would have required years of training,” said Anthropic's Alex Moix, Ken Lebedev, and Jacob Klein, calling out AI's ability to lower the barriers to cybercrime.
“Cybercriminals and fraudsters incorporate AI into every stage of their operations, including profiling victims, analyzing stolen data, stealing credit card information, and creating false identities that allow fraudulent operations to expand their reach to more potential targets.”