
Openai said on Tuesday that it disrupted three activity clusters due to misuse of ChatGPT artificial intelligence (AI) tools to promote malware development.
This includes the threat actors in Russian. The Russian threat actor is said to have used chatbots to help develop and improve the Trojan horse (rat), a qualified thief intended to avoid detection. The operator also used several ChatGPT accounts to prototype and troubleshoot technical components that allow for post-explosion and credential theft.
“These accounts appear to be affiliated with Russian-speaking criminal groups because they observed them posting evidence of their activities on telegram channels dedicated to those actors,” Openai said.
The AI company said that the large-scale language model (LLMS) rejected the threat actor’s direct request to create malicious content, but bypassed the restrictions by creating building block code, it was then assembled to create workflows.
Some of the generated output included code for obfuscation, clipboard monitoring, and basic utilities for removing data using telegram bots. It is worth pointing out that none of these outputs are inherently malicious.
“Threat actors created a combination of advanced and low refinement requests. Many prompts required deep window platform knowledge and iterative debugging, while other prompts automated product tasks (such as mass password generation and scripted recruitment applications),” added Openai.
“Operators used a small number of ChatGPT accounts and repeated the same code throughout the conversation. This is a pattern that matches ongoing development rather than occasional testing.”
The second cluster of activities, born from North Korea, shared a duplicate with a campaign detailed by Trellix in August 2025, targeting the delivery of Xeno Rat using spear photography emails in Korea.

Openai said the cluster uses ChatGPT for malware and command and control (C2) development, while the actors are engaged in specific efforts such as developing MacOS Finder extensions, configuring Windows Server VPN, and converting Chrome extensions to Safari equivalents.
Additionally, threat actors have been found to use AI chatbots to draft phishing emails, experiment with cloud services and GitHub functions, and explore techniques that facilitate DLL loading, memory execution, Windows API hooks, and entitlement theft.
Openai has a third set of prohibited accounts shared with clusters that were tracked by Proofpoint under the name UNK_DROPPITCH (aka UTA0388).
The account used the tools to generate content for its phishing campaigns in English, Chinese and Japanese. It helps you with tools to accelerate everyday tasks such as remote execution using HTTPS and traffic protection. Find information related to installing open source tools such as Nuclei and FSCAN. Openai described the threat actor as “technically capable but unsleek.”
Apart from these three malicious cyber activities, the company has also blocked accounts used to operate fraud and impacts –
Networks originating from Cambodia, Myanmar and Nigeria have abused ChatGpt as part of an attempt to scam people online. These networks used AI to conduct translations, write messages, and create social media content to promote investment scams. It appears to be using ChatGPT to link to Chinese government agencies. It supports surveillance of individuals, including minority groups such as Uyghurs, and analyzes data from Western or Chinese social media platforms. Users asked tools to generate promotional material for such tools, but did not implement them using AI chatbots. The Russian origin threat actors linking to stop the news may be run by marketing companies that use AI models (and others) to generate content and videos for sharing on social media sites. The generated content criticized the role of France and the United States in Africa and Russia on the continent. It also produced English content that promotes anti-Ukurein stories. The secret influence operation, born from China, was called the codename, which used the model to generate social media content critical of Philippine President Ferdinand Marcos, and created a post about political figures and activists involved in Vietnam’s environmental impact in the South China Sea and Hong Kong’s pro-democracy movement.
In two different cases, on suspicion of Chinese accounts, ChatGpt asked to identify the source of the Mongolian petition to the organizers of the Mongolian petition and identify the source of funding for the X account that criticized the Chinese government. Openai said that the model only returns information published as an answer and does not contain sensitive information.
“This has a novel purpose [China-linked influence network was requests for advice on social media growth strategies, including how to start a TikTok challenge and get others to post content about the #MyImmigrantStory hashtag (a widely used hashtag of long standing whose popularity the operation likely strove to leverage),” OpenAI said.
“They asked our model to ideate, then generate a transcript for a TikTok post, in addition to providing recommendations for background music and pictures to accompany the post.”

OpenAI reiterated that its tools provided the threat actors with novel capabilities that they could not otherwise have obtained from multiple publicly available resources online, and that they were used to provide incremental efficiency to their existing workflows.
But one of the most interesting takeaways from the report is that threat actors are trying to adapt their tactics to remove possible signs that could indicate that the content was generated by an AI tool.
“One of the scam networks [from Cambodia] We asked the model to remove the EM dash (long dash, -) from the output, or to make it appear that it was manually removed before publication.
The Openai findings occurred when rival humanity released an open source audit tool called Petri (short for “parallel exploration tool for risky interactions”), accelerated AI safety research and better understand model behavior across a variety of categories, including deception, psychofancy, encouraging user delusions, collaborating with harmful demands, and self-oriented.
“Petri will deploy automated agents that test target AI systems through a variety of multi-turn conversations, including simulated users and tools,” Anthropic said.
“The researcher provides Petri with a list of seed instructions targeting the scenarios and behaviors they want to test. Petri operates each seed instruction in parallel. For each seed instruction, the auditor agent interacts with the target model in a tool use loop.
Source link