
OpenAI disrupts hackers in Russia, North Korea, and China

October 8, 2025

OpenAI said on Tuesday that it disrupted three activity clusters that were misusing its ChatGPT artificial intelligence (AI) tool to aid malware development.

These include a Russian-language threat actor said to have used the chatbot to help develop and refine a remote access trojan (RAT) and a credential stealer designed to evade detection. The operator also used several ChatGPT accounts to prototype and troubleshoot technical components enabling post-exploitation activity and credential theft.

"These accounts appear to be affiliated with Russian-speaking criminal groups, as we observed them posting evidence of their activity in Telegram channels dedicated to those actors," OpenAI said.

The AI company said that while its large language models (LLMs) refused the threat actor's direct requests to create malicious content, the actor bypassed the restrictions by generating building-block code that was then assembled into working tooling.

Some of the generated output included code for obfuscation, clipboard monitoring, and basic utilities for exfiltrating data using Telegram bots. It is worth pointing out that none of these outputs was inherently malicious on its own.

"The threat actor used a mix of high- and low-sophistication requests. Many prompts required deep Windows-platform knowledge and iterative debugging, while others automated commodity tasks (such as mass password generation and scripted job applications)," OpenAI added.

"The operator used a small number of ChatGPT accounts and iterated on the same code across conversations, a pattern consistent with ongoing development rather than casual testing."

The second activity cluster, originating from North Korea, shared overlaps with a campaign detailed by Trellix in August 2025 that delivered Xeno RAT via spear-phishing emails in Korea.


OpenAI said this cluster used ChatGPT for malware and command-and-control (C2) development, with the actors engaged in specific efforts such as developing macOS Finder extensions, configuring Windows Server VPNs, and converting Chrome extensions to their Safari equivalents.

Additionally, the threat actors were found to use the AI chatbot to draft phishing emails, experiment with cloud services and GitHub features, and explore techniques facilitating DLL loading, in-memory execution, Windows API hooking, and credential theft.

OpenAI also banned a third set of accounts that overlapped with a cluster tracked by Proofpoint under the name UNK_DROPPITCH (aka UTA0388).

The accounts used its tools to generate content for phishing campaigns in English, Chinese, and Japanese; to get help with tooling that accelerates routine tasks such as remote execution and HTTPS-based traffic protection; and to find information related to installing open-source tools such as nuclei and fscan. OpenAI described the threat actor as "technically capable but unsophisticated."

Apart from these three malicious cyber activity clusters, the company also blocked accounts used to run scam and influence operations:

  • Networks originating from Cambodia, Myanmar, and Nigeria that abused ChatGPT as part of attempts to scam people online, using AI to translate, write messages, and create social media content promoting investment scams.
  • Accounts suspected of links to Chinese government entities that used ChatGPT to support surveillance of individuals, including minority groups such as Uyghurs, and to analyze data from Western or Chinese social media platforms. The users asked the tool to generate promotional materials for such surveillance tools, but did not implement them with the AI chatbot.
  • Russian-origin threat actors linked to Stop News, possibly run by a marketing company, that used OpenAI's models (among others) to generate content and videos for sharing on social media sites. The generated content criticized the role of France and the United States in Africa while promoting Russia's role on the continent, and also included English-language content pushing anti-Ukraine narratives.
  • A covert influence operation originating from China, identified by a codename in the report, that used the model to generate social media content critical of Philippine President Ferdinand Marcos, as well as posts about political figures and activists involved in Vietnam's environmental impact in the South China Sea and in Hong Kong's pro-democracy movement.

In two separate cases, accounts suspected of Chinese links asked ChatGPT to identify the organizers of a petition in Mongolia and to identify the funding sources of an X account critical of the Chinese government. OpenAI said the model returned only publicly available information in its answers and did not disclose anything sensitive.

"What was novel about this [China-linked] influence network was its requests for advice on social media growth strategies, including how to start a TikTok challenge and get others to post content about the #MyImmigrantStory hashtag (a widely used hashtag of long standing whose popularity the operation likely strove to leverage)," OpenAI said.

“They asked our model to ideate, then generate a transcript for a TikTok post, in addition to providing recommendations for background music and pictures to accompany the post.”


OpenAI reiterated that its tools did not provide the threat actors with novel capabilities they could not otherwise have obtained from publicly available resources online, and that the tools were instead used to add incremental efficiency to their existing workflows.

But one of the most interesting takeaways from the report is that threat actors are adapting their tactics to scrub telltale signs that content was generated by an AI tool.

"One of the scam networks [from Cambodia] asked the model to remove em dashes (the long dash, —) from its output, or appears to have removed them manually before publication," the report noted.
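The post-processing described here is trivial to automate. As a minimal illustration (the function name is hypothetical, not something from the report), scrubbing em dashes amounts to a one-line string replacement:

```python
def strip_em_dashes(text: str) -> str:
    """Remove em dashes (U+2014), a commonly cited marker of AI-generated text.

    Illustrative sketch only: replaces a spaced em dash with a comma-separated
    clause break, then drops any remaining bare em dashes.
    """
    return text.replace(" \u2014 ", ", ").replace("\u2014", ", ")


print(strip_em_dashes("AI tools \u2014 fast and cheap \u2014 are everywhere"))
# AI tools, fast and cheap, are everywhere
```

The point is not the code itself but how cheap the evasion is: a single pass over the output defeats any detector that leans on one stylistic tell.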

The OpenAI findings came as rival Anthropic released an open-source auditing tool called Petri (short for "Parallel Exploration Tool for Risky Interactions") to accelerate AI safety research and better understand model behavior across a variety of categories, including deception, sycophancy, encouragement of user delusion, cooperation with harmful requests, and self-preservation.

"Petri deploys automated agents that test target AI systems across diverse multi-turn conversations involving simulated users and tools," Anthropic said.

"Researchers provide Petri with a list of seed instructions targeting the scenarios and behaviors they want to test. Petri processes each seed instruction in parallel: for each one, an auditor agent interacts with the target model in a tool-use loop."
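The workflow Anthropic describes (seeds fanned out in parallel, one multi-turn auditor loop per seed) can be sketched in a few lines. This is an illustrative mock, not Petri's actual API: `run_seed`, `audit`, and the stubbed `target_model` are all hypothetical names, and a real harness would call an LLM instead of a stub.

```python
from concurrent.futures import ThreadPoolExecutor


def target_model(prompt: str) -> str:
    # Stand-in for the model under audit; a real harness would query an LLM.
    return f"response to: {prompt}"


def run_seed(seed: str, max_turns: int = 3) -> list[str]:
    """One auditor agent drives a multi-turn conversation from a single seed."""
    transcript = []
    message = seed
    for turn in range(max_turns):
        reply = target_model(message)
        transcript.append(reply)
        # A real auditor would analyze the reply and pick a targeted probe;
        # here we just issue a generic follow-up to show the loop shape.
        message = f"follow-up {turn + 1} on: {reply}"
    return transcript


def audit(seeds: list[str]) -> dict[str, list[str]]:
    # Each seed instruction is processed in parallel, as the quote describes.
    with ThreadPoolExecutor() as pool:
        return dict(zip(seeds, pool.map(run_seed, seeds)))
```

Judging the resulting transcripts for risky behavior (deception, sycophancy, and so on) is the separate scoring step the tool automates on top of this loop.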
