
OpenAI on Friday revealed that it had banned a set of accounts that used its ChatGPT tool to develop a suspected artificial intelligence (AI)-powered surveillance tool.
The social media listening tool is said to likely originate from China and is powered by one of Meta's Llama models, with the accounts in question using the AI company's models to generate detailed descriptions and analyze documents for an apparatus capable of collecting real-time data and reports about anti-China protests in the West and sharing the insights with Chinese authorities.
The campaign has been codenamed Peer Review owing to the network's behavior in "promoting and reviewing surveillance tooling," researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley noted, adding that the tool is designed to ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.
In one instance flagged by the company, the actors used ChatGPT to debug and modify source code that is believed to run the monitoring software, referred to as the "Qianyue Overseas Public Opinion AI Assistant."
Besides using its models as a research tool to surface publicly available information about think tanks in the United States, as well as government officials and politicians in Australia, Cambodia, and the United States, the cluster has also been found to leverage ChatGPT access to read, translate, and analyze screenshots of English-language documents.

Some of the images were announcements of Uyghur rights protests in various Western cities and may have been copied from social media. It is currently not known whether these images are authentic.
OpenAI also said it disrupted several other clusters that were found to be abusing ChatGPT for various malicious activities –
Deceptive Employment Scheme – A network from North Korea linked to the fraudulent IT worker scheme that was involved in creating personal documentation for fictitious job applicants, such as resumés, online job profiles, and cover letters, as well as devising cover stories for unusual behaviors like avoiding video calls, accessing corporate systems from unauthorized countries, or working irregular hours. Some of the bogus job applications were subsequently shared on LinkedIn.
Sponsored Discontent – A network likely of Chinese origin that was involved in creating English-language social media content critical of the United States, which was subsequently published by Latin American news websites in Mexico, Peru, and Ecuador. Some of the activity overlaps with a known activity cluster dubbed Spamouflage.
Romance-Baiting Scam – A network of accounts that was involved in translating and generating comments in Japanese, Chinese, and English for posting on social media platforms such as Facebook, X, and Instagram, in connection with suspected Cambodia-origin romance and investment scams.
Iranian Influence Nexus – A network of five accounts that was involved in generating articles that were pro-Palestinian, pro-Hamas, and pro-Iranian, as well as anti-Israel and anti-U.S., and shared on websites associated with Iranian influence operations tracked as the International Union of Virtual Media (IUVM) and Storm-2035. One of the banned accounts was used to create content for both operations, indicating a "previously unreported relationship."
Kimsuky and BlueNoroff – A network of accounts operated by North Korean threat actors that was involved in gathering information on cyber intrusion tools and cryptocurrency-related topics, and in debugging code for Remote Desktop Protocol (RDP) brute-force attacks.
Youth Initiative Covert Influence Operation – A network of accounts that was involved in creating English-language articles for a website named "Empowering Ghana" and social media comments targeting the Ghana presidential election.
Task Scam – A network of accounts originating from Cambodia that was involved in translating comments between Urdu and English as part of a scam that lures unsuspecting people into jobs performing simple tasks (e.g., liking videos or writing reviews) in exchange for a non-existent commission, which ultimately requires victims to part with their own money.
The development comes as AI tools are being increasingly used by bad actors to facilitate cyber-enabled disinformation campaigns and other malicious operations.

Last month, Google's Threat Intelligence Group (GTIG) revealed that more than 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have used its Gemini AI chatbot across multiple phases of the attack cycle, as well as to research topical events and to create, translate, and localize content.
"The unique insights that AI companies can glean from threat actors are particularly valuable if they are shared with upstream providers, such as hosting and software developers, downstream distribution platforms, such as social media companies, and open-source researchers," OpenAI said.
"Equally, the insights that upstream and downstream providers and researchers have into threat actors open up new avenues of detection and enforcement for AI companies."