
Meta on Thursday revealed that it disrupted three covert influence operations originating from Iran, China, and Romania in the first quarter of 2025.
The social media giant said in its quarterly adversarial threat report that "we detected and removed these campaigns before they were able to build authentic audiences on our apps."
This included a network of 658 Facebook accounts, 14 pages, and two Instagram accounts targeting Romania across several platforms, including Meta's own services, TikTok, X, and YouTube. One of the pages in question had around 18,300 followers.
The threat actors behind the activity used fake accounts to manage their Facebook pages, direct users to off-platform websites, and comment on posts by politicians and news entities. The accounts posted content related to sports, travel, and local news while posing as locals living in Romania.

While most of these comments received no engagement from authentic audiences, Meta said the fictitious personas also maintained a corresponding presence on other platforms.
"The campaign demonstrated consistent operational security (OpSec) to conceal its origin and coordination, including relying on proxy IP infrastructure," the company said. "The people behind this effort posted about news and current events, mainly in Romanian, including Romania's elections."
The second influence network dismantled by Meta originated from Iran and targeted Azeri-speaking audiences in Azerbaijan and Turkey, with activity spanning its own platforms as well as X and YouTube. It consisted of 17 Facebook accounts, 22 Facebook pages, and 21 Instagram accounts.
Fake accounts created by the operation were used to post content in groups, manage pages, and comment on the network's own content in order to artificially inflate its popularity. Many of these accounts posed as female journalists and pro-Palestinian activists.
"This operation also used popular hashtags like #Palestine, #Gaza, #StarBucks, and #Instagram in its posts as part of its spammy tactics in an attempt to insert itself into existing public discourse," Meta said.
"Operators posted in Azeri about news and current events, including the Paris Olympics, Israel's 2024 pager attacks, a boycott of American brands, US President Biden, and Israel's actions in Gaza."
The activity has been attributed to a known threat activity cluster dubbed Storm-2035, which was described in August 2024 as an Iranian network targeting US voter groups with "polarizing messaging" on presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.
Meanwhile, artificial intelligence (AI) company OpenAI has also revealed that it banned ChatGPT accounts created by Storm-2035, which weaponized the chatbot to generate content for sharing on social media.

Finally, Meta revealed that it removed 157 Facebook accounts, 19 pages, one group, and 17 Instagram accounts targeting audiences in Myanmar, Taiwan, and Japan. The threat actors behind the operation are known to have used AI to create profile photos and to run an "account farm" to spin up new fake accounts.
The Chinese activity comprised three separate clusters, each reposting other users' content, including news and current events, in English, Burmese, Mandarin, and Japanese in the countries they targeted.
“In Myanmar, they posted about the need to end the ongoing conflict, criticised civil resistance and shared supportive commentary on the junta,” the company said.
"In Japan, the campaign ran a page criticizing the Japanese government's military ties with the United States. It also claimed that Taiwan's politicians and military leaders are corrupt, and ran pages purporting to display anonymously submitted posts."