Researchers have warned that swarms of artificial intelligence (AI) agents could soon infiltrate social media platforms en masse, spreading false stories, harassing users, and undermining democracy.
These “AI swarms” form part of a new frontier in information warfare, capable of mimicking human behavior and evading detection while creating the illusion of genuine online movements, according to a January 22 commentary published in the journal Science.
“Humans are generally conformists,” commentary co-author Jonas Kunst, a communication professor at Norway’s BI Norwegian Business School, told Live Science. “We often don’t want to admit that, and people differ somewhat, but all else being equal, we tend to believe that there is some value in what most people do. That can be hijacked relatively easily by these swarms.”
And even if you don’t get swept up in the crowd, the researchers argued, the swarm can serve as a harassment tool to shut down debate that undermines its narrative. For example, a swarm could mimic an angry mob, targeting individuals with opposing views and hounding them off the platform.
The researchers didn’t give a timeline for the arrival of AI swarms, so it’s unclear when the first agents will show up in our feeds. However, they noted that swarms are difficult to detect, so the extent to which they have already been deployed is unknown. For many, the signs of bots’ growing influence on social media are already clear, and the “dead internet” conspiracy theory, which holds that bots are responsible for a large portion of online activity and content creation, has gained traction in recent years.
Shepherd the flock
The researchers warn that the risks posed by emerging AI swarms are compounded by long-standing vulnerabilities in the digital ecosystem, which is already weakened by what they describe as “the erosion of rational and critical debate and the lack of a shared reality among the population.”
Anyone who uses social media knows it has become a deeply divisive place. The online ecosystem is already littered with automated bots, non-human accounts controlled by software, which account for more than half of all web traffic. Traditional bots can typically perform only simple tasks over and over again, such as posting the same inflammatory message. They can still cause harm by spreading misinformation or amplifying false narratives, but they are usually easy to detect and rely on humans to coordinate at scale.
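To make that contrast concrete, here is a minimal sketch of the kind of traditional bot described above. The post_to_platform() function is a hypothetical stand-in for a real social media API call, and the message and schedule are invented for illustration.

```python
import time

# A minimal sketch of a "traditional" bot: it repeats one message on a
# fixed schedule. post_to_platform() is a hypothetical placeholder, not
# a real platform API.

MESSAGE = "Everyone is lying to you about this. Wake up!"

def post_to_platform(text: str) -> None:
    # A real bot would call a social media API here; we just print.
    print(f"[posted] {text}")

def run_simple_bot(interval_seconds: float, repeats: int) -> None:
    """Post the identical message at fixed intervals.

    The verbatim repetition and clockwork timing are exactly the
    fingerprints that make such bots easy to detect.
    """
    for _ in range(repeats):
        post_to_platform(MESSAGE)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run_simple_bot(interval_seconds=1.0, repeats=3)  # short demo run
```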
Meanwhile, the next generation of AI swarms will be orchestrated by large language models (LLMs), the AI systems behind popular chatbots. According to the commentary, with LLMs at the helm, swarms will become sophisticated enough to adapt to the online communities they invade, deploying a collection of distinct personas that retain their memories and identities.
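A rough sketch of that architecture, under heavy assumptions, might look like the code below: each agent holds a stable persona and a running memory of its interactions, and an LLM (stubbed out here as generate()) writes each reply in character. The class, prompt format, and function names are all illustrative, not the commentary authors’ design.

```python
from dataclasses import dataclass, field

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., a hosted model API).
    return f"<reply conditioned on {len(prompt)} chars of context>"

@dataclass
class PersonaAgent:
    name: str
    persona: str  # e.g., "retired teacher, folksy tone"
    memory: list[str] = field(default_factory=list)

    def reply(self, thread_text: str) -> str:
        # Persona plus accumulated memory keeps the agent consistent
        # across conversations, unlike a stateless traditional bot.
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            f"Recent interactions: {self.memory[-5:]}\n"
            f"Thread: {thread_text}\n"
            "Reply in character:"
        )
        self.memory.append(f"Replied in thread: {thread_text[:60]}")
        return generate(prompt)

# A "swarm" is then simply many such agents with distinct personas.
swarm = [PersonaAgent(name=f"user{i}", persona="distinct per agent")
         for i in range(3)]
print(swarm[0].reply("Local politics thread about the new bylaw"))
```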
“We’re talking about it as a type of organism that is self-sufficient, able to self-regulate, learn, and adapt over time, thereby specializing in exploiting human vulnerabilities,” Kunst said.
This kind of mass manipulation is far from hypothetical. Last year, Reddit threatened legal action against researchers who used AI chatbots to covertly sway users of r/changemyview, a popular forum with nearly 4 million members. According to the researchers’ preliminary findings, the chatbots’ responses were three to six times more persuasive than those of human users.
A swarm could contain hundreds, thousands, or even millions of AI agents. Kunst noted that the number will scale with computing power and will also be limited by any restrictions social media companies put in place to combat swarms.
But the number of agents isn’t everything. A swarm might target a small local community group in which a sudden influx of new users would arouse suspicion, so only a handful of agents would be deployed. The researchers also noted that because swarms are more sophisticated than traditional bots, they can have more impact with fewer accounts.
“I think the more sophisticated these bots become, the fewer bots we actually need,” Daniel Schroeder, a researcher at the Norwegian technology research organization SINTEF and lead author of the commentary, told Live Science.
Next-generation bot protection
Agents have an advantage in debates with real users because they can post 24 hours a day, every day, no matter how long it takes for their narrative to catch on. The researchers added that AI’s tirelessness and persistence could be used as a weapon against limited human efforts in “cognitive warfare.”
Because social media companies want real users, not AI agents, on their platforms, the researchers predict that companies will respond to AI swarms by strengthening account authentication, forcing users to prove they are real humans. But the researchers also pointed to problems with this approach, arguing that it could suppress political dissent in countries where people rely on anonymity to speak out against their governments. Real accounts can also be hijacked or taken over, further complicating matters. Still, the researchers noted that tightening authentication would make deploying AI swarms more difficult and costly.
The researchers also suggested other measures to prevent swarms, such as scanning live traffic for statistically anomalous patterns that could indicate AI swarms, and establishing an “AI impact observatory” ecosystem where academic societies, NGOs, and other institutions can study, raise awareness of, and respond to AI swarm threats. Essentially, the researchers want to get ahead of the problem before it disrupts elections or other major events.
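As a toy illustration of the traffic-screening idea, the sketch below flags accounts whose posting intervals are suspiciously regular compared with the bursty rhythms of human users. The feature and threshold are assumptions made for illustration; real swarm detection would need far richer behavioral and content signals.

```python
import statistics

def is_suspiciously_regular(post_times: list[float],
                            min_posts: int = 10,
                            cv_threshold: float = 0.1) -> bool:
    """Flag an account whose inter-post intervals vary too little.

    Uses the coefficient of variation (stdev / mean) of the gaps
    between posts: clockwork posting yields a value near zero, while
    human posting is bursty and irregular.
    """
    if len(post_times) < min_posts:
        return False  # too little history to judge
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    mean = statistics.mean(intervals)
    if mean == 0:
        return True  # many posts at the same instant
    return statistics.stdev(intervals) / mean < cv_threshold

# Example: an account posting every ~600 seconds is flagged.
machine_like = [i * 600.0 for i in range(12)]
print(is_suspiciously_regular(machine_like))  # True
```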
“We are warning with a fair degree of certainty about future developments that could have disproportionate consequences for democracy, and we need to start preparing for them,” Kunst said. “Rather than waiting for the first large-scale event to be negatively impacted by AI swarms, we need to be proactive.”
