
Research warns that AI agents can run a self-sustaining propaganda machine

A new study from the University of Southern California warns that AI programs can now carry out propaganda campaigns without human intervention.

The study asks us to imagine a scenario in which, two weeks before a major election, thousands of posts flood X, Reddit, and Facebook, all spreading the same narrative and reinforcing one another. It might look like an organic, grassroots movement. In fact, the entire campaign is run by a group of AI agents.

This is not merely hypothetical. It is the key finding of a new paper by researchers at USC’s Information Sciences Institute, accepted for publication at the 2026 Web Conference.

The findings highlight serious concerns about how bad actors could weaponize AI to flood the internet with misinformation and manipulate public opinion.

How did the researchers come to this conclusion?

The researchers created a simulated X-like environment with 50 AI agents: 10 acting as influencers and 40 as regular users. Of the 40 regular agents, 20 shared the influencers’ views, while the other 20 opposed the campaign. The simulation was built with the PyAutogen library and ran on the Llama 3.3 70B model.
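For a sense of what such a setup involves, here is a minimal sketch of how a multi-agent simulation along these lines might be wired up with PyAutogen. The endpoint URL, model name string, and persona prompts are illustrative assumptions, not the authors’ code.

```python
# A minimal sketch (not the authors' code) of the agent setup described
# above, using the pyautogen library. The endpoint, model name, and
# persona prompts are illustrative assumptions.
from autogen import ConversableAgent

# Assumes Llama 3.3 70B is served behind an OpenAI-compatible endpoint.
llm_config = {
    "config_list": [{
        "model": "llama-3.3-70b",              # assumed deployment name
        "base_url": "http://localhost:8000/v1",
        "api_key": "not-needed",
    }]
}

def make_agent(name: str, persona: str) -> ConversableAgent:
    # human_input_mode="NEVER" keeps every agent fully autonomous.
    return ConversableAgent(
        name=name,
        system_message=(
            f"You are {name}, {persona} on a social platform. "
            "Write posts, replies, and reshares in your own voice."
        ),
        llm_config=llm_config,
        human_input_mode="NEVER",
    )

influencers = [make_agent(f"influencer_{i}", "a high-follower influencer")
               for i in range(10)]
supporters = [make_agent(f"user_{i}", "a regular user who supports the campaign")
              for i in range(20)]
opponents = [make_agent(f"user_{i + 20}", "a regular user who opposes the campaign")
             for i in range(20)]
```

From there, a simulation like this would loop over rounds, showing each agent a snapshot of the shared feed and collecting whatever it chooses to post; the paper’s exact turn-taking scheme is not reproduced here.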

The agents were then tasked with promoting a fictitious candidate, with the aim of making the campaign hashtag go viral. What followed was disturbing. The bots didn’t just follow a script: they wrote their own posts, learned what worked, and copied each other’s successful content.

One AI agent literally wrote that it wanted to retweet a teammate’s post because it was already interested in the topic. The researchers later scaled the simulation up to 500 agents and found that the results held.

Lead scientist Luca Luceri put it bluntly: “Our work shows that this is not a future threat. It is already technically possible.”

What makes these bots harder to catch?

Traditional bots are predictable. They post the same content, use the same hashtags, and follow the same patterns. Because they all follow the same script, they are easy to spot.

AI-powered bots are different. Because LLMs generate their own content, each post is slightly different, and coordination happens beneath the surface, making the conversations feel real. The result is a disinformation campaign that can operate autonomously, with minimal human intervention.

The most worrying finding: simply telling the bots who their teammates were produced almost as much coordination as active joint planning did.
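To picture the contrast, the two conditions can be paraphrased as two system-prompt variants. The wording below is illustrative, not quoted from the paper.

```python
# Illustrative paraphrase of the two prompt conditions; the exact
# wording is assumed, not taken from the paper.
EXPLICIT_PLANNING = (
    "Coordinate with your teammates: agree on the campaign hashtag, "
    "amplify each other's posts, and time your activity together."
)
TEAM_AWARENESS_ONLY = (
    # No instruction to coordinate -- yet coordination still emerged.
    "Your teammates are influencer_0 through influencer_9."
)
```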

The threat does not stop at elections. Luceri warns that the same playbook could be applied to public health, immigration, and economic policy: anywhere a manufactured consensus can shift public opinion.

Is there anything we can do to stop it?

Such campaigns are difficult for individual users to detect. The researchers put the onus on platforms, which must look beyond individual posts and focus on how accounts behave together.

According to the researchers, coordinated resharing, rapid mutual reinforcement, and converging narratives are all recognizable signals, even when the content appears genuine.
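As an illustration of what behavior-level detection could look like, the sketch below scores pairs of accounts by how often they reshare the same post within a short time window. The event format, window size, and counting rule are assumptions made for this example, not the researchers’ method.

```python
# A sketch of behavior-level detection. Input: an iterable of
# (account, post_id, timestamp) reshare events. The 300-second window
# and pairwise counting rule are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

def coordination_scores(events, window=300.0):
    """Count how often each pair of accounts reshares the same post
    within `window` seconds of each other."""
    by_post = defaultdict(list)                # post_id -> [(ts, account)]
    for account, post_id, ts in events:
        by_post[post_id].append((ts, account))

    pair_counts = defaultdict(int)
    for shares in by_post.values():
        shares.sort()                          # order by timestamp
        for (t1, a1), (t2, a2) in combinations(shares, 2):
            if a1 != a2 and t2 - t1 <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return pair_counts

# Pairs with unusually high counts form a "coordination graph"; dense
# clusters in that graph are candidates for review even when every
# individual post looks organic.
```

Scoring pairs rather than individual accounts reflects the researchers’ point: the signal lives in joint behavior, not in any single post.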

Honestly, AI has brought us into a new world, and it’s likely to get a lot darker before it gets better.
