Bot farms have moved into the center of information warfare, using automated accounts to manipulate public opinion, influence elections, and weaken trust in institutions.
Algorithms reward noise over truth
Thales reports that in 2024, automated bot traffic accounted for 51% of all web traffic, surpassing human activity online for the first time in a decade.
As bots become more common and harder to tell from real users, people start to lose confidence in what they see online. This creates the liar's dividend, where even authentic content is questioned simply because everyone knows fakes are out there. If any critical voice or inconvenient fact can be dismissed as just a bot or a deepfake, democratic debate takes a hit.
AI-driven bots can also create the illusion of consensus. By making a hashtag or viewpoint trend, they create the impression that everyone is talking about it, or that an extreme position enjoys broader support than it actually has.
Because bots can generate content far faster than people, they can flood online spaces with sheer volume, pushing genuine conversations to the margins.
Often state-sponsored by countries like Russia, China, or Iran, these farms use either racks of smartphones controlled from a single computer or software-based systems to mimic human activity on platforms such as X, Facebook, and Instagram.
In the run-up to the UK general election, 45 bot-like accounts on X spread divisive political content. They posted about 440,000 times, reaching more than 3 billion views, then added another 170,000 posts and 1.3 billion views after the vote.
Even when people are asked to spot bots directly, the results are worrying. In one experiment on Mastodon, participants tried to identify AI bots in political discussions but were wrong 58% of the time.
The global reach of Russia’s bot farms
In recent years, Russia has led disinformation campaigns aimed at weakening democratic processes, from targeting US political movements to interfering in European elections.
In September 2024, Microsoft uncovered a Russian disinformation campaign that spread a false story accusing Kamala Harris of a 2011 hit-and-run. The group Storm-1516 made a video with an actor playing the victim and posted it on a site designed to look like a San Francisco TV station, KBSF-TV. The clip drew more than 2.7 million views and was boosted by pro-Russian networks on social media.
Similar tactics appeared in 2016 when Russian bot farms posed as American activists and amplified content that favored Donald Trump, reaching millions of voters on Facebook and Twitter.
The same tactics have appeared in Europe. Ahead of Germany’s recent election, Russian-linked bots circulated fake videos and pseudo-media to distort debate, which German authorities flagged as a coordinated interference effort.
Research shows Russia is using bot networks on Telegram to influence people in occupied parts of Ukraine. Instead of running their own channels, these bots slip into local conversations, spreading pro-Russian messages that praise daily life under occupation and cast doubt on Ukraine’s legitimacy. The goal is to make propaganda feel like it’s coming from ordinary residents.
The U.S. Department of Justice seized two domains used by Russian operators to run a bot farm built on Meliorator, an AI-driven tool designed to create fake social media personas. The accounts, many posing as Americans, pushed messages with text, images, and video that supported Kremlin objectives. Nearly 1,000 accounts linked to the operation have been suspended on X. While the farm primarily targeted X, investigators believe Meliorator can be adapted for other platforms.
Harmful bots keep outpacing platform defenses
How effectively online platforms stop malicious, bot-driven content remains an open question, even though they are the ones responsible for policing their own networks.
Harmful AI bots continue to get through the defenses of major social media platforms. Even though most have rules against automated manipulation, enforcement is weak and bots exploit the gaps to spread disinformation. Current detection systems and policies aren’t keeping up, and platforms will need stronger measures to address the problem.
X, the social media platform owned by Elon Musk, is facing the first penalties under the European Union’s Digital Services Act. The EU says its verification system diverged from industry norms and was exploited by malicious actors to mislead users.
Solving the problem requires more than deleting fake accounts. Experts call for cooperation between policymakers and tech companies, better digital literacy education, and broader public awareness. Users themselves also need to be more cautious about the information they consume and share.
Using AI to counter disinformation
AI has fueled the spread of disinformation campaigns and is now also being used to counter them. Governments, tech companies, and civil society groups are using AI tools to detect, verify, and remove false content, and to identify coordinated inauthentic behavior.
Platforms use graph analysis to spot clusters of accounts with unusual patterns, such as new profiles with AI-generated avatars engaging in synchronized posting.
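To make that idea concrete, here is a minimal sketch of one way such graph analysis can work, assuming posts have already been collected as (account, text, timestamp) records. The coordinated_clusters function, the thresholds, and the use of networkx are illustrative assumptions, not any platform's actual detection pipeline.

```python
# Minimal sketch of coordination detection via graph analysis (illustrative only).
# Assumes `posts` is a list of (account_id, text, timestamp_in_seconds) tuples
# already gathered from a platform; window and threshold values are made up.
from collections import defaultdict
from itertools import combinations

import networkx as nx

def coordinated_clusters(posts, window_seconds=60, min_shared=3):
    """Group accounts that repeatedly post identical text within a short window."""
    # Index posts by their exact text.
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    # Count how often each pair of accounts posts the same text near-simultaneously.
    pair_counts = defaultdict(int)
    for items in by_text.values():
        for (a1, t1), (a2, t2) in combinations(items, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                pair_counts[frozenset((a1, a2))] += 1

    # Link account pairs whose synchronized posting exceeds the threshold.
    graph = nx.Graph()
    for pair, count in pair_counts.items():
        if count >= min_shared:
            graph.add_edge(*pair, weight=count)

    # Each connected component is a candidate coordinated cluster for human review.
    return [set(component) for component in nx.connected_components(graph)]
```

In practice, a signal like this would be combined with others mentioned above, such as account age and AI-generated avatar detection, before any enforcement decision is made.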
According to Arkose Labs, enterprises are investing heavily in AI-powered solutions, which account for 21% of cybersecurity budgets today and are projected to reach 27% by 2026.
Global policy initiatives
The EU and the US are both moving to address bot-driven disinformation. In the EU, the Digital Services Act obliges large online platforms to assess and mitigate systemic risks such as manipulation, and to provide vetted researchers with access to platform data.
The new AI Act introduces transparency rules for generative AI, requiring that AI-generated content be clearly identifiable. The US has no single federal law; instead, agencies such as the DOJ and CISA pursue foreign bot farms, while states like California have passed bot disclosure laws.
Beyond the US and EU, international bodies are also active. NATO treats online influence operations as a security risk and works with allies to build resilience. The UN has debated AI governance and information integrity, while the G7 has committed to countering foreign information manipulation. Together these steps show that bot farms are being recognized as a global challenge.