U.S. Seizes Domains Used by AI-Powered Russian Bot Farm for Disinformation


The U.S. Department of Justice (DoJ) said it seized two internet domains and searched nearly 1,000 social media accounts that Russian threat actors allegedly used to covertly spread pro-Kremlin disinformation in the country and abroad on a large scale.

“The social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives,” the DoJ said.

The bot network, comprising 968 accounts on X, is said to be part of an elaborate scheme hatched by an employee of Russian state-owned media outlet RT (formerly Russia Today), sponsored by the Kremlin, and aided by an officer of Russia’s Federal Security Service (FSB), who created and led an unnamed private intelligence organization.

Development of the bot farm began in April 2022, when the individuals procured online infrastructure while anonymizing their identities and locations. The goal of the organization, per the DoJ, was to further Russian interests by spreading disinformation through fictitious online personas representing various nationalities.

The phony social media accounts were registered using private email servers that relied on two domains – mlrtr[.]com and otanmail[.]com – that were purchased from domain registrar Namecheap. X has since suspended the bot accounts for violating its terms of service.

The information operation — which targeted the U.S., Poland, Germany, the Netherlands, Spain, Ukraine, and Israel — was pulled off using an AI-powered software package dubbed Meliorator that facilitated the “en masse” creation and operation of said social media bot farm.

“Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel,” law enforcement agencies from Canada, the Netherlands, and the U.S. said.

Meliorator includes an administrator panel called Brigadir and a backend tool called Taras, which is used to control the authentic-appearing accounts, whose profile pictures and biographical information were generated using an open-source program called Faker.
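The idea behind Faker-style profile generation can be illustrated with a small, self-contained sketch. This is not the actual tool used by the operators; the field names and word lists below are hypothetical stand-ins, and real open-source faker libraries offer far richer, locale-aware data:

```python
import random

# Hypothetical word lists standing in for a faker library's locale data.
FIRST_NAMES = ["Alex", "Jordan", "Sam", "Taylor"]
LAST_NAMES = ["Smith", "Johnson", "Lee", "Brown"]
JOBS = ["teacher", "engineer", "nurse", "writer"]
CITIES = ["Austin, TX", "Denver, CO", "Tampa, FL"]

def fake_profile(seed=None):
    """Return a dict resembling an auto-generated social media persona."""
    rng = random.Random(seed)
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    return {
        "display_name": f"{first} {last}",
        # Random digits make handles look organic and avoid collisions.
        "handle": f"@{first.lower()}{last.lower()}{rng.randint(10, 99)}",
        "bio": f"{rng.choice(JOBS).title()} from {rng.choice(CITIES)}.",
    }

print(fake_profile(seed=42))
```

Generating thousands of such profiles is trivially scriptable, which is what makes this class of tooling effective at scale: each account gets plausible, non-repeating biographical details with no human effort per persona.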


Each of these accounts had a distinct identity or “soul” based on one of three bot archetypes: those that propagate political ideologies favorable to the Russian government, those that “like” messaging already shared by other bots, and those that perpetuate disinformation shared by both bot and non-bot accounts.

While the software package was only identified on X, further analysis has revealed the threat actors’ intentions to extend its functionality to cover other social media platforms.

Furthermore, the system slipped through X’s safeguards for verifying the authenticity of users by automatically copying one-time passcodes sent to the registered email addresses and assigning proxy IP addresses to AI-generated personas based on their assumed location.

“Bot persona accounts make obvious attempts to avoid bans for terms of service violations and avoid being noticed as bots by blending into the larger social media environment,” the agencies said. “Much like authentic accounts, these bots follow genuine accounts reflective of their political leanings and interests listed in their biography.”

“Farming is a beloved pastime for millions of Russians,” RT was quoted as saying to Bloomberg in response to the allegations, without directly refuting them.

The development marks the first time the U.S. has publicly pointed fingers at a foreign government for using AI in a foreign influence operation. No criminal charges have been made public in the case, but an investigation into the activity remains ongoing.

Doppelganger Lives On

In recent months Google, Meta, and OpenAI have warned that Russian disinformation operations, including those orchestrated by a network dubbed Doppelganger, have repeatedly leveraged their platforms to disseminate pro-Russian propaganda.

“The campaign is still active as well as the network and server infrastructure responsible for the content distribution,” Qurium and EU DisinfoLab said in a new report published Thursday.

“Astonishingly, Doppelganger does not operate from a hidden data center in a Vladivostok Fortress or from a remote military Bat cave but from newly created Russian providers operating inside the largest data centers in Europe. Doppelganger operates in close association with cybercriminal activities and affiliate advertisement networks.”

At the heart of the operation is a network of bulletproof hosting providers encompassing Aeza, Evil Empire, GIR, and TNSECURITY, which have also harbored command-and-control domains for different malware families like Stealc, Amadey, Agent Tesla, Glupteba, Raccoon Stealer, RisePro, RedLine Stealer, RevengeRAT, Lumma, Meduza, and Mystic.


What’s more, NewsGuard, which provides a host of tools to counter misinformation, recently found that popular AI chatbots are prone to repeating “fabricated narratives from state-affiliated sites masquerading as local news outlets in one third of their responses.”

Influence Operations from Iran and China

The disclosure also comes as the U.S. Office of the Director of National Intelligence (ODNI) said that Iran is “becoming increasingly aggressive in their foreign influence efforts, seeking to stoke discord and undermine confidence in our democratic institutions.”

The agency further noted that the Iranian actors continue to refine their cyber and influence activities, using social media platforms and issuing threats, and that they are amplifying pro-Gaza protests in the U.S. by posing as activists online.

Google, for its part, said it blocked over 10,000 instances of Dragon Bridge (aka Spamouflage Dragon) activity across YouTube and Blogger in the first quarter of 2024. Dragon Bridge is the name given to a spammy-yet-persistent influence network linked to China that promoted narratives portraying the U.S. in a negative light, as well as content related to the elections in Taiwan and the Israel-Hamas war aimed at Chinese speakers.

In comparison, the tech giant disrupted no fewer than 50,000 such instances in 2022 and 65,000 more in 2023. In all, it has prevented over 175,000 instances over the network's lifetime to date.

“Despite their continued profuse content production and the scale of their operations, DRAGONBRIDGE achieves practically no organic engagement from real viewers,” Threat Analysis Group (TAG) researcher Zak Butler said. “In the cases where DRAGONBRIDGE content did receive engagement, it was almost entirely inauthentic, coming from other DRAGONBRIDGE accounts and not from authentic users.”
