China ramps up use of AI misinformation


The Chinese government is ramping up its use of content generated by artificial intelligence (AI) as it seeks to further its geopolitical goals through misinformation, including targeting the upcoming US presidential election, according to intelligence published by the Microsoft Threat Analysis Centre (MTAC).

MTAC said Beijing was also using fake social media accounts to pose contentious questions on some of the big domestic issues – such as immigration from Latin America and the 2022 overturning of Roe v Wade, which gutted abortion rights in the US – that are likely to decide this year’s election. It is likely seeking both to understand the issues that are dividing Americans and to gather intelligence it can use to better manipulate various voting blocs.

“MTAC previously reported in September 2023 how CCP [Chinese Communist Party]-affiliated social media accounts had begun to impersonate US voters. This activity has continued and these accounts nearly exclusively post about divisive US domestic issues such as global warming, US border policies, drug use, immigration, and racial tensions,” wrote MTAC general manager Clint Watts.

“They use original videos, memes, and infographics as well as recycled content from other high-profile political accounts. In recent months, there has been an increase in, effectively, polling questions. This indicates a deliberate effort to understand better which US voter demographic supports what issue or position and which topics are the most divisive, ahead of the main phase of the US presidential election.”

As part of this, MTAC has also observed China increasing its use of misleading AI-generated content – something that security experts had predicted would be a big early-stage malicious use case for AI.

This content exploited a range of topics, including the November 2023 derailment of a train in Kentucky which sparked a chemical fire, the summer 2023 wildfires on the island of Maui, Hawai’i, and other major trends and stories.

Often, the narrative pushed by the Chinese operators supported false flag conspiracy theories beloved of the far right – including that the Maui fire, which killed 101 people and destroyed the town of Lāhainā, was the result of the US government testing a prototype space laser weapon. MTAC has, however, found little evidence that these efforts have done much to change public opinion.

Spamouflage

Much of this activity has been attributed to a threat actor tracked by Microsoft as Storm-1376, which is also known by others as Spamouflage and Dragonbridge.

This group is not, however, merely targeting the US – MTAC has also observed it operating closer to home in the Asia-Pacific region, notably in Taiwan, where its activity around the 13 January 2024 presidential election may represent the first documented instance of a nation-state using AI-generated content in an attempt to directly influence another country’s political process.

On 13 January, Spamouflage posted audio clips to YouTube of independent candidate Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok parent ByteDance – to generate fake news anchors used in a variety of campaigns targeting the various presidential candidates in Taiwan.

They also exploited the likeness of a well-known, Canada-based Chinese dissident to harass and influence Canadian MPs, and used AI to generate memes portraying Taiwan’s Democratic Progressive Party candidate William Lai in a negative light.

But it is not just Chinese threat actors who are incorporating AI into their playbooks – although North Korea is isolated and impoverished, threat actors working for its government are also learning how to use large language models (LLMs) to enhance the efficiency and effectiveness of their operations.

MTAC said it had observed a group tracked as Emerald Sleet using ChatGPT to craft spear-phishing campaigns that targeted experts on Korean affairs, to research vulnerabilities that might be of use, and to conduct reconnaissance on organisations focused on North Korea.

Emerald Sleet has also been observed using LLMs to troubleshoot technical problems, write basic scripts, and draft other content. OpenAI shut down the group’s accounts earlier in 2024.

The full MTAC report features more detail on more traditional Chinese threat actors – such as the notorious Volt Typhoon, which targets critical infrastructure and government agencies in the West for cyber intrusions – and on North Korea’s cryptocurrency-focused hackers, whose goals centre on propping up Kim Jong-Un’s government through financial crime.

Looking ahead, Watts said that with billions of people living in countries of interest to China, including India, South Korea, the US and potentially the UK, heading to the polls in 2024, it is likely that both China and North Korea will actively try to target these elections.

“China will, at a minimum, create and amplify AI-generated content that benefits their positions in these high-profile elections. While the impact of such content in swaying audiences remains low, China’s increasing experimentation in augmenting memes, videos, and audio will continue – and prove effective down the line,” he wrote.

“While Chinese cyber actors have long conducted reconnaissance of US political institutions, we are prepared to see influence actors interact with Americans for engagement and to potentially research perspectives on US politics.”


