Deep fake AI services on Telegram pose risk for elections


Deep fake technology distributed through the messaging service Telegram and other platforms is likely to unleash a torrent of artificial intelligence (AI)-generated disinformation as three billion people gear up to vote in elections across the world over the next two years.

Security analysts have identified more than 400 channels promoting deep fake services on the Telegram Messenger app, ranging from automated bots that help users create deep fake videos to individuals offering bespoke fakes made to order.

More than 3,000 repositories of deep fake technology have also been identified on GitHub, the largest web-based platform for hosting code and collaborative projects, according to research by security company Check Point.

Check Point’s chief technologist, Oded Vanunu, told Computer Weekly: “This is a time to raise the flag and say there are going to be billions of people voting in elections and there is a huge deep fake infrastructure and it’s growing.”

Deep fake services leave no digital traces, which makes them difficult to control through technological measures such as blacklisting IP addresses or identifying digital signatures, he said.

Deep fake tools, combined with networks of bots, fake accounts and anonymised services, have created an “ecosystem of deception”, he said.

Prices for deep fakes range from as little as $2 a video to $100 for a series, making it possible for bad actors to create deceptive content cheaply and in high volumes. Such content could be used to influence or destabilise elections due to be held in the world’s most populated countries over the next 12 months.

Vanunu said it was possible to use AI services advertised on GitHub and Telegram to create a deep fake video from a photograph or a deep fake audio track from a snippet of someone’s voice.

Further concerns about election manipulation were raised when OpenAI unveiled its Sora text-to-video AI in February 2024. The software can create photo-realistic videos from text prompts, making it extremely easy to produce high-quality fake footage.

Voice cloning

Voice cloning technology – which can learn the characteristics of a target’s voice, including pitch, tone and speaking style, to make fake audio clips – also poses a risk.

[Image: Telegram screenshot showing an ad for the creation of deep fakes]

Audio deep fakes are significantly easier to produce than video deep fakes, which require complex manipulation of visual data.

Deep fake audio clips of Labour leader Keir Starmer circulated in the UK just before the Labour Party conference in October 2023. They purported to capture the party leader verbally abusing staff and saying he hated Liverpool, the city hosting the conference.

And in January 2024, voters in New Hampshire received phone calls from a deep fake audio version of US president Joe Biden. In an apparent attempt to suppress turnout in the state’s primary, the fake Biden recording urged voters to stay at home.

Voice cloning is cheap, with subscriptions starting at $10 a month and rising to several hundred dollars a month for more advanced features. AI-generated speech can cost as little as $0.006 per second, or roughly $0.36 for a minute of audio.

Personalised disinformation

Deryck Mitchelson, global chief information security officer at Check Point and a cyber security advisor to the Scottish government, said AI would be used increasingly to target people with fake news tailored to their profiles and personal interests gleaned from social media posts.

AI would allow people trying to influence elections to go further by targeting an individual’s connections on social media with disinformation and misinformation.

This means that targets would be both directly influenced by posts they receive in their social media feeds and indirectly influenced by the people they know sharing deceptive content.

“Could AI steal the elections? No, I don’t think it will. Could it influence elections? There is no doubt all the elections we’ve got this year across the world will be influenced by fake news,” said Mitchelson.

AI could destabilise governments

The World Economic Forum (WEF) warned at its January meeting in Davos that AI-generated disinformation and misinformation are among the top short-term risks facing economies.

Carolina Klint, chief commercial officer for Europe at Marsh McLennan and a contributor to the WEF’s Global Risks Report 2024, told Computer Weekly that the spread of disinformation and misinformation could destabilise legitimately elected governments.

“People are starting to realise that there is a lot of influence here, a lot of swaying of public opinion, a lot of pull and push in terms of what voters decide to go for,” she said. “And once the results are collected, it will be a legitimate claim that this government was not elected fairly.”

Detecting deep fakes

The US Federal Communications Commission banned the use of AI-generated voices in robocalls in February 2024 in the wake of the deep fake Biden recording used in New Hampshire.

However, identifying deep fakes is difficult. According to Vanunu, deep fake services offered on Telegram are designed to operate in enclosed “sandboxes” that do not leave traceable digital fingerprints.

Once a deep fake is uploaded to a social media platform, such as Facebook or Instagram, the platform re-encodes the file and wraps it in its own code. This makes it impossible to recover the original metadata that would indicate the content’s origin and legitimacy, he said.
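
This stripping effect can be seen in miniature with a few lines of Python. The sketch below uses the Pillow imaging library (deepfake_frame.jpg is a hypothetical input file) to re-save a JPEG the way a platform’s ingest pipeline typically re-encodes uploads: the pixels survive, but the original EXIF metadata does not, because Pillow, like most re-encoders, does not carry it over unless explicitly told to.

```python
from PIL import Image  # Pillow imaging library

# Hypothetical input: a JPEG that still carries its original EXIF metadata.
original = Image.open("deepfake_frame.jpg")
print("EXIF bytes before re-encode:", len(original.info.get("exif", b"")))

# Re-encode the pixel data, as a platform's ingest pipeline might.
# Pillow does not copy EXIF into the new file unless explicitly asked to,
# so the re-encoded copy keeps the image but loses its provenance metadata.
original.save("reencoded.jpg", format="JPEG", quality=80)

reencoded = Image.open("reencoded.jpg")
print("EXIF bytes after re-encode:", len(reencoded.info.get("exif", b"")))  # typically 0
```

Any provenance record stored as container metadata rather than in the pixels themselves disappears in the same way, which is why re-encoding pipelines defeat simple metadata-based checks.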

In practice, this means social media services will be the frontline defence against deep fakes, whether they want that role or not.

“The platforms take the deep fake and accelerate it so that it can reach millions or thousands of millions of people in a timely manner, so the bottleneck from a technology point of view is the platform,” said Vanunu.

Digital watermarks

Mandating digital watermarks to identify AI-generated content is one solution, but not a foolproof one, as it is possible to remove digital watermarks by compressing a deep fake image.
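
That fragility is easy to demonstrate with a deliberately naive scheme. The Python sketch below (a toy least-significant-bit watermark, not how production watermarking systems work) hides one watermark bit in the lowest bit of each pixel of a greyscale image, saves the result as a lossy JPEG and reads the bits back: recovery falls to roughly chance level, meaning the watermark has been erased.

```python
import numpy as np
from PIL import Image  # Pillow imaging library

# Toy example: hide a 1-bit-per-pixel watermark in the least significant
# bit of a greyscale image, then check how much survives JPEG compression.
rng = np.random.default_rng(seed=0)
pixels = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
watermark = rng.integers(0, 2, size=(256, 256), dtype=np.uint8)

# Embed: clear each pixel's lowest bit, then write the watermark bit into it.
marked = (pixels & 0xFE) | watermark
Image.fromarray(marked, mode="L").save("marked.jpg", quality=75)  # lossy save

# Extract: read the lowest bits back out of the compressed file.
recovered = np.asarray(Image.open("marked.jpg")) & 1
match = (recovered == watermark).mean()
print(f"Watermark bits matching after compression: {match:.0%}")  # ~50% = chance
```

Robust watermarking schemes spread their signal across many pixels precisely to survive such transformations, but aggressive compression, cropping and rescaling can still degrade them.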

Technology companies will need to work with law enforcement and cyber security experts to develop ways to detect and neutralise deep fakes, said Mitchelson.

In the meantime, governments and technology companies will need to run public awareness campaigns to give people the skills to identify deep fakes and attempts at disinformation.


