AI-Powered Deception Is a Menace to Our Societies


Wherever there’s been conflict in the world, propaganda has never been far away. Travel back in time to 515 BC and read the Behistun Inscription, Persian King Darius’s autobiographical account of his rise to power. Or, more recently, compare how different newspapers report on the same war; as the saying goes, ‘The first casualty of war is the truth.’

While these forms of communication could shape people’s beliefs, they were limited in how far they could scale: a message’s power typically faded the further it traveled from its source. With social media and the online world, there are few physical limits on reach, short of someone’s internet connection dropping. Add in the rise of AI, and there is now little left to limit scale at all.

This article explores what this means for societies and organizations facing AI-powered information manipulation and deception.

The rise of the echo chamber

According to the Pew Research Center, around one-in-five Americans get their news from social media. In Europe, there has been an 11% rise in people using social media platforms to access news. AI algorithms are at the heart of this behavioral shift. Yet, unlike journalists, who are trained to present both sides of a story, and unlike regulated media outlets, these algorithms are under no such obligation. With fewer restrictions, social media platforms can focus on serving up content that their users like, want, and react to.

This focus on keeping eyeballs on the screen can create a digital echo chamber and, potentially, polarized viewpoints. People can block opinions they disagree with, while the algorithm automatically adjusts their feeds, even monitoring scrolling speed, to boost consumption. If users only ever see content they agree with, they are reaching a consensus with what the AI shows them, not with the wider world.
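To make that feedback loop concrete, here is a deliberately simplified sketch of engagement-based ranking, written in Python. It is illustrative only, not any platform’s actual algorithm; the Post type, the stance score, and the scoring function are all invented for this example. It simply shows why a system rewarded purely for engagement tends to surface more of whatever a user already agrees with.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    stance: float  # hypothetical leaning score, from -1.0 to 1.0

def engagement_score(post: Post, history: list[Post]) -> float:
    """Score a candidate post by how closely it matches past engagement."""
    if not history:
        return 0.0
    avg_stance = sum(p.stance for p in history) / len(history)
    # Posts close to the user's historical stance score highest, so the
    # feed narrows over time -- the echo-chamber effect in miniature.
    return 1.0 - abs(post.stance - avg_stance)

def rank_feed(candidates: list[Post], history: list[Post]) -> list[Post]:
    """Return candidate posts sorted by engagement score, best first."""
    return sorted(candidates, key=lambda p: engagement_score(p, history),
                  reverse=True)
```

Real recommender systems use far richer signals (dwell time, scroll speed, social graph), but the loop is the same: each interaction shifts the history, and the history shapes the next feed.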

What’s more, a growing share of that content is now generated synthetically using AI tools. This includes over 1,150 unreliable AI-generated news websites recently identified by NewsGuard, a company specializing in rating the reliability of information. With so few limits on what AI can produce, long-standing political processes are feeling the impact.

How AI is being deployed for deception

It’s fair to say that we humans are unpredictable. Our many biases and countless contradictions play out constantly in each of our brains, where billions of neurons make new connections that shape our sense of reality and, in turn, our opinions. When malicious actors add AI to this potent mix, the results include events such as:

  • Deepfake videos spreading during the US election: AI tools allow cybercriminals to create fake footage of people moving and talking from nothing more than text prompts. The tools are so easy and fast to use that no technical expertise is needed to produce realistic footage. This democratization threatens democratic processes, as shown in the run-up to the recent US election, when Microsoft highlighted activity from China and Russia in which ‘threat actors were observed integrating generative AI into their US election influence efforts.’
  • Voice cloning and what political figures say: Attackers can now use AI to clone anyone’s voice from just a few seconds of their speech. That’s what happened to Slovakian politician Michal Simecka in 2023: a fake audio recording spread online, supposedly capturing him discussing with a journalist how to rig an upcoming election. The recording was soon shown to be fake, but it surfaced only days before polling began, and some voters may have cast their ballots believing the audio was genuine.
  • LLMs faking public sentiment: Adversaries can now communicate in as many languages as their chosen LLM supports, and at almost any scale. Back in 2020, an early LLM, GPT-3, was trained to write thousands of emails to US state legislators, advocating positions from both the left and the right of the political spectrum. About 35,000 emails were sent, a mix of human-written and AI-written, and legislator response rates ‘were statistically indistinguishable’ on three of the issues raised.

AI’s impact on democratic processes

It’s still possible to identify many AI-powered deceptions, whether from a glitchy frame in a video or a mispronounced word in a speech. However, as the technology progresses, it is going to become harder, perhaps impossible, to separate fact from fiction.

Fact-checkers may be able to attach corrections to fake social media posts, and websites such as Snopes can continue debunking conspiracy theories. However, there is no way to ensure those corrections reach everyone who saw the original posts. It is also practically impossible to trace fake material back to its original source, given the number of distribution channels available.

Pace of evolution

Seeing (or hearing) is believing. I’ll believe it when I see it. Show me, don’t tell me. All these phrases rest on humans’ evolutionary understanding of the world: we choose to trust our eyes and ears.

These senses have evolved over hundreds of thousands, even millions, of years, whereas ChatGPT was released publicly only in November 2022. Our brains can’t adapt at the speed of AI, so if people can no longer trust what’s in front of them, it’s time to educate everyone’s eyes, ears, and minds.

Otherwise, organizations are left wide open to attack; after all, work is often where people spend the most time at a computer. Defending them means equipping workforces with awareness, knowledge, and skepticism when faced with content engineered to provoke action, whether it carries political messaging at election time or asks an employee to bypass procedures and make a payment to an unverified bank account.
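As an illustration of those red flags, here is a toy Python heuristic, invented for this article rather than drawn from any real product, that flags messages combining classic social-engineering ingredients: urgency, claimed authority, a payment request, and secrecy. Keyword matching is nowhere near real detection, but these are exactly the cues that awareness training teaches people to pause on.

```python
# Toy awareness-training aid: flag the classic social-engineering cues.
# The keyword lists are illustrative, not a production detection rule set.
RED_FLAGS = {
    "urgency":   ["immediately", "urgent", "before end of day", "right now"],
    "authority": ["the ceo", "the director", "head office"],
    "payment":   ["wire transfer", "bank account", "payment details", "invoice"],
    "secrecy":   ["confidential", "don't tell", "keep this between us"],
}

def red_flag_report(message: str) -> dict[str, bool]:
    """Return which red-flag categories appear in the message."""
    text = message.lower()
    return {flag: any(kw in text for kw in keywords)
            for flag, keywords in RED_FLAGS.items()}

msg = ("This is the CEO. I need an urgent wire transfer to a new bank "
       "account before end of day. Keep this confidential.")
report = red_flag_report(msg)
print(report)  # every category triggers on this message
print("escalate for verification" if sum(report.values()) >= 2 else "ok")
```

The point is not the code but the habit it encodes: when two or more of these cues appear together, the right response is to verify through a separate channel before acting.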

It also means making societies aware of the many ways malicious actors play on our natural biases, emotions, and instinct to believe what someone is telling us. These levers appear across social engineering attacks, including phishing (‘the number one internet crime type’ according to the FBI).

And it means supporting individuals to know when to pause, reflect, and challenge what they see online. One way is to simulate an AI-powered attack, so they gain first-hand experience of how it feels and what to look out for. Humans shape society; they just need help defending themselves, their organizations, and their communities against AI-powered deception.

