A Year Of Elections And Misinformation


By Roman Faithfull, Cyber Intelligence Lead, Cyjax

2024 will see more elections than any other year in history: the UK, the US, Russia, India, Taiwan and more. According to AP, at least 40 countries will go to the polls this year, and some of these contests will have ramifications way beyond their national borders. This will also make 2024 a year of misinformation, as groups both within and outside these countries look to exert their influence on the democratic process.

As the US presidential election draws near, specialists caution that the threats posed by propaganda, disinformation, and conspiracy theories are more severe than ever. Actors both domestic and international are working across conventional and digital media platforms, against a backdrop of increasing authoritarianism, profound mistrust, and political and social turbulence.

Two terms are frequently conflated here: disinformation and misinformation. Disinformation is deliberately false content crafted to inflict harm, whereas misinformation is inaccurate or deceptive content shared by people who genuinely believe it to be true. It can be difficult to establish whether people are acting in good faith, so the terms are often used interchangeably, and misinformation often starts out as carefully crafted disinformation.

The overall outlook appears bleak, with governments already experiencing the effects of misinformation. The groundwork has been laid, evidenced by past initiatives that aimed to influence elections in favor of certain parties.

In 2022, the BBC launched an investigative project, creating fake accounts to follow the spread of misinformation on platforms such as Facebook, Twitter, and TikTok, and its potential political impact. Despite attempts by social media platforms to tackle this problem, it was found that false information, particularly from far-right viewpoints, remains prevalent.

Today, just two years on, the techniques and tools to manipulate information are even more advanced.

The Deceptive Side of Tech

AI is dominating every discussion of technology right now, as its uses are explored for good and ill. Spreading fake news and disinformation is one of those uses. In its 2024 Global Risks report, the World Economic Forum noted that the increasing worry regarding misinformation and disinformation primarily stems from the fear that AI, wielded by malicious individuals, could flood worldwide information networks with deceptive stories.

And last year, the UK’s National Cyber Security Centre (NCSC) released a report exploring the potential for nations like China and Russia to employ AI for voter manipulation and meddling in electoral processes.

Deepfakes have grabbed a lot of attention, but could they disrupt future elections? It’s not a future problem—we’re already here. Deepfake audio recordings mimicking Keir Starmer, the leader of the Labour Party, and Sadiq Khan, the mayor of London, have surfaced online.

The latter of these was designed to inflame tensions ahead of a day of protest in London. One of those responsible for sharing the clip apologized but added that they believed the mayor held beliefs similar to the fake audio. Even when proven false, deepfakes can remain effective in getting their message across.

Many would argue that the responsibility now falls on governments to implement measures ensuring the integrity of elections. It’s a cat and mouse game—and unfortunately, the cat is not exactly known for its swiftness.

There are myriad ways to exploit technology for electoral manipulation, and stopping all of it could simply be impossible. Regulation is out-of-date (the Computer Misuse Act was passed in 1990, though it has been updated a few times) and the wheels of government turn slowly. Creating and passing new laws is a long process involving consultation, amendment processes, and more.

But is it solely the responsibility of governments, or do others need to step up?

Is There a Solution?

Combating technology with technology is essential; there is simply too much misinformation out there for people to sift through. Some of the biggest tech companies are taking steps: Two weeks ago, a coalition of 20 tech firms including Microsoft, Meta, Google, Amazon, IBM, Adobe and chip designer Arm announced a collective pledge to tackle AI-generated disinformation during this year’s elections, with a focus on combating deepfakes.

Is this reassuring? It’s good to know that big tech firms have this problem on their radar, but tough to know how effective their efforts can be. Right now, they are just agreeing on technical standards and detection mechanisms—starting the work of detecting deepfakes is some way away.

Also, while deepfakes are perhaps uniquely disturbing, they are just one method among many effective disinformation strategies. Sophistication is not always needed for fake news to spread: rumors circulate on social media and apps like Telegram, real photos can be placed in new contexts to spread disinformation without any clever editing, and even video game footage has been passed off as evidence in ongoing wars.

Fighting Misinformation During Elections

Fighting against misinformation is extremely difficult, but it is possible. And the coalition of 20 big tech firms has the right idea—collaboration is vital.

Be proactive

A lie can travel halfway around the world while the truth is putting on its shoes, said… someone (it’s a quote attributed to many different people). By the time we react to disinformation, it’s already out there and debunking efforts are not always effective. As Brandolini’s Law states, the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

And often, when people read both the misinformation and the debunking, they only remember the lies. Warning people about what to look for in misinformation can help. Where did it originate? If it claims to be from an authoritative source, can you find the original? Is there a source at all?

Inoculate

Sander van der Linden, a professor of psychology and an expert on misinformation, recommends a similar approach to vaccinations—a weak dose of fake news to head off the incoming virus. By getting people to think about misinformation and evaluate it, and teaching people the tactics behind its creation, they can better deal with fake news stories they later encounter.

Could we create a vaccine program for fake news? Perhaps, but it requires a big effort and a lot of collaboration between different groups.

Monitor

It’s not only governments and public figures that are attacked by fake news; corporations and businesses can find themselves targets or unwitting bystanders. Telecom companies have been the subject of 5G conspiracy theories, and pharmaceutical companies have been accused of being part of, rather than helping solve, the pandemic. But the problem can get stranger still.

A pizza restaurant in Washington DC and a furniture retailer have both had to react to being accused of child trafficking thanks to bizarre rumors circulating online. What are people saying about your business? Can you react before things get out of hand?
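Monitoring for emerging rumors can be partly automated. As an illustration only, the sketch below scans a stream of posts for a brand name appearing alongside alarming keywords and flags those posts for human review; the brand name, keywords, and sample posts are all hypothetical, and a real deployment would pull posts from social media monitoring feeds rather than a hard-coded list.

```python
# Minimal sketch: flag posts that mention a brand together with an
# alarming keyword, so a human can review them before a rumor spreads.
# All names and terms below are hypothetical examples.

BRAND = "acme pizza"
ALARM_TERMS = {"trafficking", "conspiracy", "cover-up", "scandal"}

def flag_posts(posts):
    """Return the posts that mention the brand alongside an alarm term."""
    flagged = []
    for post in posts:
        text = post.lower()
        if BRAND in text and any(term in text for term in ALARM_TERMS):
            flagged.append(post)
    return flagged

posts = [
    "Great slice at Acme Pizza today!",
    "Heard Acme Pizza is part of a trafficking ring?!",
    "New phone mast conspiracy thread...",
]
print(flag_posts(posts))  # flags only the second post
```

Keyword matching like this is deliberately crude: it will produce false positives, but for early warning, a human-reviewed over-alert is far cheaper than reacting after a rumor has gone viral.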

Misinformation works for a number of reasons—people want to know “the story behind the story”, and it gives people a feeling of control when they have access to “facts” others do not—which is why misinformation spreads so fast during a pandemic that took away that feeling of control from so many of us.

Those spreading misinformation know how to tap into these fears. In cybersecurity terms, they know the vulnerabilities and how to exploit them. We can’t distribute software patches to stop these attacks, but we can make them less effective by understanding them.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything. 


