2024 election security: Confronting disinformation and deepfakes


The propagation of deliberately false information has become a growing threat in recent years, with the volume of disinformation rising, catalysed by a growing number of sophisticated actors and well-orchestrated campaigns.

In a year in which more than half of the world’s population is set to head to the polls, concerns about the impact of disinformation on electoral integrity are well-founded, as the power of disinformation is magnified in the era of generative AI.

The threat of cyber-attacks can, at times, feel relentless, with recent news that Chinese state-affiliated hackers were responsible for cyber campaigns targeting British lawmakers and the UK’s electoral watchdog, compromising millions of people’s data.

Whilst there has been increasing focus on, and fear of, the potential effects of disinformation – in particular, how it can be accelerated and amplified by technology – we already have many practices and approaches in place to counter such emerging threats. The opportunity, therefore, is to adapt and scale these more effectively to help counter this growing challenge.

The challenge isn’t necessarily identifying or detecting disinformation, but understanding the degree to which its influence has already been sown. We cannot, therefore, be complacent in the face of this growing challenge and must seriously consider solutions as a necessity.

A joined-up holistic approach 

We need to consider disinformation as part of the wider cyber threat picture and stop treating the two as separate issues. Operationally, cyber and influence converge: attackers are increasingly blending these techniques and using joined-up skills and capabilities to execute. Defensively, however, approaches to countering such threats are too often siloed.

Existing frameworks for mapping the stages of cyber attacks often don’t account for the role of influence techniques within the attack lifecycle. By joining up our understanding and building capabilities that address a broader spectrum of cyber-enabled threats, we can create greater resilience to these complex challenges.

By considering the broader cyber threat context, we can also learn lessons from how the cyber security community has established and scaled effective ways of collaborating and sharing best practices, vulnerabilities and knowledge to help increase resilience and counter emerging threats.

Improving societal understanding 

This is a threat that requires greater focus and discussion. We need to help society understand this challenge, in the same way we have broadened the awareness around traditional cyber security over the past few years.

Equipping citizens with the understanding and practical tools and guidance to help them safely navigate information in a world of disinformation is critical and will go a long way to reducing society’s susceptibility to such challenges.

Governments cannot act alone 

Critically, protecting against the threat of disinformation is often assumed to be a challenge for governments alone. As we’ve learnt with cyber security, it requires collective action – across all aspects of our economy and society. Government can’t act alone and must work with industry to develop and scale technical solutions that can help in protecting against the risks from disinformation and wider cyber effects.

As it stands, the market for products and technologies is still relatively nascent – a very niche category of technology – but early adoption of promising solutions by governments and corporates can act as a strong signal to investors that there’s a sizeable opportunity worth backing with increased commitment. Because these challenges are cross-cutting and aren’t constrained to one domain, industry or sector, collaboration is key to shaping industry-standard responses.

Critical safeguarding 

We need to safeguard those facing the highest degree of risk; this includes those we rely on to discharge our democratic will, as well as journalists and other critical functions and institutions essential to a safe, effective and functioning democracy.

Both London mayor Sadiq Khan and Labour leader Sir Keir Starmer have already been targeted by deepfake attempts in recent months, with fabricated audio clips designed to damage them. In such an important election year, it’s critical that politicians and the media don’t amplify information that cannot be verified.

Ahead of the curve

Finally, we need to be ahead of the curve. We must act now because while the impact of this threat is unclear, the fact remains that the scale of the threat is almost certainly set to get worse with the proliferation of AI.

This requires clear leadership from policy makers, coupled with investment and execution by industry and academia to build and scale our collective resilience across the whole of our society. 

This year is critical for democracies around the world, and there’s much worry in the face of this emerging challenge. By acting together – and taking the steps outlined above – we can build resilience and safeguard the integrity of the processes and institutions we rely upon the most.


