We stand at a crossroads for election misinformation: on one side, our election apparatus has reached a higher level of security and is better defended from malicious attackers than ever before. On the other, the rise of artificial intelligence (AI) has created one of the most sophisticated misinformation threats yet.
The challenge
The democratization of AI has made it possible for any number of malicious actors to get into the misinformation business, from the nation-state actors tracked in the 2016 election to financially motivated criminals and political activists advancing their various agendas. In the last few years, AI-based tools have improved to the point of becoming a commodity, letting attackers scale up their capabilities quickly. AI has also become easier to use and can readily create custom identities aimed at specific targets.
The challenge, as we look at the run-up to the 2024 election, is balancing the cybersecurity needed to defend against these risks with the ability to hold a fair election that is free from interference. Cloudflare reported that from November 2022 to August 2023, it mitigated more than 60,000 daily threats to the US election groups it surveyed, including numerous denial-of-service attacks. It isn’t just the US that is of concern – in 2024, some 70 elections are scheduled across 40 different countries, including elections for many heads of state.
This makes AI a powerful platform, especially when combined with the relatively limited cybersecurity training most election workers receive. AI has also blurred the line between benign and malicious uses, weaponizing social media: it can quickly generate numerous posts containing misinformation or false claims, which users then amplify across their networks. Adding more fuel to the fire, earlier this year New York City Mayor Eric Adams officially designated social media as an environmental toxin.
Education is needed
That is not to say that AI is all bad news. With the right security guardrails in place, AI has the potential to help create a more informed electorate. Voters could use it to inform their decisions and identify the candidate that best speaks to them and their needs, or to summarize trends and analysis that were previously available only to more skilled analysts.
However, a great deal of education is also required, and it should be driven by better-informed government oversight. That is happening, albeit slowly. Late last year, the Biden campaign created a special task force to respond to AI-generated misinformation, propose legal strategies that leverage existing laws to curb it, and educate the public about deep fakes and related issues. The White House also signed an executive order last fall laying out steps to make AI safer and more trustworthy.
The issue for AI regulation in the US is that it spans the responsibilities of multiple federal agencies. And while there are currently no federal restrictions on political campaigns using AI-generated content in ads or other political materials, both Texas and California have enacted criminal penalties, and other states are considering similar laws in their current legislative cycles. The Brennan Center is tracking a number of proposed bills in Congress that would regulate deep fakes and AI algorithms.
While these laws – if enacted – would be helpful, none of them are specific to the election process. They need to be supplemented with best practices in election cybersecurity hygiene as soon as possible to help safeguard the upcoming votes.
This could include both national parties and individual candidates hiring AI and data protection officers, much as these organizations already employ physical security. Another effort comes from the Center for Internet Security, which has continued to improve its tools and resources to help election workers deploy the most secure systems possible.
We need the best of the best to defend our elections against these attacks. As is the case with non-AI-related cybersecurity, attackers only need to score once to win.