Meta, the company that owns some of the biggest social networks in use today, has explained how it means to tackle disinformation related to the upcoming EU Parliament elections, with a special emphasis on how it plans to treat AI-generated content that’s meant to deceive.
The risks associated with disinformation
Earlier this year, the World Economic Forum (WEF) put misinformation and disinformation at the top of its list of the most pressing risks the world faces in the next couple of years.
This assessment has been influenced by the fact that large language models (LLMs) are being increasingly used to fuel disinformation with AI-created images, videos and text.
Another factor is that in 2024, more than 50 countries around the world will hold national elections, and AI-fueled disinformation is expected to be widely used to sway public opinion and disrupt electoral processes.
There is no single, easy solution to this problem, especially because some solutions can be inappropriately weaponized. “There is a risk of repression and erosion of rights as authorities seek to crack down on the proliferation of false information – as well as risks arising from inaction,” the WEF noted in its Global Risks Report 2024.
Disinformation on social media
In September 2023, after sharing the results of reports on disinformation and information manipulation online by major online platforms, Věra Jourová, the Vice-President of the European Commission, said that “upcoming national elections and the EU elections will be an important test for the [Code of Practice on Disinformation] that platforms signatories should not fail.”
“Platforms will need to take their responsibility seriously, in particular in view of the Digital Services Act that requires them to mitigate the risks they pose for elections,” she added.
Many platforms have been publishing reports on their efforts to curb influence operations, disinformation and misleading content for many years, but it’s becoming obvious that they must ramp up their efforts.
Meta’s plans to curb disinformation and handle AI-generated content and ads
Meta – owner of Facebook, Instagram, WhatsApp and the newly established Threads – is preparing for the EU Parliament elections by:
- Establishing a dedicated Elections Operations Center (manned by its intelligence, research, legal and other experts to identify and mitigate potential threats in real time)
- Expanding its network of fact-checking organizations across the EU (covering content in more than 26 languages)
- Identifying, labeling, removing or down-ranking AI-generated content intended to deceive
“We remove the most serious kinds of misinformation from Facebook, Instagram and Threads, such as content that could contribute to imminent violence or physical harm, or that is intended to suppress voting,” Marco Pancini, Meta’s Head of EU Affairs, explained.
Debunked content that doesn’t violate those particular policies gets a warning label and has its distribution reduced. “When a fact-check label is placed on a post, 95% of people don’t click through to view it,” Pancini added, noting that Meta also doesn’t allow ads containing debunked content.
Content from state-controlled media is labeled as such, and its posts are demoted if they are part of deceptive campaigns or coordinated influence operations.
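Meta does not publish the internals of its ranking systems, but the mechanics described above (warning labels, reduced distribution, and a hard block for ads) can be sketched in the abstract. The toy Python example below is purely illustrative: the `Post` fields, rating names and penalty multipliers are all hypothetical, not Meta’s actual values.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    base_score: float                        # relevance score from the feed-ranking model
    fact_check_rating: Optional[str] = None  # e.g. "false", "altered", or None if not debunked
    is_ad: bool = False

# Hypothetical demotion multipliers per fact-check rating.
DEMOTION = {"false": 0.05, "altered": 0.2, "partly_false": 0.5}

def rank_score(post: Post) -> Optional[float]:
    """Label-driven demotion: debunked ads are disallowed outright,
    while debunked organic posts stay up (with a warning label) but
    are pushed far down the feed."""
    if post.fact_check_rating is not None:
        if post.is_ad:
            return None  # ads containing debunked content are not allowed
        return post.base_score * DEMOTION.get(post.fact_check_rating, 0.5)
    return post.base_score

# Example: a debunked organic post keeps only a fraction of its score.
print(rank_score(Post("p1", base_score=0.9, fact_check_rating="false")))  # 0.045
```

The design point the quote highlights is that the label itself does much of the work: since most users never open labeled content, labeling plus demotion curbs spread without outright removal.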
Finally, AI-generated content will also be reviewed by fact-checkers and appropriately labeled, removed/disallowed and/or down-ranked. (Users who post AI-generated video or audio will be able to label it as such, and in some cases will be required to.)
“We already label photorealistic images created using Meta AI, and we are building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads,” Pancini explained.
“Since AI-generated content appears across the internet, we’ve also been working with other companies in our industry on common standards and guidelines. This work is bigger than any one company and will require a huge effort across industry, government, and civil society.”
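The common standards Pancini alludes to revolve around provenance metadata: industry bodies such as the C2PA and IPTC define markers that generator tools can embed in image files, and platforms can look for those markers in uploads. As a minimal sketch (Meta’s actual detection pipeline is not public, and metadata is easy to strip, so this is a first-pass signal rather than a verdict), here is a byte-level scan for the IPTC digital-source-type URI that flags media created by a generative model:

```python
import sys

# IPTC NewsCodes "digital source type" URI that marks media created by
# a generative model; C2PA/IPTC-aligned tools embed it in XMP metadata.
AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_provenance_marker(path: str) -> bool:
    """Scan a media file's raw bytes for the IPTC AI-provenance marker.

    XMP metadata is plain XML embedded in the file, so a raw byte scan
    suffices for a rough check. Absence of the marker proves nothing,
    since metadata is trivially stripped when a file is re-encoded.
    """
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "AI-provenance marker found" if has_ai_provenance_marker(path) else "no marker"
        print(f"{path}: {verdict}")
```

Because such markers are so easily removed, a check like this can only ever be one signal among several, which is why the cross-industry effort Pancini describes is expected to pair metadata with watermarking and detection models rather than rely on metadata alone.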
Meta previously shared similar plans for other elections taking place in 2024, which also include blocking new political ads during the final week of the US election campaign.
Other social networks’ anti-disinformation plans
TikTok also recently laid out its plans to counter disinformation aimed at interfering with the 2024 European elections.
“Next month, we will launch a local language Election Centre in-app for each of the 27 individual EU Member States to ensure people can easily separate fact from fiction. Working with local electoral commissions and civil society organisations, these Election Centres will be a place where our community can find trusted and authoritative information,” said Kevin Morgan, the company’s Head of Safety & Integrity, EMEA.
Like Meta, TikTok partners with many fact-checking organizations covering content in different European languages and is working to counter misinformation. The company also plans to invest in media literacy campaigns on its platform and to detect and disrupt deceptive actors operating during elections.
Last September, Jourová noted that Elon Musk-owned X (formerly Twitter) “is the platform with the largest ratio of mis/disinformation posts.”
Though X/Twitter is no longer among the signatories of the anti-disinformation Code, it still has to comply with the EU Digital Services Act and has agreed to do so.
“With Operation Texonto and other well-set-up disinformation campaigns – as well as with the broad availability of AI models capable of creating very convincing audio, video and picture content – we’ve already got a bitter taste of what’s up ahead,” Tony Anscombe, Chief Security Evangelist for ESET, commented on Meta’s announcement.
“Social media platforms of any kind, but most notably Facebook, X and TikTok, will play a major role in the distribution of fake content so it’s late but hopefully not too late to identify and combat such threats. We welcome the initiative and look forward to hearing more about how it will work in practice in due course.”