Americans are being taught to trust propaganda. Often, it’s not intentional. A classic bit of advice for separating propaganda from real research is “Check the citations.” If the sources support the analysis, the material can be trusted. But AI is changing the rules of the game.
In December, the White House announced new guidance to ensure that AI tools procured for government use are “truthful” and “ideologically neutral,” including transparency around citation practices. But even with this new oversight, there is a structural issue that the memo can’t fix: authoritarian states are optimizing their propaganda for AI consumption while America’s most credible news sources are actively blocking AI tools. This means that even an ideologically neutral AI directs users toward state-aligned propaganda — simply because that is what is freely available.
Those who trust AI citations wind up trusting propaganda while believing they are doing responsible research.
Most large language models (LLMs) provide sources along with their analysis. But these models do not choose what sources to cite based on credibility. Rather, they choose based on availability. Many of the best sources, like top U.S. news outlets, are behind paywalls or are blocking the automated systems that AI uses to scan and collect information. These legacy media companies are slowly litigating and negotiating individual licensing deals with AI unicorns.
Authoritarian states, on the other hand, have optimized their content for accessibility. State-run media, like Qatar’s Al Jazeera, or Russian and Chinese outlets published in English, are free. As a result, students, academics and federal analysts seeking to understand Gaza, Ukraine or Taiwan are more likely to encounter state-backed propaganda than independent journalism.
Research from the Foundation for Defense of Democracies analyzing three major LLMs (ChatGPT, Claude, and Gemini) found that 57 percent of responses to questions about current international conflicts cited state-aligned propaganda sources.
When AI tools answer questions about contested conflicts — including Gaza, Ukraine, and Taiwan — they draw on enormous training data. While not perfect, the responses are often more nuanced than any one commentator or media outlet. But LLMs then funnel their hundreds of millions of users to a narrow subset of sources that they serve up as citations. FDD research found that 70 percent of neutral questions about the Israel-Gaza conflict yielded Al Jazeera citations.
This isn’t a minor technical flaw — citations are the attribution architecture shaping what Americans learn to trust.
While Western legacy media certainly carries its own biases, there is a crucial difference between editorial bias and state-controlled narratives. In 2024 alone, Russia-backed propaganda aggregator Pravda flooded the internet with more than 3.6 million articles from pro-Kremlin influencers and government spokespeople, in order to saturate the space with pro-Russian narratives.
AI sometimes fabricates information, or “hallucinates,” and that presents real risks. But urging people to “check the linked sources” can end up steering them straight to state-controlled media. Those links aren’t citations in the traditional sense — they are traffic directions. And the traffic they generate turns into revenue, which ultimately determines which news outlets survive. AI platforms are becoming the internet’s traffic arbiters, and right now they’re systematically directing traffic away from independent journalism and toward state-controlled propaganda.
AI companies must bring credible journalism into their systems. There is no question that quality journalism requires resources and revenue to survive. Unfortunately, the licensing deals now being negotiated between LLM companies and media outlets are moving slowly. Every delay allows citation patterns to harden and leaves us increasingly vulnerable to foreign influence.
There’s no silver bullet, but a patchwork of solutions can help. The White House has already taken a strong stance by requiring agency heads to restrict AI procurement to LLMs that are “ideologically neutral” and not “in favor of ideological dogmas.” Vendors selling to the U.S. government should be required to present data on their models’ citation patterns.
An LLM literacy campaign is needed so users understand citation bias. But awareness alone isn’t enough — AI companies should give lower priority to state-controlled media in their outputs and label them as such. And as LLMs evolve from being a consumer technology into a common infrastructure like the internet itself, citation patterns should be considered in AI safety frameworks — because a healthy democratic society needs a broad array of media sources, and that means independent journalism will always need support.
Leah Siskind is director of impact and an AI research fellow at the Foundation for Defense of Democracies.
