Chatbots Are Pushing Sanctioned Russian Propaganda
ChatGPT, Gemini, DeepSeek, and Grok are serving users propaganda from Russian-backed media when asked about the invasion of Ukraine, new research finds.
OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok are pushing Russian state propaganda from sanctioned entities when asked about the war against Ukraine, citing Russian state media and sites tied to Russian intelligence or to pro-Kremlin narratives, according to a new report.
Researchers from the Institute for Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids—where searches for real-time information return few results from legitimate sources—to promote false and misleading information. Almost one-fifth of responses to questions about Russia’s war in Ukraine, across the four chatbots they tested, cited Russian state-attributed sources, the ISD research claims.
“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” says Pablo Maristany de las Casas, an analyst at the ISD who led the research. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, which is a growing concern as more people use AI chatbots as an alternative to search engines to find information in real time, the ISD claims. For the six-month period ending September 30, 2025, ChatGPT search had approximately 120.4 million average monthly active recipients in the European Union, according to OpenAI data.
The researchers asked the chatbots 300 neutral, biased, and “malicious” questions relating to the perception of NATO, peace talks, Ukraine’s military recruitment, Ukrainian refugees, and war crimes committed during the Russian invasion of Ukraine. In an experiment run in July, they used a separate account for each query and posed the questions in English, Spanish, French, German, and Italian. The same propaganda issues were still present in October, Maristany de las Casas says.
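As a concrete illustration of that setup, here is a minimal sketch of such an audit loop in Python. It is not the ISD’s actual harness: build_prompt() and ask_chatbot() are hypothetical stand-ins for the study’s real prompt wording and its per-account chatbot sessions.

```python
import itertools

LANGUAGES = ["en", "es", "fr", "de", "it"]
CATEGORIES = ["neutral", "biased", "malicious"]
TOPICS = ["NATO perception", "peace talks", "military recruitment",
          "Ukrainian refugees", "war crimes"]

def build_prompt(category: str, topic: str, lang: str) -> str:
    """Hypothetical template; the study's exact wording isn't reproduced here."""
    return f"[{lang}/{category}] question about {topic}"

def ask_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for one query sent from a fresh account/session."""
    return "Talks stalled, some say (https://news.example/ru-report) ..."

# Collect one reply per topic/category/language combination.
responses = []
for topic, category, lang in itertools.product(TOPICS, CATEGORIES, LANGUAGES):
    responses.append({
        "topic": topic, "category": category, "lang": lang,
        "reply": ask_chatbot(build_prompt(category, topic, lang)),
    })
print(len(responses), "replies collected")
```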
As part of the widespread sanctions imposed on Russia since its full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media sources for spreading disinformation and distorting facts in service of Russia’s “strategy of destabilizing” Europe and other nations.
The ISD research says chatbots cited Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and the R-FBI. Some of the chatbots also cited Russian disinformation networks and Russian journalists or influencers who amplified Kremlin narratives, the research says. Previous research has similarly found 10 of the most popular chatbots mimicking Russian narratives.
OpenAI spokesperson Kate Waters tells WIRED in a statement that the company takes steps “to prevent people from using ChatGPT to spread false or misleading information, including such content linked to state-backed actors,” adding that these are long-standing issues that the company is attempting to address by improving its model and platforms.
“The research in this report appears to reference search results drawn from the internet as a result of specific queries, which are clearly identified. It should not be confused with, or represented as referencing responses purely generated by OpenAI’s models, outside of our search functionality,” Waters says. “We think this clarification is important as this is not an issue of model manipulation.”
Neither Google nor DeepSeek responded to WIRED’s request for comment. An email from Elon Musk’s xAI said: “Legacy Media Lies.”
In a written statement, a spokesperson for the Russian Embassy in London said that it was “not aware” of the specific cases that this report details but that it opposes any attempts to censor or restrict content on political grounds. “Repression against Russian media outlets and alternative points of view deprives those who seek to form their own independent opinions of this opportunity and undermines the very principles of free expression and pluralism that Western governments claim to uphold,” the spokesperson wrote.
“It is up to the relevant providers to block access to websites of outlets covered by the sanctions, including subdomains or newly created domains and up to the relevant national authorities to take any required accompanying regulatory measures,” says a European Commission spokesperson. “We are in contact with the national authorities on this matter.”
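The Commission’s point about subdomains and newly created domains hints at why naive blocklists fall short. Below is a minimal sketch of subdomain-aware matching, assuming an illustrative and deliberately incomplete set of sanctioned outlets:

```python
from urllib.parse import urlparse

# Illustrative subset only; not an official EU sanctions list.
SANCTIONED = {"rt.com", "sputnikglobe.com", "eadaily.com"}

def is_sanctioned(url: str) -> bool:
    """True if the URL's host is a sanctioned domain or any subdomain of one."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in SANCTIONED)

assert is_sanctioned("https://de.rt.com/some-article")  # subdomain is caught
assert not is_sanctioned("https://example.com/rt.com")  # path text is ignored
```

Suffix matching handles subdomains, but newly registered mirror domains defeat any static list, which is why the Commission frames blocking as an ongoing obligation rather than a one-time filter.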
Lukasz Olejnik, an independent consultant and visiting senior research fellow at King’s College London’s Department of War Studies, says the findings “validate” and help contextualize how Russia is targeting the West’s information ecosystem. “As LLMs become the go-to reference tool, from finding information to validating concepts, targeting and attacking this element of information infrastructure is a smart move,” Olejnik says. “From the EU and US point of view, this clearly highlights the danger.”
Since Russia invaded Ukraine, the Kremlin has moved to control and restrict the free flow of information inside Russia: banning independent media, increasing censorship, curtailing civil society groups, and building more state-controlled tech. At the same time, some of the country’s disinformation networks have ramped up activity and adopted AI tools to supercharge production of fake images, videos, and websites.
Across the ISD’s findings, around 18 percent of responses, spanning all prompts, languages, and LLMs, cited state-funded Russian media, sites “linked to” Russia’s intelligence agencies, or disinformation networks, the research says. Questions about peace talks between Russia and Ukraine led to more citations of “state-attributed sources” than questions about Ukrainian refugees, for instance.
The ISD’s research claims that the chatbots displayed confirmation bias: The more biased or malicious the query, the more frequently the chatbots delivered Russian state-attributed information. Malicious queries returned Russian state-attributed content a quarter of the time, biased queries did so 18 percent of the time, and neutral queries just over 10 percent of the time. (In the research, malicious questions “demanded” answers to back up an existing opinion, whereas “biased” questions were leading but more open-ended.)
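The arithmetic behind those figures is simple rate-taking; here is a short sketch with hypothetical tallies shaped like the ISD’s reported percentages:

```python
from collections import Counter

# Hypothetical counts: per query category, how many replies were collected
# and how many cited at least one Russian state-attributed source.
queried = Counter(malicious=100, biased=100, neutral=100)
cited = Counter(malicious=25, biased=18, neutral=11)

for category in queried:
    rate = cited[category] / queried[category]
    print(f"{category:>9}: {rate:.0%} of replies cited state-attributed sources")
```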
Of the four chatbots, which are all popular in Europe and collect data in real time, ChatGPT cited the most Russian sources and was most influenced by biased queries, the research claims. Grok often linked to social media accounts that promoted and amplified Kremlin narratives, whereas DeepSeek sometimes produced large volumes of Russian state-attributed content. The researchers say Google’s Gemini “frequently” displayed safety warnings next to the findings and had the overall best results out of the chatbots they tested.
Multiple reports this year have claimed a Russian disinformation network dubbed “Pravda” has flooded the web and social media with millions of articles as part of an effort to “poison” LLMs and influence their outputs. “Having Russian disinformation be parroted by a Western AI model gives that false narrative a lot more visibility and authority, which further allows these bad actors to achieve their goals,” says McKenzie Sadeghi, a researcher and editor at media watchdog company NewsGuard, who has studied the Pravda network and Russian propaganda’s influence on chatbots. (Only two links in the ISD research could be connected back to the Pravda network, the findings say.)
Sadeghi claims the Pravda network in particular is quick to launch new domains where propaganda is published and says it can be particularly successful when there is little reliable information on a subject—the so-called data voids. “Especially related to the conflict [in Ukraine], they’ll take a term where there’s no existing reliable information about that particular topic or individual on the web and flood it with false information,” Sadeghi says. “It would require implementing continuous guardrails in order to really stay on top of that network.”
Chatbots may come under more pressure from EU regulators as their user bases grow. ChatGPT appears to have already crossed the 45 million average monthly user threshold at which the EU designates a service a Very Large Online Platform (VLOP). That status triggers specific rules that tackle the risk of illegal content and its impact on fundamental rights, public security, and well-being.
Even without a specific regulatory trigger, the ISD’s Maristany de las Casas argues that there should be consensus across companies on which sources should not be referenced or surfaced on these platforms when they are linked to foreign states known for disinformation. “It could be providing users with further context, making sure that users understand the times that these domains have a conflict and even understanding why they’re sanctioned in the EU,” he says. “It’s not only an issue of removal, it’s an issue of contextualizing further to help the user understand the sources they’re consuming, especially if these sources are appearing amongst trusted, verified sources.”
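One way to read that suggestion in code: rather than silently dropping a citation, attach a context note when its host matches a known state-linked domain. A minimal sketch follows; the labels here are illustrative, not an official registry:

```python
from urllib.parse import urlparse

# Illustrative context labels for known state-linked domains.
CONTEXT = {
    "rt.com": "Russian state-controlled outlet; sanctioned in the EU since 2022",
    "sputnikglobe.com": "Russian state-controlled outlet; sanctioned in the EU",
}

def annotate(url: str) -> str:
    """Return a citation with an inline context note when its host is known."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    note = CONTEXT.get(host)
    return f"{url}  [{note}]" if note else url

print(annotate("https://rt.com/news/example"))
```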