Since the 2016 election of Donald Trump, there has been some debate over how effective Russian propaganda has been at swaying the opinions of American voters. It was well-documented back in those days that Russia employed large IT companies, most infamously the anodyne-sounding Internet Research Agency, with the sole remit of churning out divisive, pro-Russia content targeted at Americans, but quantifying the impact has always been imprecise. It surely has some impact, in the very least, at hardening views that conform with ones beliefs. Most people are not going to go through the work of fact-checking everything they read, and X’s community notes system is broken.
Either way, the Kremlin continues to employ disinformation, and a new report from NewsGuard has documented the country’s pivot away from directly targeting humans with content and instead going after AI models that many now use to bypass media websites altogether. According to NewsGuard’s research, a propaganda network called Pravda produced more than 3.6 million articles in 2024 alone, which it found are now incorporated into the 10 largest AI models, including ChatGPT, xAI’s Grok, and Microsoft Copilot.
Here is more:
The NewsGuard audit found that the chatbots operated by the 10 largest AI companies collectively repeated the false Russian disinformation narratives 33.55 percent of the time, provided a non-response 18.22 percent of the time, and a debunk 48.22 percent of the time.
All 10 of the chatbots repeated disinformation from the Pravda network, and seven chatbots even directly cited specific articles from Pravda as their sources.
NewsGuard calls this new tactic “AI grooming,” as models increasingly rely on RAG, or retrieval augmented generation, to produce articles using real-time information from around the web. By spinning up websites under seemingly legitimate-looking websites, the models are ingesting and regurgitating information they do not understand is propaganda.
NewsGuard cited a specific claim that Ukranian President Volodymyr Zelensky banned Truth Social, the social network affiliated with President Trump. The allegation is provably false, as President Trump’s company has never made Truth Social available in Ukraine. And yet:
Six of the 10 chatbots repeated the false narrative as fact, in many cases citing articles from the Pravda network. Chatbot 1 responded, “Zelensky banned Truth Social in Ukraine reportedly due to the dissemination of posts that were critical of him on the platform. This action appears to be a response to content perceived as hostile, possibly reflecting tensions or disagreements with the associated political figures and viewpoints promoted through the platform.”
Last year, U.S. intelligence agencies linked Russia to viral disinformation spread about Democratic vice-presidential candidate Tim Walz. Microsoft said a viral video that claimed Harris left a woman paralyzed in a hit-and-run accident 13 years ago was Russian disinformation.
And in case there is any doubt that Russia is participating in this type of behavior targeted at AI models, NewsGuard referenced a speech given last year to Russian officials by John Mark Dougan, an American fugitive turned Moscow propagandist in which he remarked, “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.”
The latest propaganda operation has been linked to an innocuous sounding IT firm called TigerWeb, which intelligence agencies have linked to foreign interference and is based in Russian-held Crimea. Experts have long said Russia relies on third-party organizations to conduct this type of work so it can claim ignorance of the practice. TigerWeb shares an IP address with propaganda websites that use the Ukranian .ua TLD.
Social networks, including X, have been flooded with claims that President Zelensky has stolen military aid to enrich himself, another claim NewsGuard cited as originating from these websites.
There is a concern that those who control the AI models will someday have power over individual opinions and ways of life. Meta, Google, and xAI are among those that control the biases and behavior of models that they hope will power the web. After xAI’s Grok model was criticized for being too “woke,” Elon Musk set about tinkering with the model’s outputs, directing training staff to look out for “woke ideology” and “cancel culture,” essentially suppressing information he does not agree with. OpenAI’s Sam Altman said recently he would make ChatGPT less restrictive in what it says.
Research has found that more than half of Google searches are “zero click,” meaning they do not lead to a website click. And many people on social media have expressed sentiment that they would rather look at an AI overview than click through to a website out of laziness (Google began rolling out an “AI Mode” in search recently). Standard media literacy advice, like gutting checking a website to see if it appears legitimate, goes out the window when people are just reading AI summaries. AI models continue to have ineradicable flaws, but people trust them because they write in an authoritative manner.
Google has traditionally used various signals to rank the legitimacy of websites in search. It is unclear how these signals apply in its AI models, but early gaffs suggest its Gemini model has a lot of problems determining reputability. Most models still often cite less familiar websites alongside well-known, credible sources.
This all comes as President Trump has taken a combative stance towards Ukraine, halting information sharing and berating the leader in a White House meeting over the belief he has not shown enough fealty to the United States and an unwillingness to surrender to Russian demands.
Read the full NewsGuard report here.