Category: Project
Description: This project presents the results of an artificial intelligence (AI) audit conducted to examine the propensity of several large language model (LLM)-powered chatbots to reiterate claims and sources promoted by the Kremlin's propaganda apparatus. We examine whether ChatGPT, Gemini, Copilot, and Grok return Kremlin-linked disinformation or reference Kremlin-linked disinformation sources in response to political questions, and explore both consistent and stochastic variation in their outputs. Based on 416 outputs generated in response to disinformation-related queries in the United Kingdom and Switzerland, we also explain why and under what conditions chatbots can return or reference Kremlin-linked disinformation. The UK and Swiss datasets, as well as the document describing the preliminary results, are available under the 'Files' tab.