
Contributors:
  1. Maryna Sydorova



Category: Project

Description: This project presents the results of an artificial intelligence (AI) audit conducted to explore the propensity of several large language model (LLM)-powered chatbots to reiterate claims and sources promoted by the Kremlin’s propaganda apparatus. We examine whether ChatGPT, Gemini, Copilot, and Grok return Kremlin-linked disinformation or reference Kremlin-linked disinformation sources in response to political questions, and explore both consistent and stochastic variation in their outputs. Based on 416 outputs generated in response to disinformation-related queries in the United Kingdom and Switzerland, we also explain why and under what conditions chatbots can return or reference Kremlin-linked disinformation. The UK and Swiss datasets, as well as the document describing preliminary results, are available under the ‘Files’ tab.

Files

Files can now be accessed and managed under the Files tab.


