Materials for: Co-Writing with Opinionated Language Models Affect Users' Views
Description: If large language models like GPT-3 produce some views more often than others, they may influence people's opinions on an unknown scale. This study investigates whether large language models that preferentially generate a particular opinion affect what users write and believe. In an online experiment, we asked participants (N=1,506) to reply to a post discussing whether social media is good for society. Treatment group participants saw suggestions from a writing assistant powered by a version of GPT-3 configured to support a specific side of the debate. Following the writing task, participants completed a social media attitude survey, and an independent set of judges (N=500) evaluated the opinions expressed in participants' writing. The results show that interacting with an opinionated language model affected not only the opinions participants expressed in their writing, but also shifted their opinions in the subsequent attitude survey. Drawing on the social influence literature and nudge theory, we discuss how opinionated AI language technologies may influence people's views. We discuss the wider implications of our results and conclude that the opinions built into large language models need to be monitored and engineered more carefully. If you wish your data to be removed from the repository, please contact the study's corresponding author.