
Category: Project

Description: People are increasingly likely to obtain advice from algorithms. But what does taking advice from an algorithm (as opposed to a human) reveal to others about the advice seeker's goals? In five studies (total N = 1,927), we find that observers attribute to advice seekers the primary goal that an algorithm is designed to pursue in a given situation. As a result, when explaining advice seekers' subsequent behaviors and decisions, observers take primarily this goal into account, leaving less room for other possible motives that could explain people's actions. Such secondary goals are, however, more readily taken into account when the same advice comes from a human advisor, leading to different judgments about advice seekers' motives. Specifically, advice seekers' goals were perceived differently in terms of fairness, profit-seeking, and prosociality depending on whether the advice came from an algorithm or from another human. We find that these differences are driven in part by the different expectations people hold about the type of information that algorithmic vs. human advisors take into account when making their recommendations. The presented work has implications for (algorithmic) fairness perceptions and human-computer interaction.
