Category: Project

Description: This study investigates how different interaction formats and the threat of manipulation influence the accuracy and trustworthiness of quantitative human judgments generated in small groups. Many institutional decisions are based on the aggregated judgment of small groups, such as task forces or teams that evaluate some objective criterion. Examples of such collective intelligence span eclectic contexts: politicians judge the effectiveness of measures to fight global pandemics (Haug et al., 2020), executive boards rate investment alternatives (Lovallo et al., 2020), and security expert committees assess terrorist activity (Friedman and Zeckhauser, 2014). Ideally, such an aggregated judgment should be objectively accurate and trusted by decision makers in order to correctly inform consequential decisions, e.g., which projects receive funding or where counter-terrorism personnel are deployed. Both accuracy and trustworthiness, however, may well be in question if group members have personal stakes in the decisions. These issues are aggravated if the stakes are non-transparent, i.e., if group members pursue a hidden agenda. Previous studies indicate that, depending on the arrangement of group interaction, collective intelligence in a setting threatened by manipulation due to hidden agendas is either accurate but not trusted, or well trusted but inaccurate. In particular, judgments derived by physically interacting groups enjoy high levels of trust from decision makers. Unjustifiably, however, these high levels of trust persist even when the accuracy of the interacting groups' collective judgment deteriorates in hidden agenda settings (Maciejovsky and Budescu, 2020). By contrast, highly structured, market-based information aggregation proves more robust against manipulation but lacks the trust of decision makers (Kaplan et al., in press; Maciejovsky and Budescu, 2020).
A way to extract collective intelligence from groups in hidden agenda settings that is simultaneously accurate and trusted has yet to be identified. As a consequence, many ill-informed and thus poor decisions with high societal stakes may be taken. For instance, the threat posed by terrorist suspects might be underestimated because security officials involved in the evaluation follow the hidden agenda of reducing the effort required for case handling and surveillance (Amjahid et al., 2017). From a theoretical perspective, such a hidden agenda situation cannot be clearly resolved through sophisticated mechanism design and incentive schemes alone (Wittrock, in preparation). I therefore propose an experimental investigation of group interaction to empirically identify behavioral mechanisms that, if they do not resolve, at least mitigate the negative effects of hidden agendas on accuracy and trust. Two main questions are addressed: 1. To what extent do small groups generate less accurate and less trusted aggregate judgments if some group members pursue a hidden agenda? 2. If so, does an anonymous and structured format of group interaction mitigate this effect?

License: CC-By Attribution 4.0 International

