Examining the replicability of online experiments selected by a decision market
- Colin Camerer
- Yiling Chen
- Anna Dreber Almenberg
- Felix Holzmeister
- Suzanne Hoogeveen
- Juergen Huber
- Lawrence Jin
- Magnus Johannesson
- Michael Kirchler
- Alexander Ly
- Benjamin Mandl
- Dylan Manfredi
- Gideon Nave
- Brian A. Nosek
- Thomas Pfeiffer
- Alexandra Sarafoglou
- Rene Schwaiger
- Eric-Jan Wagenmakers
- Viking Waldén
Category: Project
Description: In this study, we test the feasibility of using decision markets to select studies for replication and provide evidence on the replicability of online experiments. Social scientists (n = 162) traded on the outcomes of close replications of 41 systematically selected MTurk social science experiments published in PNAS between 2015 and 2018, knowing that the 12 studies with the lowest and the 12 with the highest final market prices would be selected for replication, along with two randomly selected studies. The replication rate, based on the statistical significance indicator, was 83% for the top-12 group and 33% for the bottom-12 group. Overall, 54% of the studies replicated successfully, with replication effect size estimates averaging 45% of the original effect size estimates. Across alternative replication indicators, the replication rate varied between 54% and 62%. The observed replicability of MTurk experiments is comparable to that of previous systematic replication projects based on laboratory experiments.
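The selection rule described above can be made concrete with a short sketch. The snippet below is a minimal illustration, not the project's actual code: it assumes the final market prices are given as a dict mapping study identifiers to prices, and it assumes the two random studies are drawn from the studies not already selected (the description does not specify the sampling pool). All names are hypothetical.

```python
import random

def select_for_replication(final_prices, k=12, n_random=2, seed=0):
    """Illustrative selection rule: the k studies with the lowest and the k
    with the highest final market prices are replicated, plus n_random
    studies sampled from the remainder (assumed pool; see lead-in)."""
    ranked = sorted(final_prices, key=final_prices.get)  # ascending by price
    bottom, top = ranked[:k], ranked[-k:]
    remainder = ranked[k:-k]
    randoms = random.Random(seed).sample(remainder, n_random)
    return {"bottom": bottom, "top": top, "random": randoms}

# Hypothetical example: 41 studies with final prices spread over [0, 1]
prices = {f"study_{i:02d}": i / 40 for i in range(41)}
selected = select_for_replication(prices)
print(selected["top"])     # 12 highest-priced studies
print(selected["bottom"])  # 12 lowest-priced studies
print(selected["random"])  # 2 randomly drawn from the remaining 17
```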