The purpose of this survey is to explore how our 16 studies compare to similar studies in the literature that have a known, pre-registered reproducibility rate established by independent teams. We examine whether individuals, given a description of each study, can predict not only the direction of its results but also whether the study would replicate. Research has shown that laypeople can predict whether previous studies replicate with 59% accuracy (Hoogeveen et al., 2020). We therefore use this population (recruited from Amazon Mechanical Turk) as our primary predicting group. [As a supplementary comparison, we also ask a group of social researchers from the 'social psychology' branch of Sociology. This provides within-study comparisons of the ability to predict results and replicability against the lay population.]

Descriptions are first taken from the published work on prediction markets and predictions by laypeople (Dreber et al., 2015; Hoogeveen et al., 2020). The original authors of the descriptions used in the prediction challenges (A. Dreber, [E.J. Wagenmakers]) adapted each study description so the outcome of the study was not obvious (i.e., hypothesis neutral). In addition, these authors adapted the 16 studies we ran using the same procedures they used to write the other study descriptions.

All of the studies run as part of this project met the following criteria: they were social-behavioral science studies in the realm of social psychology, judgment and decision-making, political psychology, or marketing; each study used a two-group between-subjects design; and each study was replicated online. We coded the studies used in the previous prediction challenges for whether they covered similar topics (mostly not cognitive psychology), whether the key outcome came from a two-group between-subjects design, and whether the replication was run online. One coder first coded the studies, and a second coder then independently coded them. Disagreements were discussed until 100% agreement was reached. The focus of our study is the 29 studies that met those three criteria (content, method, and format). We use all 29 of these studies rather than any subset, to make the comparison as fair as possible.

**Acknowledgments**

We would like to thank Mallory Kidwell and Anna Dreber and [] for their help with this survey.

**References**

Dreber, A., Pfeiffer, T., Almenberg, J., Isaksson, S., Wilson, B., Chen, Y., ... & Johannesson, M. (2015). Using prediction markets to estimate the reproducibility of scientific research. *Proceedings of the National Academy of Sciences*, 112(50), 15343-15347.

Hoogeveen, S., Sarafoglou, A., & Wagenmakers, E. J. (2020). Laypeople can predict which social-science studies will be replicated successfully. *Advances in Methods and Practices in Psychological Science*, 3(3), 267-285.