<h4> Contents </h4>

1. [Pre-Registered Paper][1] - definitions, motivation, research design, and potential conclusions
2. [Appendix A][2] - measurement, power, and alpha calculations
3. [Appendix B][3] - instructions and materials given to replicators
4. [Replication and pilot study][4] conducted by the PIs

<h4> Abstract </h4>

The results among a population of researchers conducting the same research using samples drawn from the same population are bound to vary. We call this _researcher variability_ and identify it as a potential threat to the reliability of research. We distinguish two forms: routine and non-routine. Non-routine variability arises from deliberate choices researchers make; routine variability arises from non-deliberate actions taken under constraints. Non-routine variability can be controlled through observation and curation of researchers' choices; routine researcher variability cannot, at least not entirely, which makes it worrisome. In this manuscript, we outline an experiment to provide the first known estimates of this phenomenon using replication in macro-comparative social science research. Replication comes with constraints necessary to observe this potentially elusive phenomenon, and macro-comparative secondary survey data analysis is an ideal testing ground because it imposes strong restrictions on researcher choice, allowing us to focus on non-choice-based error. The experiment will exploit a crowdsourced replication effort involving more than 100 research teams, the _OSSC19 Crowdsourced Replication Initiative_. We will randomly assign these teams to two conditions: one replicating a published study and one replicating an anonymous version of the published study with less information. We hypothesize that routine researcher variability exists and, if so, aim to measure it as variance among replicator results. Moreover, we hypothesize that routine researcher variability depends on the amount of information provided to a researcher: the condition with less information requires more steps in the replication process and therefore more choices, i.e., more opportunities for routine variability to affect the results. We hope to offer some solutions to the problem, such as estimating how many replicators are necessary to overcome reliability concerns, and to discuss the problem in a larger research context.

[1]: https://osf.io/weu2v/
[2]: https://osf.io/ew4hy/
[3]: https://osf.io/xeksb/
[4]: https://osf.io/hkpdt/files/
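To make "variance among replicator results" concrete, the sketch below estimates between-team variance of a replicated effect with a standard random-effects (DerSimonian-Laird) heterogeneity estimator. This is only one plausible way to quantify routine researcher variability; the estimator choice and all numbers are illustrative assumptions, not the pre-registered analysis plan.

```python
import numpy as np

# Hypothetical per-team replication estimates of the same effect
# (e.g., regression coefficients) and their squared standard errors.
# All values are made up for illustration.
estimates = np.array([0.42, 0.35, 0.51, 0.29, 0.47, 0.38])
variances = np.array([0.010, 0.012, 0.009, 0.015, 0.011, 0.013])

w = 1.0 / variances                        # inverse-variance weights
mu_fe = np.sum(w * estimates) / np.sum(w)  # fixed-effect pooled estimate
q = np.sum(w * (estimates - mu_fe) ** 2)   # Cochran's Q heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
k = len(estimates)                         # number of replicating teams

# DerSimonian-Laird estimate of tau^2, the between-team variance:
# under the paper's framing, a proxy for routine researcher variability.
tau2 = max(0.0, (q - (k - 1)) / c)
print(f"pooled estimate: {mu_fe:.3f}, between-team variance tau^2: {tau2:.4f}")
```

Under this framing, a tau^2 near zero would indicate that teams converge on the same result once sampling error is accounted for, while a large tau^2 would signal variability attributable to the researchers themselves.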