A manuscript about this project can be found [here][1]. A variable codebook for all studies can be found [here][2].

**Overview**

This line of research investigates how individuals evaluate scientists based on events during the scientific process, with a focus on the consequences that replication results may have for the reputations of the original researchers. This page describes the studies conducted in this line of work to date. The commentary linked above provides a summary of the results of the three completed studies.

**Study 1**

As an initial investigation, around 1500 participants from the SoapBox sample read various scenarios of scientists discovering effects and trying to replicate those effects. Next, participants made judgments between two scientists, AA who produced boring but reproducible effects and BB who produced exciting but not reproducible effects, on a number of dimensions.

- [Study 1 Materials][3]
- [Study 1 Data][4]
- [Study 1 Analysis Script][5]
- [Study 1 Data Summary][6]

**Study 2**

Next, we collected a sample of undergraduates from the University of Virginia. Our goal was to replicate the findings from Study 1 using a different sample. There were also two key changes from Study 1. First, we counterbalanced the order of the two parts of the study (judgments of X and Y and judgments of AA and BB). Given the overwhelming preference for researcher AA in Study 1, we were concerned that participants were influenced by the first part of the study, which placed an emphasis on replication. Second, we changed the descriptions of AA and BB. We were concerned that BB's description of producing "not reproducible" effects was too negative. We changed the descriptions of the two researchers to read: "Imagine two scientists AA and BB who demonstrate different characteristics in the results that they produce from their research. In this context, results that are certain are ones that have been shown to be reliable and reproducible. Results that are uncertain are ones where the reliability and reproducibility are unknown. Reproducible means that the results recur when the study is conducted again."

- [Study 2 Materials][7]
- [Study 2 Data][8]
- [Study 2 Analysis Script][9]
- [Comparisons with Study 1][10]

**Study 3**

To confirm the results of Study 2, we replicated the findings using the updated procedure with the participant source from Study 1 (the SoapBox sample).

- [Study 3 Materials][11]
- [Study 3 Data (part 1)][12]
- [Study 3 Data (part 2)][13]
- [Study 3 Analysis Script][14]

**Study 4**

We collected a sample of psychology researchers using the same measures. Researchers reported opinions similar to those of the prior samples, with few differences.

- [Study 4 Materials][15]
- [Study 4 Data][16]
- [Study 4 Analysis Script][17]

Materials for reproducing the figure in the commentary can be found [here][18].

If you have any questions about this project, do not hesitate to contact me at cebersole@virginia.edu.

### MetaSciLog Information:

**[research goals]** This project investigated how individuals evaluate scientists based on events during the scientific process. In particular, we focused on the consequences that replication results may have for the reputations of the original researchers. We surveyed the general public, undergraduates, and psychology researchers, asking them to evaluate a hypothetical scientist and research finding after reading about a number of possible study outcomes. Overall, assessments seem to be much more related to scientific processes than to results.
**[project status]** completed

**[email]** cebersole@virginia.edu

**[type of data]** We collected online survey data. All measures were within subjects.

**[population]** We recruited from the general US population, undergraduates at the University of Virginia, and psychology researchers (through listservs and social media).

**[link to data]** https://osf.io/dfwur/wiki/home/

**[link to codebook(s)]** https://docs.google.com/spreadsheets/d/1JeLOfRFM6vW7WDkTKSbdH1-fYTAsnDI7boQ64_kbufY/edit?usp=sharing

**[link to analysis scripts]** https://osf.io/dfwur/wiki/home/

**[link to preprint]** https://osf.io/3rnz6/

**[link to published paper]** https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002460

**[looking for collaborators]** no

**[last updated]** 2019-08-14

[1]: https://osf.io/3rnz6/
[2]: https://docs.google.com/spreadsheets/d/1JeLOfRFM6vW7WDkTKSbdH1-fYTAsnDI7boQ64_kbufY/edit?usp=sharing
[3]: https://osf.io/zmdxt/
[4]: https://osf.io/bvu5s/
[5]: https://osf.io/uf9ax/
[6]: https://osf.io/yzpb7/
[7]: https://osf.io/9gs3z/
[8]: https://osf.io/egvpx/
[9]: https://osf.io/vz5hu/
[10]: https://osf.io/fepwr/
[11]: https://osf.io/cypgd/
[12]: https://osf.io/2dusy/
[13]: https://osf.io/hkv5x/
[14]: https://osf.io/brj48/
[15]: https://osf.io/4swzf/
[16]: https://osf.io/ftjvh/
[17]: https://osf.io/sjw83/
[18]: https://osf.io/6dh3v/