<h2>Many Labs 2: Investigating Variation in Replicability Across Sample and Setting</h2> <p><strong>Abstract:</strong> Abstract text here. <br> <strong>Citation:</strong> Citation here.</p> <h3>Contents</h3> <p><strong>Development</strong></p> <p><a href="" rel="nofollow">Call for participation</a>: Many Labs 2 was an open project inviting researchers to participate in study design and data collection. This file is the original recruiting call, which was advertised via social media and informally through social networks.</p> <p><a href="" rel="nofollow">Preregistered design and analysis plan</a>: Many Labs 2 was developed in informal consultation with the original authors to obtain materials and feedback (some became co-authors of this project), and was then submitted to formal peer review in advance of data collection via the Registered Reports publishing model at Perspectives on Psychological Science (PPS). PPS conducted formal peer review of each of the 28 studies and of the overall design. The <a href="" rel="nofollow">preregistered plan</a> is the final design following that review. A <a href="" rel="nofollow">summary table</a> of the 28 selected effects is also available. After data collection, Editor Dan Simons moved from PPS to launch a new APS journal, Advances in Methods and Practices in Psychological Science (AMPPS). AMPPS took over the Registered Reports model from PPS, so the second-stage review of the final report went to AMPPS.</p> <p><strong>Project Materials</strong></p> <p><a href="" rel="nofollow">Qualtrics codebooks and study files</a>: Scripts for the full study for both study slates.</p> <p><a href="" rel="nofollow">Codebook for Primary Analyses</a>: An overview of the variables used in the primary analysis for each study.
Also contains scoring keys for Gati, Norenzayan, and the SVO slider.</p> <p><a href="" rel="nofollow">Individual study materials</a>: Browse materials study by study.</p> <p><strong>Analysis Code</strong></p> <p>Most information about the analysis is best found on the <a href="" rel="nofollow">GitHub Page</a>. For helpful direct links, see the <a href="" rel="nofollow">Analysis component</a>.</p> <p><strong>Data</strong></p> <p>See the <a href="" rel="nofollow">data component</a> for details. Deidentified, processed datasets are available here. This is the comprehensive final dataset produced by running the data processing and cleaning script, and it is the most appropriate dataset for reproducing the reported results.</p> <ul> <li>Slate 1, deidentified: <a href="" rel="nofollow">.csv</a>, <a href="" rel="nofollow">.rda</a></li> <li>Slate 2, deidentified: <a href="" rel="nofollow">.csv</a>, <a href="" rel="nofollow">.rda</a></li> </ul> <p><a href="" rel="nofollow">Training and holdout datasets</a>: To improve the rigor of follow-up analyses, we split the dataset into training (1/3) and holdout (2/3) samples using sampling without replacement, stratified by site. If you intend to conduct additional analyses on Many Labs 2 data without first preregistering your plan, we recommend that you download the training data to conduct exploratory analyses, preregister the final code from those analyses, and apply that code to the holdout sample. This will not entirely eliminate the potential biases of having observed outcomes in the data, but it will maximize, to the extent possible, the interpretability of statistical inferences drawn from the holdout sample.</p> <p>Codebooks and study materials for working with the data are available in <a href="" rel="nofollow">this component</a>.</p> <p>Raw data: The raw data are available for reanalysis but are not posted publicly because they contain potentially identifying participant information. Please contact Rick to request access to the raw data.
Please include evidence of institutional (IRB) approval to work with the non-anonymized data and affirm that you will protect participant confidentiality in accordance with your institution's ethical guidelines.</p> <p><strong>Outputs</strong></p> <p>Note: Links will be added when the paper is made publicly accessible.</p> <p>Manuscript: Preprint version of the manuscript and final published version.</p> <p>Figures: Direct access to Figure 1, Figure 2, and Figure 3 from the paper.</p> <p>Tables: Direct access to Table 1, Table 2, Table 3, and Table 4 from the paper.</p> <p><strong>Supplements</strong></p> <p>Many supplemental documents may be found in the <a href="" rel="nofollow">Supplementary Materials</a> component.</p> <ul> <li><a href="" rel="nofollow">SourceInfo</a>: a spreadsheet describing the conditions of data collection at each site. The sub-wiki page <a href="" rel="nofollow">here</a> provides more detail about this document.</li> <li><a href="" rel="nofollow">WEIRD Nations</a>: a spreadsheet showing the calculations that determine which samples are considered WEIRD. The sub-wiki page <a href="" rel="nofollow">here</a> provides more detail about this document.</li> <li><a href="" rel="nofollow">Post-Registration Changes</a>: details all procedural changes made after the proposal was preregistered.</li> <li><a href="" rel="nofollow">PoPS Supplementary Notes</a>: details all analytic changes from the preregistered analysis plan.</li> <li><a href="" rel="nofollow">Site Demographics</a>: reports demographic information for each data collection site.</li> <li><a href="" rel="nofollow">Mock Session Videos</a>: video recordings of each site's data collection procedure.</li> <li><a href="" rel="nofollow">Max I2 Calculation</a>: a demonstration of the observed "I² paradox," in which samples showing seemingly little heterogeneity nonetheless yield a large I².</li> <li><a href="" rel="nofollow">ML2 Code Review</a>: results from an internal review of the analysis scripts.</li> </ul> <h3>About the project</h3> <p>Many Labs 2 was an expanded follow-up to the <a href="" rel="nofollow">Many Labs Project</a>.</p> <p>The Many Labs project replicated 13 classic and contemporary psychological effects across 36 samples and settings (N = 6344). The results indicated that: (a) variations in sample and setting had little impact on observed effect magnitudes; (b) when effect magnitudes did vary across samples, the variation occurred in studies with large effects, not studies with small effects; (c) overall, replicability depended much more on the effect under study than on the sample or setting in which it was studied; (d) this held across lab versus web administration and across nations; and (e) two effects from a subdomain with substantial debate about reproducibility, flag priming and currency priming, showed no evidence of an effect in individual samples or in the aggregate.</p> <p>In Many Labs 2, we employed an expanded version of the Many Labs paradigm to investigate a substantial number of new effects (28) across more numerous and diverse labs (150+). In particular, the study included: (a) effects expected to vary in detectable effect size; (b) some effects thought to vary across cultural contexts and others thought not to; (c) effects plausibly contingent on other features of the sample or setting; and (d) effects known to be replicable and others that are untested, including additional examples from "social priming" and other areas.</p>
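<p>The training/holdout split described above (1/3 training, 2/3 holdout, sampled without replacement and stratified by site) can be sketched as follows. This is an illustrative sketch only, not the project's actual code (which is on the linked GitHub page); the row dictionaries, the <code>site</code> key, and the seed are assumptions for the example.</p>

```python
import random
from collections import defaultdict

def split_training_holdout(rows, site_key="site", train_frac=1/3, seed=1):
    """Within each site, draw train_frac of the rows without replacement
    for the training set; the remaining rows form the holdout set."""
    rng = random.Random(seed)
    by_site = defaultdict(list)
    for row in rows:
        by_site[row[site_key]].append(row)
    train, holdout = [], []
    for site_rows in by_site.values():
        n_train = round(len(site_rows) * train_frac)
        # sample() picks indices without replacement within this site
        picked = set(rng.sample(range(len(site_rows)), n_train))
        for i, row in enumerate(site_rows):
            (train if i in picked else holdout).append(row)
    return train, holdout
```

<p>Following the recommended workflow, exploratory analyses would run only on <code>train</code>; the resulting analysis code would then be preregistered and applied once to <code>holdout</code>.</p>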
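<p>The "I² paradox" mentioned in the supplements follows from the standard Higgins–Thompson definition: I² is the share of between-study variance in total variance, I² = τ² / (τ² + σ²), where σ² is the typical within-study sampling variance. Because σ² shrinks as site samples grow, a fixed and absolutely tiny between-site spread can still dominate the total. A minimal numeric illustration (the τ value and per-site sample sizes below are made up for the example, not taken from the ML2 data):</p>

```python
def i_squared(tau2, sigma2):
    """I^2: proportion of total variance attributable to between-study variance."""
    return tau2 / (tau2 + sigma2)

tau = 0.02        # tiny between-site SD in Cohen's d units (hypothetical)
tau2 = tau ** 2
for n in (100, 1_000, 10_000, 100_000):
    sigma2 = 4 / n  # approximate sampling variance of d near d = 0, equal groups, total N = n
    print(n, round(i_squared(tau2, sigma2), 3))
```

<p>With the same negligible heterogeneity (τ = 0.02), I² climbs from near zero at small per-site samples toward 1 at very large ones, which is the paradox the supplement demonstrates.</p>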