Many Labs 2: Investigating Variation in Replicability Across Sample and Setting

Contributors:
  1. Reginald B. Adams, Jr.
  2. Katarzyna Cantarero
  3. Matthew Easterbrook
  4. Carolyn Finck
  5. Tanuka Ghoshal
  6. Melissa-Sue John
  7. Winfrida Malingumu
  8. Andrew Tang
  9. Ingrid Voermans


Description: We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples, comprising 15,305 total participants from 36 countries and territories. Using conventional statistical significance (p < .05), 15 replications (54%) provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) provided such evidence, reflecting the extremely high-powered design. Seven replications (25%) had effect sizes larger than the original finding, and 21 (75%) had effect sizes smaller than the original finding. The median comparable Cohen's d effect size was 0.60 for original findings and 0.15 for replications. Sixteen replications (57%) had small effect sizes (d < .20), and 9 (32%) were in the opposite direction from the original finding. Across settings, 11 (39%) showed significant heterogeneity by the Q statistic, and most of those were among the findings eliciting the largest overall effect sizes; only one effect that was near zero in the aggregate showed significant heterogeneity. Only one effect showed a tau > 0.20, indicating moderate heterogeneity; nine others had a tau near or slightly above 0.10, indicating slight heterogeneity. In moderation tests, very little heterogeneity was attributable to task order, administration in the lab versus online, or exploratory WEIRD versus less-WEIRD culture comparisons. Cumulatively, variability in observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
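The heterogeneity statistics mentioned above (Cochran's Q and tau) can be illustrated with a minimal fixed-weight meta-analytic sketch. The per-site effect sizes and variances below are purely hypothetical values for illustration, not data from the Many Labs 2 project; the tau estimate uses the standard DerSimonian-Laird method.

```python
import math

# Hypothetical per-site Cohen's d values and sampling variances for one
# replicated effect (illustrative only; not Many Labs 2 data).
d_values = [0.10, 0.18, 0.05, 0.22, 0.12, 0.08, 0.15, 0.20]
variances = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.03, 0.04]

def heterogeneity(effects, variances):
    """Return the fixed-effect pooled estimate, Cochran's Q, and the
    DerSimonian-Laird tau (between-site standard deviation)."""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    w_sum = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / w_sum
    # Q: weighted sum of squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    k = len(effects)
    # DerSimonian-Laird: tau^2 = max(0, (Q - df) / C)
    c = w_sum - sum(w ** 2 for w in weights) / w_sum
    tau_sq = max(0.0, (q - (k - 1)) / c)
    return pooled, q, math.sqrt(tau_sq)

pooled, q, tau = heterogeneity(d_values, variances)
print(f"pooled d = {pooled:.3f}, Q = {q:.2f}, tau = {tau:.3f}")
```

With these homogeneous illustrative values, Q falls below its degrees of freedom and tau is truncated to zero, i.e., no between-site heterogeneity beyond sampling error; in the project's terms, a tau above 0.20 would indicate moderate heterogeneity.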

License: CC0 1.0 Universal


Wiki

Manuscript and Supplements

Download the manuscript here, or view the supplement packets below:

- Included_Effects lists each study replicated as part of the project.
- Coordinating Proposal is an early proposal/recruitment document.
- SourceInfo is a spreadsheet describing the conditions of data collection per site. The sub-Wiki page here provides more detail about this document.
- WEIRD Nations is a spreadshe...
