Many Labs 3: Evaluating participant pool quality across the academic semester via replication

Observe, hypothesize, test, repeat: Luttrell, Petty, and Xu (2017) demonstrate good science




Category: Project

Description: Many Labs 3 (Ebersole et al., 2016) failed to replicate a classic finding from the Elaboration Likelihood Model of persuasion (Cacioppo, Petty, & Morris, 1983; Study 1). Petty and Cacioppo (2016) noted possible limitations of the Many Labs 3 replication based on the cumulative literature. Luttrell, Petty, and Xu (2017) subjected some of those possible limitations to empirical test. They observed that a revised protocol obtained evidence consistent with the original finding, whereas the Many Labs 3 protocol did not. This observe-hypothesize-test sequence is a model for scientific inquiry and critique. To test whether these results advance replicability and knowledge transfer, we conducted direct replications of Luttrell et al. in nine locations (total N = 1,219). We successfully replicated the interaction of need for cognition and argument quality on persuasion using Luttrell et al.'s optimal design (albeit with a much smaller effect size; p < .001; f² = .025, 95% CI [.006, .056]), but we failed to replicate the interaction indicating that Luttrell et al.'s optimal protocol performed better than the Many Labs 3 protocol (p = .135, pseudo R² = .002). Nevertheless, pragmatically, we favor the Luttrell et al. protocol with large samples for future research using this paradigm.

License: CC-By Attribution 4.0 International


Study Description

Luttrell, Petty, and Xu (2017) reran a replication from Many Labs 3 (Ebersole et al., 2016) that they deemed suboptimal for investigating the effect of interest. They ran both a version of the study similar to the one used in Many Labs 3 and an improved version. They observed the original effect in the optimal version, but not in the Many Labs 3 version. In this study,...





