This paper has been published in *Journal of Experimental Social Psychology*. You can read it here: https://doi.org/10.1016/j.jesp.2017.04.011. If you are behind a paywall, click here: https://ucdavis.app.box.com/s/7xi7offstksajotj631up547ig9hmm9o.

**To read the annotated syntax used in Wang, Sparks, Gonzales, Hess, & Ledgerwood (2017), click on "Syntax" in the "Components" section to the right.**

**To download the registration form for flexible covariate analysis, click on "Registration Form" in the "Components" section to the right.**

The practice of using covariates in experimental designs has become controversial. Traditionally touted by statisticians as a useful method to soak up noise in a dependent variable and boost power, the practice has recently been recast in a negative light because it can inflate Type I error. But in order to make informed decisions about research practices like this one, researchers need to know more about the actual size of their benefits and costs.

In a series of simulations, we compared the Type I error rates and power of two analysis strategies that researchers might use when confronted with an unanticipated, independent covariate. In the baseline strategy, a researcher analyzes only the effect of the manipulation on the dependent variable; in the flexible-covariate strategy, she analyzes both the effect of the manipulation on the dependent variable and the effect adjusting for the covariate. We show that the flexible-covariate (vs. baseline) strategy inflates Type I error by a small amount, and that it boosts power substantially under certain circumstances. The flexible-covariate strategy tends to be most beneficial when the covariate is strongly correlated with the dependent variable in the population, and when the experimental design would have been only moderately powered (40%-60%) without including the covariate in the analysis.
We offer concrete recommendations for when and how to use independent covariates in experimental designs, and contextualize our findings within the movement toward quantifying tradeoffs in choosing among research strategies and optimizing the choice of strategy within a given research context.
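The comparison described above can be illustrated with a minimal simulation sketch. This is not the authors' annotated syntax (see the "Syntax" component for that); it assumes two groups of n = 30, an independent covariate correlated r = .5 with the dependent variable, a true null effect of the manipulation, and a reading of the flexible-covariate strategy in which a result counts as significant if *either* the unadjusted or the covariate-adjusted analysis reaches p < .05:

```python
# Sketch of baseline vs. flexible-covariate strategies under the null.
# Assumptions (not from the paper's code): n = 30 per group, covariate-DV
# correlation r = .5, "flexible" = significant if either test has p < .05.
import numpy as np
from scipy import stats

def ancova_p(dv, group, cov):
    """Two-tailed p-value for the group effect in OLS: dv ~ group + cov."""
    X = np.column_stack([np.ones_like(dv), group, cov])
    beta, *_ = np.linalg.lstsq(X, dv, rcond=None)
    resid = dv - X @ beta
    df = len(dv) - X.shape[1]
    sigma2 = resid @ resid / df
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return 2 * stats.t.sf(abs(beta[1] / se), df)

rng = np.random.default_rng(0)
n, sims, r = 30, 2000, 0.5
base_hits = flex_hits = 0
for _ in range(sims):
    group = np.repeat([0.0, 1.0], n)
    cov = rng.standard_normal(2 * n)
    # DV correlates with the covariate but has NO true group effect (null)
    dv = r * cov + np.sqrt(1 - r**2) * rng.standard_normal(2 * n)
    p_base = stats.ttest_ind(dv[:n], dv[n:]).pvalue   # baseline strategy
    p_cov = ancova_p(dv, group, cov)                  # covariate-adjusted
    base_hits += p_base < 0.05
    flex_hits += (p_base < 0.05) or (p_cov < 0.05)    # flexible strategy

print(f"baseline Type I error rate: {base_hits / sims:.3f}")
print(f"flexible Type I error rate: {flex_hits / sims:.3f}")
```

Because the adjusted and unadjusted p-values are highly correlated when the covariate is independent of the manipulation, the flexible strategy's Type I error rate sits only modestly above the nominal .05 level, consistent with the paper's "small amount" of inflation.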