Psychological research is rife with inappropriate conclusions of "no effect" between a predictor and an outcome in regression models following statistically nonsignificant results. This approach is methodologically flawed, however, because failing to reject the null hypothesis using traditional, difference-based tests does not mean the null is true; relying on it leads to high rates of incorrect conclusions in the psychological literature. This paper introduces a novel, methodologically sound alternative: we demonstrate how to apply equivalence testing to evaluate whether a predictor (in standardized or unstandardized units) has a negligible association with the outcome in multiple linear regression. We conducted a simulation study to evaluate the performance of two equivalence-based methods and compared them to the traditional test. The use of the proposed negligible effect testing methods is illustrated with functions from the `negligible` R package and examples from the literature, and recommendations for reporting and interpreting results are discussed.
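
The sketch below illustrates the general idea behind equivalence testing of a regression slope using a two one-sided tests (TOST) approach in base R. The simulated data, variable names, and equivalence bounds (±0.10 in standardized units) are assumptions for illustration only; the `negligible` package provides dedicated functions for this purpose, whose interface and procedures may differ from this minimal sketch.

```r
## Minimal TOST-style equivalence test for one standardized slope (illustrative).
set.seed(123)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 0.02 * x1 + 0.40 * x2 + rnorm(n)   # x1 has a (nearly) negligible effect

# Standardize all variables so the slope for x1 is in standardized (beta) units
dat <- data.frame(scale(cbind(y, x1, x2)))
fit <- lm(y ~ x1 + x2, data = dat)

b  <- coef(summary(fit))["x1", "Estimate"]
se <- coef(summary(fit))["x1", "Std. Error"]
df <- fit$df.residual

delta <- 0.10   # smallest standardized slope deemed meaningful (assumed bound)

# Two one-sided tests: reject both H0: beta <= -delta and H0: beta >= +delta
p_lower <- pt((b + delta) / se, df, lower.tail = FALSE)
p_upper <- pt((b - delta) / se, df, lower.tail = TRUE)
p_tost  <- max(p_lower, p_upper)

cat("beta-hat =", round(b, 3),
    "| TOST p =", round(p_tost, 4),
    "| conclude negligible association:", p_tost < .05, "\n")
```

If both one-sided tests are rejected (equivalently, the larger of the two p values is below alpha), the slope can be declared negligible with respect to the chosen bounds; choosing those bounds is a substantive decision, not a statistical one.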