Intervention studies based on randomized controlled trials are a powerful tool for gaining insight into educational and developmental mechanisms. A question researchers face is which standards or framework to use when judging the size of an intervention's effects in order to determine its (cost-)effectiveness. Particularly in longitudinal interventions and under conditions that allow high external validity, the amount of variance explained by an intervention can seem small. Variance within single intervention groups is often large due to participants' individual characteristics and environmental factors. How should the results of often costly and laborious studies be interpreted? Currently, many researchers rely on the benchmarks proposed by Cohen (1988), according to which Cohen's d = 0.2 indicates a small, 0.5 a medium, and 0.8 a large effect. We argue that relying on these or newer conventions for effect sizes is rarely helpful because they do not consider the specific content and scientific rationales of intervention studies. Relying on such rules of thumb limits the information content of effect sizes, just as the arbitrary threshold of p < .05 limits hypothesis testing. Based on an overview of Cohen's rationales for proposing his benchmarks and of newer alternatives, we provide examples from educational and developmental research showing that even apparently small effects can matter for theory and practice, while apparently large effects do not always indicate that an intervention is useful. We argue that sound intervention research requires careful consideration of the utility of an expected effect from the perspectives of different theories, research aims, and stakeholders such as the population of interest, policy makers, and practitioners. Moreover, researchers should decide already in the planning phase what outcome would establish a null effect, so that equivalence testing becomes possible. Non-significant group differences do not prove the absence of an intervention effect, yet such erroneous conclusions appear even in high-ranking journals. We will present a systematic overview of 60 published studies in educational science with appropriate interpretations of effects as well as ones that leave room for improvement. Based on these examples, we will discuss criteria to consider when planning an intervention study and how statistical modeling can contribute to sound decisions.
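
To make the two statistical ideas in the abstract concrete, here is a minimal sketch in Python of (a) Cohen's d as a pooled-SD standardized mean difference and (b) a two-one-sided-tests (TOST) equivalence test. The sketch is illustrative and not part of the study: the helper names (cohens_d, tost_equivalence), the simulated data, and the equivalence bound delta are all assumptions; as argued above, the bound must be justified substantively in advance rather than taken from a rule of thumb.

```python
# Minimal sketch: Cohen's d and a TOST equivalence test for two independent groups.
# Names, data, and the bound `delta` are illustrative assumptions, not from the study.
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def tost_equivalence(x, y, delta):
    """Two one-sided t-tests (TOST): is |mean(x) - mean(y)| < delta?

    `delta` is the smallest raw-scale difference still considered
    meaningful; it must be fixed before seeing the data. Returns the
    larger of the two one-sided p-values; equivalence is claimed if it
    falls below the chosen alpha.
    """
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(pooled_var * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    return max(p_lower, p_upper)

# Simulated example: a true raw difference of 3 points on an SD-15 scale (d = 0.2).
rng = np.random.default_rng(2024)
control = rng.normal(100, 15, size=150)
treated = rng.normal(103, 15, size=150)
print(f"Cohen's d: {cohens_d(treated, control):.2f}")
print(f"TOST p (delta = 7.5): {tost_equivalence(treated, control, delta=7.5):.3f}")
```

Reporting the larger of the two one-sided p-values means equivalence is claimed only when both one-sided nulls are rejected. With a substantively justified delta, this allows a non-significant group difference to be interpreted positively as evidence for the absence of a meaningful effect, rather than as mere absence of evidence.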