
Category: Project

Description: There is consensus that basing conclusions on confidence intervals for effect size estimates is generally superior to relying on null hypothesis significance testing. However, confidence intervals in psychology are typically very wide. One reason for this is a lack of easily applicable methods for planning studies to achieve sufficiently narrow confidence intervals. This paper presents tables and freely accessible R functions to facilitate planning studies for the desired accuracy in parameter estimation (i.e., Cohen’s d as an effect size). In addition, the importance of such accuracy is demonstrated using data from the Reproducibility Project: Psychology. It is shown that the sampling distribution of Cohen’s d is very wide unless sample sizes are considerably larger than what is common in psychology studies. This means that effect size estimates can vary substantially from sample to sample, even with perfect replications. The Reproducibility Project: Psychology replications’ confidence intervals for Cohen’s d have widths of around 1 standard deviation (the 95% confidence interval for the widths runs from 1 to 1.34, with a median width of 0.96). This means that replications of these replications would likely find effect size point estimates that differ substantially from the estimates of the original replications. The consequence is that researchers in psychology, but also the funders of research in psychology, will have to get used to conducting considerably larger studies if they are to build a strong evidence base.
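The accuracy-in-parameter-estimation idea described above can be sketched in a few lines. This is an illustrative Python sketch, not the R functions the project provides: the function names `ci_width_d` and `n_for_width` are made up for this example, and it uses the common large-sample normal approximation to the standard error of Cohen's d for two equal, independent groups, which is not necessarily the authors' exact method.

```python
from math import sqrt

def ci_width_d(d, n_per_group, z=1.959964):
    """Approximate 95% CI width for Cohen's d with two equal,
    independent groups, using the large-sample normal
    approximation SE(d) = sqrt((n1+n2)/(n1*n2) + d^2/(2*(n1+n2)))."""
    n1 = n2 = n_per_group
    se = sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return 2 * z * se

def n_for_width(d, target_width):
    """Smallest per-group sample size whose approximate 95% CI
    for Cohen's d is no wider than target_width."""
    n = 2
    while ci_width_d(d, n) > target_width:
        n += 1
    return n
```

Under this approximation, with d = 0.5 and 30 participants per group, the 95% CI is roughly one standard deviation wide, consistent with the widths reported above; shrinking the width to 0.4 requires about 199 participants per group.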

Wiki


Files


Citation

Components

Knowing how effective an intervention, treatment, or manipulation is and increasing replication rates: accuracy in parameter estimation as a partial solution to the replication crisis

Although basing conclusions on confidence intervals for effect size estimates is preferred over relying on null hypothesis significance testing alone,...

