In this meta-study, we analyzed 2,439 effect sizes from 131 meta-analyses in intelligence research to estimate the average effect size, median power, and evidence for bias in this field. We found that the average effect size in intelligence research was a Pearson’s correlation of .26, and the median sample size was 60. We calculated the power of each primary study by using the corresponding meta-analytic effect as a proxy for the true effect. The median power across all studies was 52.7%, with only 31.7% of the studies reaching a power of 80% or higher. We documented differences in average effect size and median power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). Across all meta-analyses, we found evidence for small-study effects, highlighting potential publication bias. We found no evidence that small-study effects are stronger for studies from the US than for non-US studies (such a US effect would reflect the stronger competition for publication in the United States). We also found no convincing evidence for the decline effect, early-extremes effect, or citation bias across meta-analyses. Even though power in intelligence research seems to be higher than in other fields of psychology, intelligence research does not seem immune to the replicability problems documented in psychology at large.
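The power calculation described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a two-sided test of a Pearson correlation against zero, approximated via the Fisher z transformation, with the meta-analytic effect standing in for the true effect. The function name `correlation_power` is a hypothetical helper introduced here for illustration.

```python
import math
from statistics import NormalDist

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 for a
    Pearson correlation, assuming true effect r and sample size n,
    using the Fisher z approximation (illustrative, not the paper's code)."""
    if n <= 3:
        raise ValueError("Fisher z approximation requires n > 3")
    nd = NormalDist()
    z = math.atanh(r)            # Fisher z of the assumed true effect
    se = 1 / math.sqrt(n - 3)    # standard error of Fisher z
    crit = nd.inv_cdf(1 - alpha / 2)
    # Probability that the observed z statistic exceeds the critical value
    return nd.cdf(z / se - crit) + nd.cdf(-z / se - crit)

# The average effect (r = .26) at the median sample size (n = 60)
# yields power close to the reported median of 52.7%.
print(round(correlation_power(0.26, 60), 2))
```

With the average effect and median sample size from the abstract, this approximation reproduces power in the neighborhood of 50%, consistent with the reported median of 52.7% and with the observation that most studies fall short of the conventional 80% threshold.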