We analyzed 2,439 effect sizes from 131 meta-analyses in intelligence research to estimate the average effect size, median power, and evidence for bias in this field. The typical effect size was a Pearson’s correlation of .26, and the median sample size was 60. We calculated the power of each primary study using the corresponding meta-analytic effect as a proxy for the true effect. The median power across all studies was 48.8%, with only 29.8% of studies reaching a power of 80% or higher. We documented differences in average effect size and median power among subfields of intelligence research (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). Across meta-analyses, we found evidence for small-study effects, suggesting potential publication bias. The evidence that the small-study effect was stronger for US studies than for non-US studies (a “US effect”) was weak at best. We found no clear evidence for a decline effect, an early-extremes effect, or citation bias across meta-analyses. Although power in intelligence research appears higher than in other fields of psychology, the field does not seem immune to the replicability problems documented elsewhere in psychology.
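To illustrate the power calculation described above, the sketch below computes the approximate power of a two-sided test of a Pearson correlation via the Fisher z transformation, treating the meta-analytic effect as the true effect. This is a minimal stdlib-only Python sketch, not the authors' actual analysis code; the function name and the use of the Fisher z approximation (SE = 1/√(n−3)) are assumptions for illustration.

```python
from math import atanh, sqrt
from statistics import NormalDist

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 at level alpha,
    assuming r is the true correlation, via the Fisher z transformation.

    Illustrative sketch only; not the paper's analysis code."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # two-sided critical value
    noncentrality = atanh(r) * sqrt(n - 3)      # Fisher z times 1/SE
    # Probability that the test statistic falls in either rejection region.
    return (1 - nd.cdf(z_crit - noncentrality)
            + nd.cdf(-z_crit - noncentrality))

# The "typical" study from the abstract: r = .26, n = 60
print(round(correlation_power(0.26, 60), 2))  # ≈ 0.52
```

For the typical effect size and sample size reported here, this approximation yields a power of roughly 50%, consistent with the reported median power of 48.8% (the paper's exact method may differ).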