Publication bias is a major threat to the validity of a meta-analysis and results in overestimated effect sizes. P-uniform is a meta-analysis method that corrects estimates for publication bias, but it overestimates the average effect size in the presence of heterogeneity in true effect sizes (i.e., between-study variance). We propose an extension and improvement of the p-uniform method called p-uniform*. P-uniform* improves upon p-uniform in three important ways: it (i) entails a more efficient estimator, (ii) eliminates the overestimation of effect size in the presence of between-study variance in true effect sizes, and (iii) enables estimating and testing for the presence of between-study variance in true effect sizes. We compared the statistical properties of p-uniform* with those of the selection model approach of Hedges (1992), as implemented in the R package “weightr”, and with the random-effects model in both an analytical and a Monte Carlo simulation study. Results revealed that the statistical properties of p-uniform* and the selection model approach were generally comparable, and that both outperformed the random-effects model when publication bias was present. We demonstrate that both methods estimate the average true effect size rather well with two or more primary studies in a meta-analysis, and the between-study variance with ten or more. However, neither method performs well if the meta-analysis includes only statistically significant studies. We offer recommendations for correcting meta-analyses for publication bias in practice, and we provide an R package and an easy-to-use web application for applying p-uniform*.
CC-By Attribution 4.0 International