**Module Description**

Because publication bias leads to misleading results in meta-analysis, a number of statistical methods have been proposed to adjust for this bias. This module reviews contemporary methods and their strengths and weaknesses, and examines the quality of adjustment that can be expected.

**Learning Objectives**

* Know the statistical methods for bias adjustment: p-curve / p-uniform, selection models, PET-PEESE meta-regression, and trim-and-fill (two of these are illustrated briefly below).
* Understand how each method works, and thus what assumptions it relies on and when it will perform well or poorly.
* Calibrate your expectations for what publication bias adjustments can do.

**Readings**

* [Egger, Smith, Schneider, and Minder. Bias in meta-analysis detected by a simple, graphical test.][1]
* [Simonsohn, Nelson, and Simmons. P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results.][2]
* [Vevea & Hedges. A general linear model for estimating effect size in the presence of publication bias.][3]
* [Guan & Vandekerckhove. A Bayesian approach to mitigation of publication bias.][6]

Optional module: Current discussions in adjustment quality.

* [Carter, Schönbrodt, Hilgard, and Gervais. Correcting for bias in psychology: A comparison of meta-analytic methods.][7]
* [Simonsohn, Simmons, and Nelson. Why p-curve excludes p > .05.][8]

Optional module: Popular methods that don't work.

* [Fail-safe N][4]
* [Trim-and-fill][5]

**Demonstrations**

A bulleted list of (hyperlinked) activities goes here.

**Assignments**

Load [this data][9] into your meta-analysis package of choice. Filter the data to only experiments on aggressive behavior (Outcome == "AggBeh" & Setting == "Exp"). Make a funnel plot using the effect size (Fisher.s.Z) and standard error (Std.Err). Then filter for only the "best" studies (Best. == "y") and make the funnel plot again. Does filtering for the "best" studies seem to increase or decrease publication bias? One way to do this in R is sketched below.

[1]: http://www.bmj.com/content/315/7109/629.short
[2]: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2377290
[3]: https://link.springer.com/article/10.1007/BF02294384
[4]: http://crystalprisonzone.blogspot.com/2016/07/the-failure-of-fail-safe-n.html
[5]: https://crystalprisonzone.blogspot.com/2017/05/trim-and-fill-just-doesnt-work.html
[6]: https://link.springer.com/article/10.3758/s13423-015-0868-6
[7]: https://osf.io/preprints/psyarxiv/9h3nu
[8]: http://datacolada.org/61
[9]: https://github.com/Joe-Hilgard/Anderson-meta/blob/master/cleaned_data.txt
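A minimal sketch of the assignment workflow in R with the metafor package. The local file name, the tab delimiter, and the exact column names (Fisher.s.Z, Std.Err, Outcome, Setting, Best.) are assumptions taken from the assignment text above; adapt as needed if you use a different package.

```r
# Hedged sketch of the funnel-plot assignment. Assumes the linked
# cleaned_data.txt has been downloaded to the working directory and is
# tab-delimited with the column names quoted in the assignment.
library(metafor)

dat <- read.delim("cleaned_data.txt")

# Keep only experiments measuring aggressive behavior
agg <- subset(dat, Outcome == "AggBeh" & Setting == "Exp")

# Random-effects model and funnel plot for all experiments
res_all <- rma(yi = Fisher.s.Z, sei = Std.Err, data = agg)
funnel(res_all)

# Repeat for only the "best-practices" studies
best <- subset(agg, Best. == "y")
res_best <- rma(yi = Fisher.s.Z, sei = Std.Err, data = best)
funnel(res_best)
```

Compare the two plots: the question is whether the asymmetry (a deficit of small, nonsignificant effects in the lower corner) shrinks or grows once the sample is restricted to the "best" studies.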
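To make two of the methods listed under Learning Objectives concrete, here is a purely illustrative sketch of trim-and-fill and a PET-PEESE-style meta-regression applied to the same filtered data frame (agg) from the sketch above. This is not the module's prescribed analysis, and the fixed-effect weighting used for PET-PEESE is just one common choice.

```r
# Illustrative only: two adjustment methods from the Learning Objectives,
# continuing from the `agg` data frame created above.

# Trim-and-fill: imputes putatively missing studies until the funnel plot
# is roughly symmetric, then re-estimates the mean effect.
res <- rma(yi = Fisher.s.Z, sei = Std.Err, data = agg)
tf  <- trimfill(res)
summary(tf)
funnel(tf)  # imputed studies appear as unfilled points

# PET-PEESE: regress effect sizes on the standard error (PET) or its
# square (PEESE); the intercept estimates the effect of a hypothetical
# study with zero sampling error.
pet   <- rma(yi = Fisher.s.Z, sei = Std.Err, mods = ~ Std.Err,      data = agg, method = "FE")
peese <- rma(yi = Fisher.s.Z, sei = Std.Err, mods = ~ I(Std.Err^2), data = agg, method = "FE")
summary(pet)    # intercept = PET estimate
summary(peese)  # intercept = PEESE estimate
```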