
Category: Project

Description: Forbes, Wright, Markon, and Krueger claim that psychopathology networks have "limited" or "poor" replicability, supporting their argument primarily with data from two waves of an observational study on depression and anxiety. They developed "direct metrics" to gauge change across networks (e.g., change in edge sign) and used the results to support this conclusion. Three key flaws undermine their critique. First, nonreplication across empirical datasets does not provide evidence against a method; such evaluations of a method are possible only in controlled simulations where the data-generating model is known. Second, they assert that removing shared variance necessarily decreases reliability. This is not true: depending on the causal model, it can either increase or decrease reliability. Third, their direct metrics do not account for sampling variability, leaving open the possibility that the observed differences between samples reflect ordinary, unproblematic fluctuations. As an alternative to their direct metrics, we provide a Bayesian re-analysis that quantifies uncertainty and compares the relative evidence for replication (i.e., equivalence) versus nonreplication (i.e., nonequivalence) for each network edge. This approach provides a principled roadmap for future assessments of network replicability. Our analysis indicated substantial evidence for replication and scant evidence for nonreplication.
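The description does not spell out the model behind the per-edge comparison, but the general idea, weighing posterior against prior mass inside an equivalence region for the cross-sample difference in an edge weight, can be sketched as below. This is a minimal illustration, not the authors' implementation: the function name edge_equivalence_bf, the equivalence-region half-width of 0.1, the normal prior with SD 0.5, and the 1/(n-3) standard-error approximation (which ignores the variables conditioned on in a partial-correlation network) are all assumptions made for the example.

```python
import numpy as np
from scipy import stats

def edge_equivalence_bf(r1, n1, r2, n2, rope=0.1, prior_sd=0.5):
    """Region-based Bayes factor for one network edge across two samples.

    Compares H_equiv: |delta| < rope against H_nonequiv: |delta| >= rope,
    where delta is the difference of Fisher-z-transformed edge weights.
    The rope, prior_sd, and SE approximation are illustrative choices,
    not those of the original analysis.
    """
    # Fisher z-transform; SE uses the zero-order 1/(n-3) approximation.
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    d = z1 - z2
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))

    # Conjugate update: delta ~ N(0, prior_sd^2), d | delta ~ N(delta, se^2).
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * d / se**2
    post_sd = np.sqrt(post_var)

    # Posterior and prior mass inside the equivalence region.
    post_in = (stats.norm.cdf(rope, post_mean, post_sd)
               - stats.norm.cdf(-rope, post_mean, post_sd))
    prior_in = (stats.norm.cdf(rope, 0.0, prior_sd)
                - stats.norm.cdf(-rope, 0.0, prior_sd))

    # Bayes factor = posterior odds / prior odds of the equivalence region.
    return (post_in / (1.0 - post_in)) / (prior_in / (1.0 - prior_in))

# Example: the same edge estimated in two waves (hypothetical values).
bf = edge_equivalence_bf(r1=0.25, n1=400, r2=0.22, n2=400)
print(f"BF (equivalence vs. nonequivalence): {bf:.2f}")
```

On these illustrative numbers the Bayes factor favors equivalence; values well below 1 would instead favor nonequivalence, and values near 1 indicate the data are uninformative, which is the distinction the description draws between evidence for nonreplication and mere absence of evidence.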

