### Abstract
The growing concern that most published results, including those widely agreed upon, may be false has rarely been examined against the rapidly expanding biomedical literature. Exact replications occur only at small scale owing to prohibitive expense and limited professional incentive. We introduce a novel, high-throughput replication approach that aligns 51,292 published claims about drug-gene interactions with high-throughput experiments performed through the NIH LINCS L1000 program. We propose that the likelihood that a published claim will replicate in future experiments depends in part on how scientific communities are networked in an increasingly collaborative system of science. We show (1) that claims reported in a single paper replicate 19.0% (95% confidence interval [CI], 16.9% to 21.2%) more frequently than expected, while those reported in multiple papers and widely agreed upon replicate 45.5% (95% CI, 21.8% to 74.2%) more frequently, manifesting collective correction in science. Nevertheless (2), among the 2,493 claims reported in multiple papers, centralized scientific communities perpetuate claims that are less likely to replicate, even when those claims are widely agreed upon in the literature and irrespective of heterogeneity in the high-throughput experiments, demonstrating how centralized, overlapping collaborations weaken collective inquiry. Decentralized research communities involve more independent teams and use more diverse methodologies, generating the most robust, replicable results. Our findings highlight the importance of science policies that foster decentralized collaboration to promote robust scientific advance. Our large-scale approach holds promise for simultaneously evaluating the robustness and replicability of numerous published claims.
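The headline figures above are relative replication rates with 95% confidence intervals. As a rough illustration of how such an estimate might be obtained, the sketch below computes the relative increase of an observed replication rate over an expected baseline and bootstraps a 95% CI. All data, the baseline rate, and the estimator here are synthetic assumptions for illustration only, not the study's actual method or values.

```python
import random

random.seed(0)

# Hypothetical outcomes: 1 = claim replicated in the high-throughput
# experiment, 0 = it did not. Synthetic data, not LINCS L1000 results.
observed = [1] * 300 + [0] * 700   # 1,000 hypothetical claims
baseline_rate = 0.25               # assumed expected replication rate

def relative_increase(sample):
    """Relative increase of the sample's replication rate over baseline."""
    rate = sum(sample) / len(sample)
    return (rate - baseline_rate) / baseline_rate

# Nonparametric bootstrap: resample claims with replacement and
# recompute the statistic to approximate its sampling distribution.
boots = []
for _ in range(2000):
    resample = [random.choice(observed) for _ in range(len(observed))]
    boots.append(relative_increase(resample))
boots.sort()

point = relative_increase(observed)
lo = boots[int(0.025 * len(boots))]
hi = boots[int(0.975 * len(boots))]
print(f"relative increase: {point:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

With this synthetic sample the point estimate is a 20% relative increase; the percentile bootstrap interval brackets it, mirroring the "X% more frequently than expected (95% CI ...)" form of the abstract's claims.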