Open Science Literature

<p><strong>Open Science in the Literature</strong></p> <hr> <p><em>These Zotero libraries contain a growing list of empirical and theoretical papers about open science.</em></p> <ul> <li><a href="https://www.zotero.org/groups/osf/items/collectionKey/6NTIIMHN" rel="nofollow">Open Science in the Scientific Literature</a></li> <li><a href="https://www.zotero.org/groups/osf/items/collectionKey/QK9TP9B9" rel="nofollow">Open Science: News and Editorials</a></li> </ul> <p>Another great collection of items is in this <a href="https://reproducibility.dash.umn.edu/" rel="nofollow">Reproducibility Bibliography</a>.</p> <p><em>Below is an annotated selection of papers related to the need for more transparent research and the effectiveness of such research practices. To add to this collection, please email David Mellor at david@cos.io</em></p> <p>@[toc]</p> <hr> <h2>Benefits of Transparency</h2> <p>Transparent and reproducible research practices help the researcher better organize their work and become more efficient, increase the impact and citation rate of their work, and of course help the scientific community more quickly build upon preliminary discoveries.</p> <h3>Citation Advantages</h3> <ul> <li><a href="https://arxiv.org/pdf/1907.02565.pdf" rel="nofollow">The citation advantage of linking publications to research data</a></li> <li>Articles in which data were made available in a repository showed a clear citation advantage of up to 25%.</li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0000308" rel="nofollow">Sharing Detailed Research Data Is Associated with Increased Citation Rate</a></li> <li>"The 48% of trials with publicly available microarray data received 85% of the aggregate citations. Publicly available data was significantly (p = 0.006) associated with a 69% increase in citations..."</li> <li><a href="https://hal-hprints.archives-ouvertes.fr/hprints-00714715" rel="nofollow">On the Citation Advantage of linking to data: Astrophysics</a></li> <li>"I find that the Citation Advantage presently (at the least since 2009) amounts to papers with links to data receiving on the average 50% more citations per paper per year, than the papers without links to data"</li> <li><a href="https://peerj.com/articles/175/" rel="nofollow">Data reuse and the open data citation advantage</a></li> <li>"...we found that studies that made data available in a public repository received 9% (95% confidence interval: 5% to 13%) more citations than similar studies for which the data was not made available."</li> <li><a href="https://arxiv.org/pdf/1111.3618.pdf" rel="nofollow">Linking to Data - Effect on Citation Rates in Astronomy</a></li> <li>"... articles with data links on average acquired 20% more citations (compared to articles without these links) over a period of 10 years."</li> <li><a href="https://jamanetwork.com/journals/jama/fullarticle/2670247" rel="nofollow">Altmetric Scores, Citations, and Publication of Studies Posted as Preprints</a></li> <li>"Articles with a preprint received higher Altmetric scores and more citations than articles without a preprint."</li> </ul> <h3>Community Benefits</h3> <ul> <li>"<a href="https://www.pewresearch.org/science/wp-content/uploads/sites/16/2019/08/PS_08.02.19_trust.in_.scientists_FULLREPORT_8.5.19.pdf" rel="nofollow">Trust and Mistrust in Americans’ Views of Scientific Experts</a>"</li> <li>Trust in research scientists has increased in recent years, and "...a majority of U.S. 
adults (57%) say they trust scientific research findings more if the researchers make their data publicly available. Another 34% say that makes no difference, and just 8% say they are less apt to trust research findings if the data is released publicly."</li> <li><a href="https://jamanetwork.com/journals/jamapediatrics/fullarticle/2511238" rel="nofollow">Real-Time Sharing of Zika Virus Data in an Interconnected World</a></li> <li>A case study demonstrating how real-time data sharing benefited researchers and possibly patients.</li> <li><a href="http://opendatahandbook.org/value-stories/en/open-sourcing-genomes/" rel="nofollow">A quick release of genome data from a deadly E. coli outbreak led to faster and better health benefits.</a></li> <li><a href="https://www.nature.com/news/a-long-journey-to-reproducible-results-1.22478" rel="nofollow">A long journey to reproducible results</a></li> <li>Replicating our work took four years and 100,000 worms but brought surprising discoveries, explain Gordon J. Lithgow, Monica Driscoll and Patrick Phillips.</li> <li><a href="https://www.ncbi.nlm.nih.gov/pubmed/28714729" rel="nofollow">Benefits of open and high-powered research outweigh costs.</a> (<a href="https://psyarxiv.com/fcxge" rel="nofollow">OA</a>)</li> </ul> <h3>Costs of Closed Practices</h3> <ul> <li><a href="https://physicstoday.scitation.org/do/10.1063/PT.6.1.20180822a/full/" rel="nofollow">The war over supercooled water</a></li> <li>How a hidden coding error fueled a seven-year dispute between two of condensed matter’s top theorists (which ended after the code became open).</li> <li><a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002165" rel="nofollow">The Economics of Reproducibility in Preclinical Research</a></li> <li>"An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28B/year spent on preclinical research that is not reproducible—in the United States alone."</li> </ul> <hr> <h2>Data sharing policies and practices</h2> <ul> <li><a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456" rel="nofollow">Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency</a></li> <li>Without imposing any mandates on authors, a journal was able to substantially increase the rate of data sharing by allowing authors the opportunity to signal these actions to their peers using <a href="http://cos.io/badges" rel="nofollow">Open Practice Badges</a>.</li> <li><a href="http://www.fasebj.org/content/27/4/1304.short" rel="nofollow">Mandated data archiving greatly improves access to research data</a> (<a href="https://arxiv.org/pdf/1301.3744.pdf" rel="nofollow">preprint</a>)</li> <li>Policies that merely encourage data archiving do not affect actions. Policies that mandate data archiving are associated with higher rates of such actions, especially when combined with data accessibility statements.</li> <li><a href="http://www.collabra.org/articles/10.1525/collabra.13/" rel="nofollow">Are We Wasting a Good Crisis? 
The Availability of Psychological Research Data after the Storm</a><ul> <li>Data sharing policies that require researchers to make data available only when requested are ineffective.</li> </ul> </li> <li><a href="http://www.pnas.org/content/early/2018/03/08/1708290115" rel="nofollow">An empirical analysis of journal policy effectiveness for computational reproducibility</a></li> <li>"We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%. We find this policy—author remission of data and code postpublication upon request—an improvement over no policy, but currently insufficient for reproducibility."</li> <li><a href="http://www.sciencedirect.com/science/article/pii/S1053811913001742" rel="nofollow">The ethics of secondary data analysis: Considering the application of Belmont principles to the sharing of neuroimaging data</a><ul> <li>Applicable to any non-clinical human subjects research. The authors lay out the Belmont principles of justice, respect for persons, and beneficence, translate those principles into responsible data sharing and privacy practices, and end with how the principles should guide data sharing decisions.</li> </ul> </li> <li><a href="https://osf.io/preprints/socarxiv/9h7ay" rel="nofollow">Data policies of highly-ranked social science journals.</a></li> <li>"We conclude that a little more than half of the journals in our study have data policies. A greater share of the economics journals have data policies and mandate sharing, followed by political science/international relations and psychology journals."</li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0194768" rel="nofollow">Data sharing in PLOS ONE: An analysis of Data Availability Statements</a></li> <li>The proportion of articles in PLOS ONE with data availability statements has increased. The proportion of articles that comply with the desired policy (data shared in a persistent repository) is relatively low but increasing.</li> <li><a href="https://osf.io/preprints/bitss/39cfb/" rel="nofollow">Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition.</a></li> <li>The authors found that a data sharing mandate was effective at increasing the rates of data availability, with some exceptions. Approximately two-thirds of the data could be used to computationally reproduce the reported findings, though that often required assistance from the original authors.</li> <li><a href="https://www.jclinepi.com/article/S0895-4356%2818%2930606-1/pdf" rel="nofollow">Authors of trials from high-ranking anesthesiology journals were not willing to share raw data</a></li> <li>"Among 619 randomized controlled trials published in seven high-impact anesthesiology journals, only 24 (4%) had data sharing statements in the manuscript. When asked to share de-identified raw data from their trial, authors of only 24 (4%) manuscripts shared data. Among 24 trials with data sharing statements in the manuscript, only one author actually shared raw data."</li> </ul> <hr> <h2>Reporting Standards, Guidelines, and Checklists</h2> <ul> <li><a href="https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/2046-4053-1-60" rel="nofollow">Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? 
A Cochrane review</a><ul> <li>"The results of this review suggest that journal endorsement of CONSORT may benefit the completeness of reporting of RCTs they publish."</li> </ul> </li> <li><a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2000598" rel="nofollow">Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor</a><ul> <li>Few published studies or applications for animal experiments report key details of the experimental protocols.</li> </ul> </li> <li><a href="https://www.biorxiv.org/content/early/2017/09/12/187245" rel="nofollow">Findings of a retrospective, controlled cohort study of the impact of a change in Nature journals' editorial policy for life sciences research on the completeness of reporting study design and execution.</a></li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0183591" rel="nofollow">A checklist is associated with increased quality of reporting preclinical biomedical research: A systematic review</a></li> <li><a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001756" rel="nofollow">Two Years Later: Journals Are Not Yet Enforcing the ARRIVE Guidelines on Reporting Standards for Pre-Clinical Animal Studies</a></li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0197882" rel="nofollow">ARRIVE has not ARRIVEd: Support for the ARRIVE (Animal Research: Reporting of in vivo Experiments) guidelines does not improve the reporting quality of papers in animal welfare, analgesia or anesthesia</a></li> <li><a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2813%2962228-X/fulltext" rel="nofollow">Reducing waste from incomplete or unusable reports of biomedical research</a></li> <li>"...inadequate reporting occurs in all types of studies—animal and other preclinical studies, diagnostic studies, epidemiological studies, clinical prediction research, surveys, and qualitative studies. In this report, and in the Series more generally, we point to a waste at all stages in medical research."</li> </ul> <hr> <h2>Publication Bias and Effects of Preregistration</h2> <ul> <li><a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00813/full" rel="nofollow">The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases</a></li> <li>"The median effect of studies published without pre-registration (i.e., potentially affected by those biases) of Mdn<sub>r</sub> = 0.36 stands in stark contrast to the median effect of studies published with pre-registration (i.e., very unlikely to be affected by the biases) of Mdn<sub>r</sub> = 0.16. Hence, if we consider the effect size estimates from replication studies or studies published with pre-registration to represent the true population effects we notice that, overall, the published effects are about twice as large."</li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0066844" rel="nofollow">Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review</a></li> <li>Research reporting statistically significant findings is more likely to be published than research reporting null findings. 
</li> <li><a href="http://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-016-0639-x" rel="nofollow">Association between trial registration and treatment effect estimates: a meta-epidemiological study</a></li> <li>"Lack of trial prospective registration may be associated with larger treatment effect estimates."</li> <li><a href="http://www.bmj.com/content/356/bmj.j917" rel="nofollow">Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials ESORT)</a></li> <li>"Among published RCTs, there was little evidence of a difference in positive study findings between registered and non-registered clinical trials, even with stratification by timing of registration."</li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0132382" rel="nofollow">Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time</a></li> <li>"The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by <a href="http://clinicaltrials.gov" rel="nofollow">clinicaltrials.gov</a>, may have contributed to the trend toward null findings."</li> <li><a href="https://journals.sagepub.com/doi/abs/10.1177/0149206314527133" rel="nofollow">The Chrysalis Effect: How Ugly Initial Results Metamorphosize Into Beautiful Articles</a></li> <li>"...from dissertation to journal article, the ratio of supported to unsupported hypotheses more than doubled (0.82 to 1.00 versus 1.94 to 1.00)."</li> <li><a href="https://www.sciencedirect.com/science/article/pii/S0895435617311381" rel="nofollow">Registered trials report less beneficial treatment effects than unregistered ones: a meta-epidemiological study in orthodontics</a></li> <li>"Signs of bias from lack of trial protocol registration were found with non-registered trials reporting more beneficial intervention effects than registered ones."</li> <li><a href="https://peerj.com/articles/6232/" rel="nofollow">Registered reports: an early example and analysis</a></li> <li>"Although [Registered Reports] is usually seen as a relatively recent development, we note that a prototype of this publishing model was initiated in the mid-1970s by parapsychologist Martin Johnson in the European Journal of Parapsychology (EJP). A retrospective and observational comparison of Registered and non-Registered Reports published in the EJP during a seventeen-year period provides circumstantial evidence to suggest that the approach helped to reduce questionable research practices."</li> <li><a href="http://science.sciencemag.org/content/345/6203/1502" rel="nofollow">Publication bias in the social sciences: Unlocking the file drawer</a> (<a href="http://www.law.nyu.edu/sites/default/files/upload_documents/September%209%20Neil%20Malhotra.pdf" rel="nofollow">preprint</a>)</li> <li>The authors find a strong bias toward statistically significant findings in reported outcomes, even within a body of work where methodology and rigor did not vary. 
</li> <li><a href="https://www.cambridge.org/core/journals/psychological-medicine/article/cumulative-effect-of-reporting-and-citation-biases-on-the-apparent-efficacy-of-treatments-the-case-of-depression/71D73CADE32C0D3D996DABEA3FCDBF57#fndtn-information" rel="nofollow">The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: the case of depression</a></li> <li>Access to unpublished results via the FDA reviews allowed the authors to estimate the size of publication bias in this field.</li> <li><a href="http://www.nature.com/tp/journal/v7/n9/full/tp2017203a.html" rel="nofollow">Outcome reporting bias in randomized-controlled trials investigating antipsychotic drugs</a><ul> <li>"Of the 48 RCTs [from <a href="http://ClinicalTrials.gov" rel="nofollow">ClinicalTrials.gov</a>], 85% did not fully adhere to the prespecified outcomes [in the published article]."</li> </ul> </li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0197440" rel="nofollow">P values in display items are ubiquitous and almost invariably significant: A survey of top science journals</a></li> <li>"...the rapid growth of reliance on P values and implausibly high rates of reported statistical significance are worrisome."</li> <li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5818784/" rel="nofollow">Association of Trial Registration With Reporting of Primary Outcomes in Protocols and Publications</a></li> <li>Discrepancies between the protocol and publication were more common in unregistered trials (6 of 11 trials [55%]) than registered trials (3 of 47 [6%]) (P &lt; .001). Only 1 published article acknowledged the changes to primary outcomes.</li> <li><a href="https://psyarxiv.com/3czyt/" rel="nofollow">Open Science challenges, benefits and tips in early career and beyond</a></li> <li>"We assessed the percentage of hypotheses that were not supported and compared it to percentages previously reported within the wider literature. 61% of the studies we surveyed did not support their hypothesis (<a href="https://osf.io/wy2ek/" rel="nofollow">https://osf.io/wy2ek/</a>)" See the Nature News article <a href="https://www.nature.com/articles/d41586-018-07118-1" rel="nofollow">here</a>.</li> </ul> <hr> <h2>Questionable research practices</h2> <ul> <li><a href="http://psr.sagepub.com/cgi/doi/10.1207/s15327957pspr0203_4" rel="nofollow">HARKing: Hypothesizing After the Results are Known</a><ul> <li>Using a dataset to generate and then test a hypothesis is circular reasoning that invalidates the test statistics.</li> </ul> </li> <li><a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124" rel="nofollow">Why Most Published Research Findings Are False</a><ul> <li>Small sample sizes, small effect sizes, and unreported data analysis flexibility invalidate most statistical tests.</li> </ul> </li> <li><a href="http://pss.sagepub.com/lookup/doi/10.1177/0956797611417632" rel="nofollow">False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant</a><ul> <li>The authors show how even absurd claims can be supported by presenting only a subset of analyses and provide six concrete solutions to this problem; a minimal simulation of how such undisclosed flexibility inflates false-positive rates appears at the end of this page. 
</li> </ul> </li> <li><a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2000995" rel="nofollow">Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions</a><ul> <li>Higginson and Munafò demonstrate how researchers are rewarded for underpowered research and how the valuation of different research methods can affect researchers' self-interested actions.</li> </ul> </li> <li><a href="http://dx.plos.org/10.1371/journal.pone.0200303" rel="nofollow">Questionable research practices in ecology and evolution</a></li> <li>"...we found 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p hacking) and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing)."</li> <li><a href="https://journals.sagepub.com/doi/abs/10.1177/0956797611430953" rel="nofollow">Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling</a></li> <li>"...we found that the percentage of respondents who have engaged in questionable practices was surprisingly high"</li> <li><a href="https://royalsocietypublishing.org/doi/10.1098/rsos.160384" rel="nofollow">The natural selection of bad science</a></li> <li>"The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. ... In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modelling."</li> </ul> <hr> <h2>The Reproducibility Crisis</h2> <ul> <li><strong>Many Labs 1</strong>: "<a href="https://econtent.hogrefe.com/doi/full/10.1027/1864-9335/a000178" rel="nofollow">Investigating Variation in Replicability: A “Many Labs” Replication Project</a>"</li> <li>"This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently. One effect – imagined contact reducing prejudice – showed weak support for replicability. And two effects – flag priming influencing conservatism and currency priming influencing system justification – did not replicate. We compared whether the conditions such as lab versus online or US versus international sample predicted effect magnitudes. By and large they did not. The results of this small sample of effects suggest that replicability is more dependent on the effect itself than on the sample and setting used to investigate the effect."</li> <li><strong>Many Labs 2</strong>: "<a href="https://psyarxiv.com/9654g" rel="nofollow">Investigating Variation in Replicability Across Sample and Setting</a>"</li> <li>"Cumulatively, variability in observed effect sizes was more attributable to the effect being studied than the sample or setting in which it was studied."</li> <li><strong>Many Labs 3</strong>: "<a href="https://www.sciencedirect.com/science/article/pii/S0022103115300123" rel="nofollow">Evaluating participant pool quality across the academic semester via replication</a>"</li> <li>"The university participant pool is a key resource for behavioral research, and data quality is believed to vary over the course of the academic semester. 
This crowdsourced project examined time of semester variation in 10 known effects, 10 individual differences, and 3 data quality indicators over the course of the academic semester in 20 participant pools (N = 2696) and with an online sample (N = 737)."</li> <li><a href="http://www.federalreserve.gov/econresdata/feds/2015/files/2015083pap.pdf" rel="nofollow">Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say "Usually Not"</a></li> <li>Using original data and code when available, the authors were able to computationally reproduce less than half of the original findings from their target sample of 67 studies.</li> <li><a href="http://science.sciencemag.org/content/351/6280/1433" rel="nofollow">Evaluating replicability of laboratory experiments in economics</a></li> <li>Of 18 experimental studies published in economics, 11 (61%) replicated the primary findings.</li> <li><a href="http://science.sciencemag.org/content/349/6251/aac4716" rel="nofollow">Estimating the reproducibility of psychological science</a> (<a href="https://osf.io/447b3/" rel="nofollow">preprint</a>)<ul> <li>The authors attempted to replicate 100 studies from the published literature using higher powered designs and original materials and were able to replicate fewer than 40 of the original findings.</li> </ul> </li> <li>The <a href="https://elifesciences.org/collections/reproducibility-project-cancer-biology" rel="nofollow">Reproducibility Project: Cancer Biology</a></li> <li><a href="https://www.nature.com/nature/journal/v483/n7391/full/483531a.html" rel="nofollow">Drug development: Raise standards for preclinical cancer research</a></li> <li>Commercial attempts to confirm 53 landmark, novel studies resulted in 6 (11%) confirmed research findings.</li> <li><a href="http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html" rel="nofollow">Believe it or not: how much can we rely on published data on potential drug targets?</a></li> <li>Of 67 target-validation projects in oncology and cardiovascular medicine conducted at Bayer, 14 projects (20%) showed results that matched the published findings, while results were highly inconsistent in 43.</li> <li><a href="http://www.nature.com/ng/journal/v41/n2/full/ng.295.html" rel="nofollow">Repeatability of published microarray gene expression analyses</a></li> <li>In this study, Ioannidis et al. attempted to repeat the analyses of 18 experiments using data from the original studies. The results of eight experiments were reproduced or partially reproduced.</li> <li><a href="https://www.nature.com/articles/s41562-018-0399-z" rel="nofollow">Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015</a></li> <li>"We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications... We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size." 
</li> <li><a href="https://link.springer.com/article/10.1007/s13164-018-0400-9" rel="nofollow">Estimating the Reproducibility of Experimental Philosophy</a></li> <li>"Drawing on a representative sample of 40 x-phi studies published between 2003 and 2015, we enlisted 20 research teams across 8 countries to conduct a high-quality replication of each study in order to compare the results to the original published findings. We found that x-phi studies – as represented in our sample – successfully replicated about 70% of the time."</li> <li><a href="https://peerj.com/articles/148/" rel="nofollow">On the reproducibility of science: unique identification of research resources in the biomedical literature</a></li> <li>50% of scientific resources used in previously published articles were unidentifiable.</li> <li><a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002165" rel="nofollow">The Economics of Reproducibility in Preclinical Research</a></li> <li>Approximately $28 billion annually is wastefully spent in the US alone on research that cannot be replicated.</li> <li><a href="https://peerj.com/articles/7654/" rel="nofollow">Rate and success of study replication in ecology and evolution</a><ul> <li>"[A]pproximately 0.023% of ecology and evolution studies are described by their authors as replications."</li> </ul> </li> </ul> <hr> <h2>Evaluating Journal Policies</h2> <ul> <li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5387793/" rel="nofollow">Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices</a></li> <li>"Thirty three journals [out of 1151] (3%) stated in their aims or instructions to authors that they accepted replications."</li> <li><a href="https://www.tandfonline.com/doi/abs/10.1080/08989621.2019.1591277?journalCode=gacr20" rel="nofollow">Effect of impact factor and discipline on journal data sharing policies</a></li> <li>"[We analyzed] the data sharing policies of 447 journals across several scientific disciplines, including biology, clinical sciences, mathematics, physics, and social sciences. 
Our results showed that only a small percentage of journals require data sharing as a condition of publication..."</li> <li><a href="https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2727849" rel="nofollow">Evaluation of Journal Registration Policies and Prospective Registration of Randomized Clinical Trials of Nonregulated Health Care Interventions</a></li> <li>"Few journals in behavioral sciences or psychology, nursing, nutrition and dietetics, rehabilitation, and surgery require prospective trial registration, and those with existing registration policies rarely enforce them; this finding suggests that strategies for encouraging prospective registration of clinical trials not subject to FDA regulation should be developed and tested."</li> </ul> <hr> <h2>Recommendations for Increasing Reproducibility</h2> <ul> <li><a href="http://journals.sagepub.com/doi/pdf/10.1177/2515245917747656" rel="nofollow">Practical Tips for Ethical Data Sharing</a> (<a href="http://louisville.edu/mobileelsi/wgm-2-thought-leader-input-and-regulatory-framework/wgm-2-background-materials/practical-tips-for-ethical-data-sharing/view" rel="nofollow">OA</a>)</li> <li>"This Tutorial provides practical dos and don’ts for sharing research data in ways that are effective, ethical, and compliant with the federal Common Rule."</li> <li><a href="https://www.collabra.org/articles/10.1525/collabra.158/" rel="nofollow">A Practical Guide for Transparency in Psychological Science</a></li> <li>"Here we provide a practical guide to help researchers navigate the process of preparing and sharing the products of their research (e.g., choosing a repository, preparing their research products for sharing, structuring folders, etc.)."</li> <li>"<a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005510" rel="nofollow">Good enough practices in scientific computing</a>"</li> <li>"This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices... 
encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts..."</li> <li><a href="http://onlinelibrary.wiley.com/doi/10.1111/brv.12315/full" rel="nofollow">Detecting and avoiding likely false-positive findings – a practical guide</a></li> <li><a href="http://www.nature.com/articles/s41562-016-0021" rel="nofollow">A manifesto for reproducible science</a></li> <li><a href="http://www.pnas.org/content/early/2018/03/08/1708274114" rel="nofollow">The Preregistration Revolution</a></li> <li><a href="https://journals.sagepub.com/doi/full/10.1177/1745691612463078" rel="nofollow">An Agenda for Purely Confirmatory Research</a></li> <li><a href="https://academic.oup.com/beheco/article-abstract/doi/10.1093/beheco/arx003/3069145/Striving-for-transparent-and-credible-research?redirectedFrom=fulltext" rel="nofollow">Striving for transparent and credible research: practical guidelines for behavioral ecologists</a></li> <li><a href="http://onlinelibrary.wiley.com/doi/10.1002/ejsp.2023/abstract" rel="nofollow">Performing high-powered studies efficiently with sequential analyses</a></li> <li>Sequential analyses give the researcher a tool to minimize sample size and "peek" at incoming results without invalidating the test statistics or increasing the false positive rate.</li> <li><a href="https://www.stat.berkeley.edu/~winston/sop-safety-net.pdf" rel="nofollow">Standard Operating Procedures: A Safety Net for Pre-Analysis Plans</a></li> <li>SOPs allow you to provide rationale for your decisions by citing a document that lives outside of your preregistration. This keeps preregistrations concise and serves as a lab notebook of "lessons learned" over many years.</li> <li><a href="https://psyarxiv.com/785qu/" rel="nofollow">The Psychological Science Accelerator: Advancing Psychology through a Distributed Collaborative Network</a></li> <li><a href="https://link.springer.com/article/10.1007/s10654-016-0149-3" rel="nofollow">Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations</a></li> <li><a href="http://psycnet.apa.org/record/2014-55649-001" rel="nofollow">Enhancing transparency of the research process to increase accuracy of findings: A guide for relationship researchers</a> (<a href="https://etiennelebel.com/documents/cl&l%282014,pr%29.pdf" rel="nofollow">OA</a>)</li> </ul> <h3>Training Resources</h3> <ul> <li><a href="https://cos.io/our-services/training-services/cos-training-tutorials/" rel="nofollow">COS webinars</a></li> <li><a href="https://datacarpentry.org/lessons/" rel="nofollow">Data Carpentry</a></li> <li><a href="https://www.coursera.org/learn/statistical-inferences" rel="nofollow">Improving your statistical inference</a> by Daniel Lakens</li> <li>NIH <a href="https://www.nigms.nih.gov/training/pages/clearinghouse-for-training-modules-to-enhance-data-reproducibility.aspx" rel="nofollow">Clearinghouse for Training Modules to Enhance Data Reproducibility</a></li> <li><a href="http://help.osf.io/m/bestpractices" rel="nofollow">Best practices in open science</a></li> </ul> <h3>Split samples, holdout data, or training and validation data sets</h3> <p><em>These papers describe splitting data into an exploratory set and a held-out confirmation set; a minimal code illustration of this idea appears at the end of this page.</em></p> <ul> <li>“<a href="http://www.nber.org/papers/w23544" rel="nofollow">Split-Sample Strategies for Avoiding False Discoveries</a>,” by Michael L. 
Anderson and Jeremy Magruder (<a href="https://are.berkeley.edu/~jmagruder/split-sample.pdf" rel="nofollow">ungated here</a>)</li> <li>“<a href="http://www.nber.org/papers/w21842" rel="nofollow">Using Split Samples to Improve Inference on Causal Effects</a>,” by Marcel Fafchamps and Julien Labonne (<a href="https://julienlabonne.files.wordpress.com/2017/06/sample_split_simulations_web.pdf" rel="nofollow">ungated and updated here</a>)</li> <li><a href="http://science.sciencemag.org/content/349/6248/636" rel="nofollow">The reusable holdout: Preserving validity in adaptive data analysis</a></li> </ul> <hr> <h2>Attitudes about open science</h2> <p>The following studies report on opinions and self-reported frequencies of various open science practices. All include links to the questionnaires and collected data.</p> <ul> <li><a href="https://www.jstor.org/stable/10.1525/jer.2007.2.4.3?seq=1#page_scan_tab_contents" rel="nofollow">Normative Dissonance in Science: Results from a National Survey of U.S. Scientists</a></li> <li>Norms of behavior in scientific research represent ideals to which most scientists subscribe. Our analysis of the extent of dissonance between these widely espoused ideals and scientists' perceptions of their own and others' behavior is based on survey responses from 3,247 [scientists]. We found substantial normative dissonance, particularly between espoused ideals and respondents' perceptions of other scientists' typical behavior. Also, respondents on average saw other scientists' behavior as more counternormative than normative. ... The high levels of normative dissonance documented here represent a persistent source of stress in science.</li> <li><a href="http://journals.sagepub.com/doi/abs/10.1177/2515245918757427" rel="nofollow">Why Do Some Psychology Researchers Resist Adopting Proposed Reforms to Research Practices? A Description of Researchers’ Rationales</a></li> <li>Our results suggest that (a) researchers have adopted some of the proposed reforms (e.g., reporting effect sizes) more than others (e.g., preregistering studies) and (b) rationales for not adopting them reflect a need for more discussion and education about their utility and feasibility.</li> <li><a href="http://psycnet.apa.org/fulltext/2017-18565-001.html" rel="nofollow">The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse?</a> </li> <li><a href="https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970" rel="nofollow">1,500 scientists lift the lid on reproducibility</a> | <a href="https://figshare.com/articles/Nature_Reproducibility_survey/3394951/1" rel="nofollow">Questionnaire and Data</a></li> <li>90% of scientists feel that there is a significant or slight reproducibility crisis. 
3% feel that there is no crisis.</li> <li><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0189311" rel="nofollow">Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers</a> | <a href="https://zenodo.org/record/439531" rel="nofollow">Materials and data</a></li> <li><a href="https://figshare.com/articles/The_State_of_Open_Data_Report/4036398" rel="nofollow">The State of Open Data</a> | <a href="https://figshare.com/articles/Open_Data_Survey/4010541" rel="nofollow">Data and Survey</a></li> <li><a href="https://www.elsevier.com/__data/assets/pdf_file/0004/281920/Open-data-report.pdf" rel="nofollow">Open Data: The Researcher Perspective</a> | <a href="https://data.mendeley.com/datasets/bwrnfb4bvh/1" rel="nofollow">Data and Survey</a></li> </ul>
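<hr> <h2>Illustrative Code Sketches</h2>
<p><em>The two sketches below are editorial illustrations, not code from any of the papers above; all variable names, sample sizes, and parameters are arbitrary assumptions.</em></p>
<p>The first sketch relates to the False-Positive Psychology entry under Questionable research practices: when no true effect exists, analyzing several outcome measures and reporting whichever comparison happens to reach p &lt; .05 pushes the false-positive rate well above the nominal 5% (with four independent outcomes, roughly 1 - 0.95<sup>4</sup> ≈ 19%).</p>
<pre><code># Illustrative simulation (not from the cited papers): undisclosed flexibility
# in which outcome gets reported inflates the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_simulations = 5000
n_per_group = 20
n_outcomes = 4          # several outcome measures, none with a true effect
alpha = 0.05

false_pos_fixed = 0     # report the one pre-specified outcome
false_pos_flexible = 0  # report whichever outcome gives the smallest p value

for _ in range(n_simulations):
    # The null hypothesis is true: both groups come from the same distribution.
    group_a = rng.normal(size=(n_per_group, n_outcomes))
    group_b = rng.normal(size=(n_per_group, n_outcomes))
    p_values = [stats.ttest_ind(group_a[:, j], group_b[:, j]).pvalue
                for j in range(n_outcomes)]
    false_pos_fixed += p_values[0] &lt; alpha
    false_pos_flexible += min(p_values) &lt; alpha

print("one pre-specified outcome:", false_pos_fixed / n_simulations)     # about 0.05
print("best of several outcomes :", false_pos_flexible / n_simulations)  # well above 0.05
</code></pre>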
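<p>The second sketch relates to the split-sample and holdout entries above: explore freely on one random half of the data, then run only the single surviving, pre-specified test on the held-out half. The toy data frame, column names, and confirmatory t test are assumptions made for illustration.</p>
<pre><code># Illustrative split-sample sketch (not from the cited papers): explore on one
# random half of the data, confirm with a single pre-specified test on the rest.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=7)

# Toy data standing in for a real study: a binary treatment and two outcomes.
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=400),
    "outcome_1": rng.normal(size=400),
    "outcome_2": rng.normal(size=400),
})

# Random split into an exploration half and a held-out confirmation half.
shuffled = df.sample(frac=1.0, random_state=7).reset_index(drop=True)
explore, confirm = shuffled.iloc[:200], shuffled.iloc[200:]

# Exploration half: look at anything; here, pick the outcome with the largest
# treated-versus-control mean difference.
def mean_difference(data, column):
    return abs(data.loc[data.treated == 1, column].mean()
               - data.loc[data.treated == 0, column].mean())

candidate = max(["outcome_1", "outcome_2"], key=lambda c: mean_difference(explore, c))

# Confirmation half: run only the one test chosen before seeing these rows,
# so the reported p value is not distorted by the exploratory search above.
result = stats.ttest_ind(confirm.loc[confirm.treated == 1, candidate],
                         confirm.loc[confirm.treated == 0, candidate])
print(candidate, round(result.pvalue, 3))
</code></pre>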