Open Science Literature


<p><strong>Open Science in the Literature</strong></p> <hr> <p><em>These Zotero libraries contain a growing list of empirical and theoretical papers about open science.</em></p> <ul> <li><a href="" rel="nofollow">Open Science in the Scientific Literature</a></li> <li><a href="" rel="nofollow">Open Science: News and Editorials</a></li> </ul> <p>Another great collection of items is in this <a href="" rel="nofollow">Reproducibility Bibliography</a>. </p> <p><em>Below is an annotated selection of papers related to the need for more transparent research and the effectiveness of such research practices. To add to this collection, please email David Mellor.</em> </p> <p>@[toc]</p> <hr> <h2>Benefits of Transparency</h2> <p>Transparent and reproducible research practices help researchers organize their work and become more efficient, increase the impact and citation rate of their work, and help the scientific community build more quickly upon preliminary discoveries.</p> <h3>Citation Advantages</h3> <ul> <li><a href="" rel="nofollow">The citation advantage of linking publications to research data</a></li> <li>Articles in which data were made available in a repository showed a clear citation advantage of up to 25%.</li> <li><a href="" rel="nofollow">Sharing Detailed Research Data Is Associated with Increased Citation Rate</a></li> <li>"The 48% of trials with publicly available microarray data received 85% of the aggregate citations.
Publicly available data was significantly (p = 0.006) associated with a 69% increase in citations..."</li> <li><a href="" rel="nofollow">On the Citation Advantage of linking to data: Astrophysics</a> </li> <li>"I find that the Citation Advantage presently (at the least since 2009) amounts to papers with links to data receiving on the average 50% more citations per paper per year, than the papers without links to data"</li> <li><a href="" rel="nofollow">Data reuse and the open data citation advantage</a></li> <li>"...we found that studies that made data available in a public repository received 9% (95% confidence interval: 5% to 13%) more citations than similar studies for which the data was not made available."</li> <li><a href="" rel="nofollow">Linking to Data - Effect on Citation Rates in Astronomy</a></li> <li>"... articles with data links on average acquired 20% more citations (compared to articles without these links) over a period of 10 years."</li> <li><a href="" rel="nofollow">Altmetric Scores, Citations, and Publication of Studies Posted as Preprints</a></li> <li>"Articles with a preprint received higher Altmetric scores and more citations than articles without a preprint."</li> </ul> <h3>Community Benefits</h3> <ul> <li>"<a href="" rel="nofollow">Trust and Mistrust in Americans’ Views of Scientific Experts</a>"</li> <li>Trust in research scientists has increased in recent years, and "...a majority of U.S. adults (57%) say they trust scientific research findings more if the researchers make their data publicly available. Another 34% say that makes no difference, and just 8% say they are less apt to trust research findings if the data is released publicly."</li> <li><a href="" rel="nofollow">Real-Time Sharing of Zika Virus Data in an Interconnected World</a></li> <li>A case study demonstrating how real-time data sharing benefited researchers and possibly patients.</li> <li><a href="" rel="nofollow">A quick release of genome data from a deadly E.
coli outbreak led to faster and better health benefits.</a> </li> <li><a href="" rel="nofollow">A long journey to reproducible results</a></li> <li>Replicating our work took four years and 100,000 worms but brought surprising discoveries, explain Gordon J. Lithgow, Monica Driscoll and Patrick Phillips.</li> <li><a href="" rel="nofollow">Benefits of open and high-powered research outweigh costs.</a> (<a href="" rel="nofollow">OA</a>)</li> </ul> <h3>Costs of Closed Practices</h3> <ul> <li><a href="" rel="nofollow">The war over supercooled water</a></li> <li>How a hidden coding error fueled a seven-year dispute between two of condensed matter’s top theorists (which ended after the code became open). </li> <li><a href="" rel="nofollow">The Economics of Reproducibility in Preclinical Research</a></li> <li>"An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28B/year spent on preclinical research that is not reproducible—in the United States alone."</li> </ul> <hr> <h2>Data sharing policies and practices</h2> <ul> <li><a href="" rel="nofollow">Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency</a></li> <li>Without imposing any mandates on authors, a journal was able to substantially increase the rate of data sharing by allowing them the opportunity to signal these actions to their peers by using <a href="" rel="nofollow">Open Practice Badges</a>. </li> <li><a href="" rel="nofollow">Mandated data archiving greatly improves access to research data</a> (<a href="" rel="nofollow">preprint</a>)</li> <li>Policies that merely encourage data archiving do not affect actions. Policies that mandate data archiving are associated with higher rates of such actions, especially when combined with data accessibility statements. </li> <li><a href="" rel="nofollow">Are We Wasting a Good Crisis?
The Availability of Psychological Research Data after the Storm</a><ul> <li>Data sharing policies that require researchers to make data available only when requested are ineffective.</li> </ul> </li> <li><a href="" rel="nofollow">An empirical analysis of journal policy effectiveness for computational reproducibility</a></li> <li>"We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%. We find this policy—author remission of data and code postpublication upon request—an improvement over no policy, but currently insufficient for reproducibility."</li> <li><a href="" rel="nofollow">The ethics of secondary data analysis: Considering the application of Belmont principles to the sharing of neuroimaging data</a><ul> <li>Applicable to any non-clinical human subjects research. The authors lay out the Belmont principles of justice, respect for persons, and beneficence, then apply those principles to responsible data sharing and privacy practices, and conclude with how those principles should inform data sharing decisions. </li> </ul> </li> <li><a href="" rel="nofollow">Data policies of highly-ranked social science journals.</a></li> <li>"We conclude that a little more than half of the journals in our study have data policies. A greater share of the economics journals have data policies and mandate sharing, followed by political science/international relations and psychology journals." </li> <li><a href="" rel="nofollow">Data sharing in PLOS ONE: An analysis of Data Availability Statements</a></li> <li>The proportion of articles in PLOS ONE with data availability statements has increased. The proportion of articles that comply with the desired policy (data shared in a persistent repository) is relatively low but increasing.
</li> <li><a href="" rel="nofollow">Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition.</a></li> <li>The authors found that a data sharing mandate was effective at increasing the rates of data availability, with some exceptions. Approximately two-thirds of the data could be used to computational replicate the reported findings, though that often required assistance from the original authors. </li> <li><a href="" rel="nofollow">Authors of trials from high-ranking anesthesiology journals were not willing to share raw data</a></li> <li>"Among 619 randomized controlled trials published in seven high-impact anesthesiology journals, only 24 (4%) had data sharing statements in the manuscript. When asked to share de-identified raw data from their trial, authors of only 24 (4%) manuscript shared data. Among 24 trials with data sharing statements in the manuscript, only one author actually shared raw data."</li> </ul> <hr> <h2>Reporting Standards, Guidelines, and Checklists</h2> <ul> <li><a href="" rel="nofollow">Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review</a><ul> <li>"The results of this review suggest that journal endorsement of CONSORT may benefit the completeness of reporting of RCTs they publish."</li> </ul> </li> <li><a href="" rel="nofollow">Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor</a><ul> <li>Few published studies or applications for animal experiments report key details of the experimental protocols. 
</li> </ul> </li> <li><a href="" rel="nofollow">Findings of a retrospective, controlled cohort study of the impact of a change in Nature journals' editorial policy for life sciences research on the completeness of reporting study design and execution.</a> </li> <li><a href="" rel="nofollow">A checklist is associated with increased quality of reporting preclinical biomedical research: A systematic review</a></li> <li><a href="" rel="nofollow">Two Years Later: Journals Are Not Yet Enforcing the ARRIVE Guidelines on Reporting Standards for Pre-Clinical Animal Studies</a></li> <li><a href="" rel="nofollow">ARRIVE has not ARRIVEd: Support for the ARRIVE (Animal Research: Reporting of in vivo Experiments) guidelines does not improve the reporting quality of papers in animal welfare, analgesia or anesthesia</a></li> <li><a href="" rel="nofollow">Reducing waste from incomplete or unusable reports of biomedical research</a></li> <li>"..inadequate reporting occurs in all types of studies—animal and other preclinical studies, diagnostic studies, epidemiological studies, clinical prediction research, surveys, and qualitative studies. In this report, and in the Series more generally, we point to a waste at all stages in medical research."</li> </ul> <hr> <h2>Publication Bias and Effects of Preregistration</h2> <ul> <li><a href="" rel="nofollow">The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases</a></li> <li>"The median effect of studies published without pre-registration (i.e., potentially affected by those biases) of Mdnr = 0.36 stands in stark contrast to the median effect of studies published with pre-registration (i.e., very unlikely to be affected by the biases) of Mdnr = 0.16. 
Hence, if we consider the effect size estimates from replication studies or studies published with pre-registration to represent the true population effects, we notice that, overall, the published effects are about twice as large."</li> <li><a href="" rel="nofollow">Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review</a></li> <li>Research reporting statistically significant findings is more likely to be published than research reporting null findings. </li> <li><a href="" rel="nofollow">Association between trial registration and treatment effect estimates: a meta-epidemiological study</a></li> <li>"Lack of trial prospective registration may be associated with larger treatment effect estimates."</li> <li><a href="" rel="nofollow">Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials ESORT)</a></li> <li>"Among published RCTs, there was little evidence of a difference in positive study findings between registered and non-registered clinical trials, even with stratification by timing of registration."</li> <li><a href="" rel="nofollow">Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time</a></li> <li>"The number of NHLBI trials reporting positive results declined after the year 2000.
Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by <a href="" rel="nofollow"></a>, may have contributed to the trend toward null findings."</li> <li><a href="" rel="nofollow">The Chrysalis Effect: How Ugly Initial Results Metamorphosize Into Beautiful Articles</a> </li> <li>"...from dissertation to journal article, the ratio of supported to unsupported hypotheses more than doubled (0.82 to 1.00 versus 1.94 to 1.00)."</li> <li><a href="" rel="nofollow">Registered trials report less beneficial treatment effects than unregistered ones: a meta-epidemiological study in orthodontics</a></li> <li>"Signs of bias from lack of trial protocol registration were found with non-registered trials reporting more beneficial intervention effects than registered ones."</li> <li><a href="" rel="nofollow">Registered reports: an early example and analysis</a></li> <li>"Although [Registered Reports] is usually seen as a relatively recent development, we note that a prototype of this publishing model was initiated in the mid-1970s by parapsychologist Martin Johnson in the European Journal of Parapsychology (EJP). A retrospective and observational comparison of Registered and non-Registered Reports published in the EJP during a seventeen-year period provides circumstantial evidence to suggest that the approach helped to reduce questionable research practices." </li> <li><a href="" rel="nofollow">Publication bias in the social sciences: Unlocking the file drawer</a> (<a href="" rel="nofollow">preprint</a>)</li> <li>The authors find a strong bias toward statistically significant findings in reported outcomes, even within a body of work where methodology and rigor did not vary. 
</li> <li><a href="" rel="nofollow">The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: the case of depression</a></li> <li>Access to unpublished results via the FDA reviews allowed the authors to discover size of publication bias within this field. </li> <li><a href="" rel="nofollow">Outcome reporting bias in randomized-controlled trials investigating antipsychotic drugs</a><ul> <li>"Of the 48 RCTs [from <a href="" rel="nofollow"></a>], 85% did not fully adhere to the prespecified outcomes [in the published article]." </li> </ul> </li> <li><a href="" rel="nofollow">P values in display items are ubiquitous and almost invariably significant: A survey of top science journals</a></li> <li>"...the rapid growth of reliance on P values and implausibly high rates of reported statistical significance are worrisome."</li> <li><a href="" rel="nofollow">Association of Trial Registration With Reporting of Primary Outcomes in Protocols and Publications</a></li> <li>Discrepancies between the protocol and publication were more common in unregistered trials (6 of 11 trials [55%]) than registered trials (3 of 47 [6%]) (P &lt; .001). Only 1 published article acknowledged the changes to primary outcomes.</li> <li><a href="" rel="nofollow">Open Science challenges, benefits and tips in early career and beyond</a></li> <li>"We assessed the percentage of hypotheses that were not supportedand compared it to percentages previously reported within the wider literature. 61% of the studies we surveyed did not support their hypothesis (<a href="" rel="nofollow"></a>)" See Nature News article <a href="" rel="nofollow">here</a>.</li> </ul> <hr> <h2>Questionable research practices</h2> <ul> <li><a href="" rel="nofollow">HARKing: Hypothesizing After the Results are Known</a><ul> <li>Using a dataset to generate and then test a hypothesis is circular reasoning that invalidates the test statistics. 
</li> </ul> </li> <li><a href="" rel="nofollow">Why Most Published Research Is False</a><ul> <li>Small sample sizes, effect sizes, and unreported data analysis flexibility invalidate most statistical tests. </li> </ul> </li> <li><a href="" rel="nofollow">False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant</a><ul> <li>The authors show how even absurd claims can be supported by presenting only a subset of analyses and provide six concrete solutions to this problem. </li> </ul> </li> <li><a href="" rel="nofollow">Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions</a><ul> <li>Higginson and Munafo demonstrate how researchers are rewarded for underpowered research and how valuation of different research methods can affect researchers' self-interested actions. </li> </ul> </li> <li><a href="" rel="nofollow">Questionable research practices in ecology and evolution</a></li> <li>"...we found 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p hacking) and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing)."</li> <li><a href="" rel="nofollow">Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling</a></li> <li>"...we found that the percentage of respondents who have engaged in questionable practices was surprisingly high" </li> <li><a href="" rel="nofollow">The natural selection of bad science</a></li> <li>"The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. ... In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. 
We support this argument with empirical evidence and computational modelling."</li> </ul> <hr> <h2>The Reproducibility Crisis</h2> <ul> <li><strong>Many Labs 1</strong> "<a href="" rel="nofollow">Investigating Variation in Replicability</a>" A “Many Labs” Replication Project</li> <li>"This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently. One effect – imagined contact reducing prejudice – showed weak support for replicability. And two effects – flag priming influencing conservatism and currency priming influencing system justification – did not replicate. We compared whether the conditions such as lab versus online or US versus international sample predicted effect magnitudes. By and large they did not. The results of this small sample of effects suggest that replicability is more dependent on the effect itself than on the sample and setting used to investigate the effect."</li> <li>"<strong>Many Labs 2</strong>: <a href="" rel="nofollow">Investigating Variation in Replicability Across Sample and Setting</a>"</li> <li>"Cumulatively, variability in observed effect sizes was more attributable to the effect being studied than the sample or setting in which it was studied."</li> <li>"<strong>Many Labs 3</strong>: <a href="" rel="nofollow">Evaluating participant pool quality across the academic semester via replication</a>" </li> <li>"The university participant pool is a key resource for behavioral research, and data quality is believed to vary over the course of the academic semester. This crowdsourced project examined time of semester variation in 10 known effects, 10 individual differences, and 3 data quality indicators over the course of the academic semester in 20 participant pools (N = 2696) and with an online sample (N = 737)."</li> <li><a href="" rel="nofollow">Is Economics Research Replicable?
Sixty Published Papers from Thirteen Journals Say "Usually Not"</a></li> <li>Using original data and code when available, the authors were able to computationally reproduce less than half of the original findings from their target sample of 67 studies. </li> <li><a href="" rel="nofollow">Evaluating replicability of laboratory experiments in economics</a> </li> <li>Of 18 experimental studies published in economics, 11 (61%) replicated primary findings. </li> <li><a href="" rel="nofollow">Estimating the reproducibility of psychological science</a> (<a href="" rel="nofollow">preprint</a>)<ul> <li>The authors attempted to replicate 100 studies from the published literature using higher powered designs and original materials and were able to replicate fewer than 40 of the original findings. </li> </ul> </li> <li>The <a href="" rel="nofollow">Reproducibility Project: Cancer Biology</a></li> <li><a href="" rel="nofollow">Drug development: Raise standards for preclinical cancer research</a></li> <li>Commercial attempts to confirm 53 landmark, novel studies resulted in 6 (11%) confirmed research findings. </li> <li><a href="" rel="nofollow">Believe it or not: how much can we rely on published data on potential drug targets?</a></li> <li>Of 67 target-validation projects in oncology and cardiovascular medicine conducted at Bayer, 14 projects (20%) showed results that matched the published findings, while results in 43 projects were highly inconsistent. </li> <li><a href="" rel="nofollow">Repeatability of published microarray gene expression analyses</a> </li> <li>In this study, Ioannidis et al. attempted to repeat the analyses of 18 experiments using data from the original studies. The results of eight experiments were reproduced or partially reproduced.
</li> <li><a href="" rel="nofollow">Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015</a></li> <li>"We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications... We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size." </li> <li><a href="" rel="nofollow">Estimating the Reproducibility of Experimental Philosophy</a></li> <li>"Drawing on a representative sample of 40 x-phi studies published between 2003 and 2015, we enlisted 20 research teams across 8 countries to conduct a high-quality replication of each study in order to compare the results to the original published findings. We found that x-phi studies – as represented in our sample – successfully replicated about 70% of the time. "</li> <li>"On the reproducibility of science: unique identification of research resources in the biomedical literature"</li> <li><a href="" rel="nofollow">50% of scientific resources used in previously published articles were unidentifiable</a></li> <li>"The Economics of Reproducibility in Preclinical Research"</li> <li><a href="" rel="nofollow">$28 billion annually in the US alone wasteful spent on research that cannot be replicated.</a> </li> <li><a href="" rel="nofollow">Rate and success of study replication in ecology and evolution</a><ul> <li>"[A]pproximately 0.023% of ecology and evolution studies are described by their authors as replications."</li> </ul> </li> </ul> <hr> <h2>Evaluating Journals Policies</h2> <ul> <li><a href="" rel="nofollow">Are Psychology Journals Anti-replication? 
A Snapshot of Editorial Practices</a></li> <li>"Thirty-three journals [out of 1151] (3%) stated in their aims or instructions to authors that they accepted replications."</li> <li><a href="" rel="nofollow">Effect of impact factor and discipline on journal data sharing policies</a></li> <li>"[We analyzed] the data sharing policies of 447 journals across several scientific disciplines, including biology, clinical sciences, mathematics, physics, and social sciences. Our results showed that only a small percentage of journals require data sharing as a condition of publication..."</li> <li><a href="" rel="nofollow">Evaluation of Journal Registration Policies and Prospective Registration of Randomized Clinical Trials of Nonregulated Health Care Interventions</a></li> <li>"Few journals in behavioral sciences or psychology, nursing, nutrition and dietetics, rehabilitation, and surgery require prospective trial registration, and those with existing registration policies rarely enforce them; this finding suggests that strategies for encouraging prospective registration of clinical trials not subject to FDA regulation should be developed and tested."</li> </ul> <hr> <h2>Recommendations for Increasing Reproducibility</h2> <ul> <li><a href="" rel="nofollow">Practical Tips for Ethical Data Sharing</a> (<a href="" rel="nofollow">OA</a>)</li> <li>"This Tutorial provides practical dos and don’ts for sharing research data in ways that are effective, ethical, and compliant with the federal Common Rule."</li> <li><a href="" rel="nofollow">A Practical Guide for Transparency in Psychological Science</a></li> <li>"Here we provide a practical guide to help researchers navigate the process of preparing and sharing the products of their research (e.g., choosing a repository, preparing their research products for sharing, structuring folders, etc.)."</li> <li>"<a href="" rel="nofollow">Good enough practices in scientific computing</a>"</li> <li>"This paper presents a set of good computing
practices that every researcher can adopt, regardless of their current level of computational skill. These practices... encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts..."</li> <li><a href="" rel="nofollow">Detecting and avoiding likely false-positive findings – a practical guide</a></li> <li><a href="" rel="nofollow">A manifesto for reproducible science</a></li> <li><a href="" rel="nofollow">The Preregistration Revolution</a></li> <li><a href="" rel="nofollow">An Agenda for Purely Confirmatory Research</a></li> <li><a href="" rel="nofollow">Striving for transparent and credible research: practical guidelines for behavioral ecologists</a></li> <li><a href="" rel="nofollow">Performing high-powered studies efficiently with sequential analyses</a></li> <li>Sequential analyses give the researcher a tool to minimize sample size and "peek" at incoming results without invalidating the test statistics or increasing the false positive rate. </li> <li><a href="" rel="nofollow">Standard Operating Procedures: A Safety Net for Pre-Analysis Plans</a> </li> <li>SOPs allow you to provide rationale for your decisions by citing a document that lives outside of your preregistration. This keeps preregistrations concise and serves as a lab notebook of "lessons learned" over many years.
</li> <li><a href="" rel="nofollow">The Psychological Science Accelerator: Advancing Psychology through a Distributed Collaborative Network</a></li> <li><a href="" rel="nofollow">Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations</a></li> <li><a href="" rel="nofollow">Enhancing transparency of the research process to increase accuracy of findings: A guide for relationship researchers</a> (<a href=",pr%29.pdf" rel="nofollow">OA</a>)</li> </ul> <h3>Training Resources</h3> <ul> <li><a href="" rel="nofollow">COS webinars</a></li> <li><a href="" rel="nofollow">Data carpentry</a></li> <li><a href="" rel="nofollow">Improving your statistical inference</a> by Daniel Lakens</li> <li>NIH <a href="" rel="nofollow">Clearinghouse for Training Modules to Enhance Data Reproducibility</a></li> <li><a href="" rel="nofollow">Best practices in open science</a></li> </ul> <h3>Split samples, holdout data, or training and validation data sets</h3> <ul> <li>“<a href="" rel="nofollow">Split-Sample Strategies for Avoiding False Discoveries</a>,” by Michael L. Anderson and Jeremy Magruder (<a href="" rel="nofollow">ungated here</a>)</li> <li>“<a href="" rel="nofollow">Using Split Samples to Improve Inference on Causal Effects</a>,” by Marcel Fafchamps and Julien Labonne (<a href="" rel="nofollow">ungated and updated here</a>)</li> <li><a href="" rel="nofollow">The reusable holdout: Preserving validity in adaptive data analysis</a></li> </ul> <hr> <h2>Attitudes about open science</h2> <p>The following studies report on opinions and self-reported frequencies of various open science practices. All include links to the questionnaires and collected data.</p> <ul> <li><a href="" rel="nofollow">Normative Dissonance in Science: Results from a National Survey of U.S. Scientists</a></li> <li>Norms of behavior in scientific research represent ideals to which most scientists subscribe. 
Our analysis of the extent of dissonance between these widely espoused ideals and scientists' perceptions of their own and others' behavior is based on survey responses from 3,247 [scientists]. We found substantial normative dissonance, particularly between espoused ideals and respondents' perceptions of other scientists' typical behavior. Also, respondents on average saw other scientists' behavior as more counternormative than normative. ... The high levels of normative dissonance documented here represent a persistent source of stress in science.</li> <li><a href="" rel="nofollow">Why Do Some Psychology Researchers Resist Adopting Proposed Reforms to Research Practices? A Description of Researchers’ Rationales</a></li> <li>Our results suggest that (a) researchers have adopted some of the proposed reforms (e.g., reporting effect sizes) more than others (e.g., preregistering studies) and (b) rationales for not adopting them reflect a need for more discussion and education about their utility and feasibility.</li> <li><a href="" rel="nofollow">The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse?</a> </li> <li><a href="" rel="nofollow">1,500 scientists lift the lid on reproducibility</a> | <a href="" rel="nofollow">Questionnaire and Data</a></li> <li>90% of scientists feel that there is a significant or slight reproducibility crisis. 3% feel that there is no crisis.</li> <li><a href="" rel="nofollow">Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers</a> | <a href="" rel="nofollow">Materials and data</a></li> <li><a href="" rel="nofollow">The State of Open Data</a> | <a href="" rel="nofollow">Data and Survey</a></li> <li><a href="" rel="nofollow">Open Data: The Researcher Perspective</a> | <a href="" rel="nofollow">Data and Survey</a> </li> </ul>
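The sequential-analyses entry under Recommendations above claims that researchers can "peek" at incoming results without inflating the false-positive rate, provided each interim test uses a corrected threshold. The simulation below is an illustrative sketch, not code from any of the listed papers; it compares naive peeking at the nominal alpha = 0.05 against the standard two-look Pocock boundary of alpha ≈ 0.0294, for data generated under a true null.

```python
import math
import random

random.seed(1)

def two_sided_p(sample):
    # One-sample z-test against mean 0 (true sd = 1 by construction).
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def false_positive_rate(alpha, looks=(50, 100), sims=4000):
    """Simulate null experiments with interim looks; count any rejection."""
    false_positives = 0
    for _ in range(sims):
        data = []
        for n in looks:
            while len(data) < n:
                data.append(random.gauss(0, 1))  # null: true effect is zero
            if two_sided_p(data) < alpha:        # stop early if "significant"
                false_positives += 1
                break
    return false_positives / sims

naive = false_positive_rate(alpha=0.05)      # peeking at the nominal threshold
pocock = false_positive_rate(alpha=0.0294)   # Pocock boundary for two looks
print(f"naive peeking false-positive rate:    {naive:.3f}")
print(f"Pocock-corrected false-positive rate: {pocock:.3f}")
```

Naive peeking rejects the null in well over 5% of null datasets (around 8% for two looks), while the Pocock-corrected threshold keeps the overall rate near the nominal 5% — the point the sequential-analysis paper makes about "peeking" done properly.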
