# General Information

> **Assessing the replication landscape in experimental linguistics**

The study splits into two parts, answering separate but related questions with regard to replications at the journal and the study level, respectively:

***1) How often do journals mention the term replicat\*?***

This part of the study used a string-matching technique with which we analysed over 50,000 articles from 98 journals. We found that 382 of the 8,437 articles that presented an experimental investigation mentioned the term replicat\*, which yields a mention rate of 4.5% across experimental linguistic articles. This rate varies across journals (almost half of all journals did not mention the term in any of their articles); the median mention rate across journals is 1.7% (SD = 3.3).

***2) How many articles containing the term replicat\* are actual replications?***

For this part of the study, we manually coded 274 articles and checked for each whether it contained a replication and, if so, of which kind. We also noted which changes were made to the design choices compared to the initial study and coded for factors such as author overlap, language under investigation, and citation counts. We found that 151 of the 262 articles that were indeed experimental studies contained at least one replication. Of these, we categorized 86 (57%) as conceptual, 56 (37.1%) as partial, and only 11 (7.3%) as direct replications. More details can be found in the manuscript as well as in the supplementary material in the data component.

# Data and File Overview

## Data component

### analysis

`analysis` contains two files: [`01_BayesianAnalysis.R`][1] contains the two preregistered statistical analyses, and [`02_Plots.R`][2] is the R code that generates the figures used to visualize our main findings.

### data

`data` contains raw data and models.
It contains the mention rates of the 98 journals in the file [`mention.csv`][3], the journals coded for their submission guidelines, open-access publication options, and impact factors in the file [`guidelines.csv`][4], and the coded articles in the file [`coded_updated.csv`][5]. [`Sample_journals.csv`][7] provides an overview of all journals in our sample. The models are the preregistered model [`repl_mention1_mdl.RDS`][16]; a revised model, [`repl_mention2_mdl.RDS`][17], which (following a reviewer's suggestion) codes the open-access factor as a categorical variable and standardizes journal impact factor; and a third model, [`repl_mention3_mdl.RDS`][18], a zero-inflated binomial regression that also originated in the review process and addresses a reviewer's concern about the many journals with zero replication counts.

### plots

`plots` contains the three figures used to visualize our main findings. [`Figure1.pdf`][8] shows the proportion of replicat\* mentions across all journals from our superset that exhibited at least one mention of the term, along with the ratio of experimental articles for each journal. [`Figure2.pdf`][9] shows the rate of mentioning the term replicat\* across sampled journals plotted against their journal impact factor, along with the model predictions and 95% credible intervals of our first model. [`Figure3.pdf`][10] shows the development of the number and types of replication studies published over time.

## Preregistration component

In the preregistration component you can inspect the file [`Preregistration.pdf`][11], with which we preregistered our study on 2021-03-08. [`Sample_journals.xlsx`][12] and [`Sample_articles.xlsx`][13] contain the full samples of journals and articles, respectively, that we selected for investigation.
With the help of the script [`random_jml_articles.R`][14] we randomly selected the 50 of the 114 articles from the *Journal of Memory and Language* that we originally planned to code (note that we abandoned this restriction during the review process and instead coded all articles in the sample). [`Coding_Sheet.xlsx`][15] defines how we planned to code the articles in our manual analysis.

[1]: http://osf.io/6jydp/
[2]: http://osf.io/d4v8g/
[3]: http://osf.io/yefr8/
[4]: http://osf.io/rukc7/
[5]: https://osf.io/jkqsv/
[7]: http://osf.io/c9d26/
[8]: http://osf.io/8kdn7/
[9]: http://osf.io/pumnv/
[10]: http://osf.io/z7as9/
[11]: http://osf.io/a5xd7/
[12]: http://osf.io/q2e9k/
[13]: http://osf.io/f3yp8/
[14]: http://osf.io/6vfpe/
[15]: http://osf.io/ct2xj/
[16]: https://osf.io/jekcb
[17]: https://osf.io/c3zbf
[18]: https://osf.io/9yv5r
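As a minimal sketch, the kind of random draw performed by `random_jml_articles.R` — selecting 50 of the 114 *Journal of Memory and Language* articles without replacement — could look as follows (the article IDs and the seed here are placeholders, not the ones used in the actual script):

```r
# Hypothetical illustration of the sampling step: draw 50 of 114
# JML articles without replacement so the draw is reproducible.
set.seed(42)                                     # placeholder seed
jml_articles <- sprintf("JML_%03d", 1:114)       # placeholder article IDs
coded_subset <- sample(jml_articles, size = 50)  # random subset to code
length(coded_subset)                             # 50 unique articles
```

Fixing the seed before calling `sample()` makes the selection reproducible, which is why such a script can be archived alongside the preregistration.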