# Welcome to the ReproducibiliTea Toronto Journal Club!

This journal club is based at York University, Toronto, Canada. We aim to be as inclusive as possible. If you want to join our conversation during a particular meeting, then **just follow the link on the [Google Calendar event][1] to join the bi-weekly meetings!**

**For Summer 2024, the Zoom link will be:**

**For any general inquiries, or to request a specific article/source to discuss, please contact us at**

**To be added to our mailing list (to receive emails from the club), please complete the following form:**

***

## **Readings**

### **Summer 2024**

For the summer term, we have an excellent lineup of interesting talks, three of which will be facilitated by members of the previous semester's journal club! The topics are quite diverse, and include: running multiverse analyses in R, mis-citation in psychology articles, open-science platforms, expected non-results vs. HARKing, and the theoretical implications behind direct replications! We hope you find the selection to your liking!

**Click [here][2] to read the articles! Below is a summary of each one:**

**Week 1** - June 6, 2024 @ 3:00 PM EST

*Increasing transparency through a multiverse analysis*

This article introduces multiverse analyses. Researchers normally have many (often unchecked) degrees of freedom in the decisions they make: Should outliers be omitted, and if so, how? Which statistical model(s) should be run? How should variables be operationalized and/or computed? The idea behind the multiverse is to run the statistical model under every reasonable configuration of these decisions, providing a clear picture of how robust the final results really are.

**Week 2** - June 20, 2024 @ 3:00 PM EST

*The problem of mis-citation in psychological science: Righting the ship*

Researchers cite articles to provide evidence in favor of their claims, and to support the basis behind their own research. But sometimes, articles might be *mis-cited*; that is, a citation might be inaccurately used or portrayed. Is this a problem of concern in psychology? According to Cobb et al., mis-citations are consequential, since they mislead and stifle research progress in the field. They argue that mis-citation should be considered a questionable research practice, and that when citing articles, psychologists should take care to ensure that they accurately portray the cited sources.

**Week 3** - July 4, 2024 @ 3:00 PM EST

*Various open-science platforms: OSF, AsPredicted, PsyArXiv*

This week, Ege Kamber, a member, will explore open-science platforms, including the Open Science Framework (OSF), AsPredicted, PsyArXiv, as well as many others!

**Week 4** - July 18, 2024 @ 3:00 PM EST

*HARKing: Hypothesizing After the Results are Known* and *HARKing can be good for science*

The first article introduces the questionable research practice known as HARKing: hypothesizing after the results are known. This occurs when researchers determine their results, then modify their manuscript in a way that makes it seem as if the results are what they had predicted all along. The second article argues for a broader definition of HARKing: creating new hypotheses based on the found results. It further contends that hypothesizing after results are known, when done in an appropriate context, is essential for scientific progress. The facilitator, Lorne Hartman, will briefly discuss these articles, as well as a case illustration from his own research.

**Week 5** - August 1, 2024

*The alleged crisis and the illusion of exact replication*

This article discusses whether direct replications in psychology should be considered the essential "litmus test" for replicability. It contends that direct replications, while perceived as theoretically ideal, are in fact impossible to perform. Harley Glassman will facilitate this discussion.
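The Week 1 multiverse idea is easy to sketch in code. The talk covers multiverse analyses in R; the toy sketch below uses Python instead, with made-up data and two illustrative analytic decisions (an outlier cutoff and predictor centering). The data, variable names, and decision options are all hypothetical, not taken from the article — the point is only the pattern: cross every option of every decision, fit the same model in each "universe," and inspect how much the estimate varies.

```python
import itertools
import random
import statistics

# Hypothetical toy data: predictor x, outcome y with true slope 0.3.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.3 * xi + random.gauss(0, 1) for xi in x]
y[0] = 8.0  # plant one outlier so the exclusion rule matters

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Analytic decisions that are usually made silently; the multiverse
# crosses every option of every decision (3 x 2 = 6 universes here).
outlier_rules = {"keep all": None, "|z| > 2.5": 2.5, "|z| > 3": 3.0}
centering = {"raw x": False, "centered x": True}

results = {}
for (o_name, cut), (c_name, center) in itertools.product(
        outlier_rules.items(), centering.items()):
    xs, ys = x, y
    if cut is not None:
        m, s = statistics.fmean(y), statistics.stdev(y)
        kept = [(a, b) for a, b in zip(x, y) if abs((b - m) / s) <= cut]
        xs, ys = zip(*kept)
    if center:
        mx = statistics.fmean(xs)
        xs = [a - mx for a in xs]
    results[(o_name, c_name)] = slope(xs, ys)

# One estimate per universe: robustness shows up as how little they vary.
for spec, b in results.items():
    print(spec, round(b, 3))
```

A real multiverse (as in the article, or in R packages built for this) would cross far more decisions and visualize the full distribution of estimates, but the looping structure is the same.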
### **Winter 2024**

For the Winter semester, we'll be chatting about more specific topics as they relate to reproducibility, primarily the incentive structure in science, null hypothesis significance testing (NHST), and why common arguments made against reproducibility are misguided.

**Once more, all articles are made freely available on the shared [Google Drive Folder][3]. Winter readings can be found within the "Winter" sub-folder. Below is a summary of each reading.**

**Week 1** - Jan 23, 2024 @ 2:30 PM EST

*The incentive structure in science: Scientific Utopia II: Restructuring Incentives and Practices to Promote Truth Over Publishability*

A paper that explains what's wrong with the incentive structure in science, and how the structure can be changed to promote more open-scientific attitudes and practices.

**Week 2** - Feb 6, 2024 @ 2:30 PM EST

*Mindless Statistics*

This paper describes the (disturbing) origins of null hypothesis significance testing (NHST), the statistical framework that dominates psychology. It makes the evocative argument that NHST is treated less like a statistical framework and more like a mindless ritual.

**Week 3** - Mar 5, 2024 @ 2:30 PM EST

*The Earth is Round (p < .05)*

This classic paper describes the issues with null hypothesis significance testing as a framework in psychology, such as how it makes a logically invalid argument and is highly prone to misinterpretation.

**Week 4** - Mar 19, 2024 @ 2:30 PM EST

*Life after NHST: How to Describe Data Without "p-ing" Everywhere*

This paper explores ways of describing data without using NHST, one potential way for psychologists to move away from NHST in their own research.

**Week 5** - Apr 2, 2024 @ 2:30 PM EST

*Is the Replication Crisis Overblown? Three Arguments Examined*

A paper examining three common arguments that the reproducibility crisis is exaggerated, and providing a rebuttal to each. In doing so, it reinforces the importance of open science and reproducibility.

### **Fall 2023**

For the re-launch of the ReproducibiliTea Toronto Club, we'll be going through a lot of the classic reproducibility papers, along with some goodies sprinkled in! :)

**All articles are made freely available on the shared [Google Drive Folder][4]. Below is a summary of each reading.**

**Week 1** - Oct 18, 2023 @ 10:00 AM EST

1. *A Manifesto for Reproducible Science*
2. *Estimating the Reproducibility of Psychological Science*

Two excellent papers to get everyone acquainted with the idea and importance of reproducibility.

**Week 2** - Nov 1, 2023 @ 10:00 AM EST

*Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology*

A paper on a recent attempt at computationally reproducing the statistical analyses within several journal articles.

**Week 3** - Nov 15, 2023 @ 10:00 AM EST

*Measuring the Prevalence of Questionable Research Practices with Incentives for Truth Telling*

A classic paper about the prevalence of questionable research practices (QRPs) in psychology and what it means for the discipline.

**Week 4** - Nov. 29, 2023

1. *UK Psychology PhD researchers’ knowledge, perceptions, and experiences of open science*
2. *Attitudes Toward Open Science and Public Data Sharing*

Papers exploring what students (paper 1) and professors (paper 2) think about the ideas of open science and reproducibility.

**Week 5** - Dec. 13, 2023

*Is the Replication Crisis Overblown? Three Arguments Examined*

A paper examining three common arguments that the reproducibility crisis is exaggerated.

***

## Readings from Previous Sessions

### Fall 2020 - Winter 2021

**Week 1** - Oct 30, 2020 @ 2:30 PM EST

*When Personality Psychologists are High*

Post from Dr. Ulrich Schimmack’s blog about replicability [][5]

**Week 2** - Nov 13, 2020 @ 2:30 PM EST

*Personality Measurement with the Big Five Inventory* [][6]

Post from Dr. Ulrich Schimmack’s blog about replicability

**Week 3** - Nov 27, 2020 @ 2:30 PM EST

*When a “Replication” Is Not a Replication. Commentary: Sequential Congruency Effects in Monolingual and Bilingual Adults* [][7]

Supplemental reading: [Null results in bilingualism research: What they tell us and what they don’t][8]

Many of our readings this semester were pulled from a fantastic [Twitter thread][9] by [@zerdeve][10] highlighting non-mainstream work by women on reproducibility, open science, the replication crisis, and meta-science.

**Week 4** - Jan 22, 2021 @ 2:30 PM EST

*Paths in strange spaces: A comment on preregistration*: Part 1

A blog post by Daniel Navarro preserved as a preprint on PsyArXiv. [][11]

**Week 5** - Feb 5, 2021 @ 2:30 PM EST

*Paths in strange spaces: A comment on preregistration*: Part 2

A blog post by Daniel Navarro preserved as a preprint on PsyArXiv. [][12]

**Week 6** - Feb 19, 2021 @ 1:00 PM EST

Workshop on conducting Monte Carlo simulations using the SimDesign package. [][13]

**Week 7** - Mar 5, 2021 @ 2:30 PM EST

*Credibility of preprints: an interdisciplinary survey of researchers* [][14]

**Week 8** - Mar 19, 2021 @ 2:30 PM EST

*Mapping the discursive dimensions of the reproducibility crisis: A mixed methods analysis* [][15]

Note: this one might have to be read within your web browser, as the PDF had issues opening in some software.

**Week 9** - Apr 2, 2021 @ 2:30 PM EST

*An empirical analysis of journal policy effectiveness for computational reproducibility* [][16]

***

### Winter 2020 - Summer 2020

**Week 1** - Jan 17, 2020 @ 2:30 PM

*A Manifesto for Reproducible Science*

The problem defined: the general overview [][17]

**Week 2** - Jan 24, 2020 @ 2:30 PM

*False-Positive Psychology*

Examining analytic flexibility, and why it is a problem [][18]

**Week 3** - Jan 31, 2020 @ 2:30 PM

*Measuring the Prevalence of Questionable Research Practices with Incentives for Truth Telling*

Questionable research practices: are they really that common?
And why are they problematic? [][19]

**Week 4** - Feb 7, 2020 @ 2:30 PM

*Estimating the Reproducibility of Psychological Science*

Reproducibility now: why many studies are not reproducible. [][20]

**Week 5** - Feb 14, 2020 @ 2:30 PM

*Is the Replicability Crisis Overblown? Three Arguments Examined*

Has the debate gone too far? Things will just turn out fine. [][21]

**Week 6** - Feb 28, 2020 @ 2:30 PM

*Open Science: What, Why and How*

Open data and materials [][22]

**Week 7** - Mar 6, 2020 @ 2:30 PM

*The Natural Selection of Bad Science*

And what about the future? [][23]

**Week 8** - Mar 13, 2020 @ 2:30 PM

*The Preregistration Revolution*

Preregistration as a solution [][24]

**Week 9** - Mar 20, 2020 @ 2:30 PM

*Introduction to Monte Carlo Simulation*

A brief workshop about running Monte Carlo simulations using the SimDesign package. [][25]

**Week 10** - Mar 27, 2020 @ 2:30 PM

*The garden of forking paths*

Researcher degrees of freedom [][26]

**Week 11** - Apr 3, 2020 @ 2:30 PM

*Increasing Transparency Through a Multiverse Analysis* [][27]

**Week 12** - Apr 17, 2020 @ 1:30 PM

*Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them* [][28]

**Week 13** - May 1, 2020 @ 1:30 PM

*Good enough practices in scientific computing* [][29]

**Week 14** - May 8, 2020 @ 1:30 PM (GMT-4)

*How (and Whether) to Teach Undergraduates About the Replication Crisis in Psychological Science* [][30]

**Week 15** - May 15, 2020 @ 1:30 PM (GMT-4)

*Fallibility in Science: Responding to Errors in the Work of Oneself and Others* [][31]

**Week 16** - May 22, 2020 @ 1:30 PM (GMT-4)

*Practical Solutions for Sharing Data and Materials From Psychological Research* [][32]

**Week 17** - June 5, 2020 @ 1:30 PM (GMT-4)

*What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science* [][33]

***

### **ReproducibiliTea Toronto Mini Series:** A "conversation" between Tal Yarkoni and Daniël Lakens

**Part 1 of 3** - June 12, 2020 @ 1:30 PM (GMT-4) (Week 18)

[*The Generalizability Crisis*][34]; a preprint by Tal Yarkoni

**Part 2 of 3** - June 19, 2020 @ 1:30 PM (GMT-4) (Week 19)

[*Review of "The Generalizability Crisis" by Tal Yarkoni*][35]; a blog post by Daniël Lakens

**Part 3 of 3** - June 26, 2020 @ 1:30 PM (GMT-4) (Week 20)

[*Induction is not optional (if you’re using inferential statistics): reply to Lakens*][36]; a blog post by Tal Yarkoni

***

[1]:
[2]:
[3]:
[4]:
[5]:
[6]:
[7]:
[8]:
[9]:
[10]:
[11]:
[12]:
[13]:
[14]:
[15]:
[16]:
[17]:
[18]:
[19]:
[20]:
[21]:
[22]:
[23]:
[24]:
[25]:
[26]:
[27]:
[28]:
[29]:
[30]:
[31]:
[32]:
[33]:
[34]:
[35]:
[36]: