### Home

<p><strong><em>Corrigendum</em></strong></p>

<p>We discovered errors in the computation of the binomial tests reported in the paper: the erroneous computations understated the p-values from each test (i.e., they overstated the tests' significance). We report each error here:</p>

<ul>
<li>The binomial test on p. 2108 reports that 30 out of 45 participants made accurate judgments more often than chance. The p-value should be .036 (not &lt; .0001 as reported).</li>
<li>The binomial test on p. 2109 reports that 36 out of 49 participants made accurate judgments more often than chance. The p-value should be .001 (not &lt; .0001 as reported).</li>
<li>The binomial test on p. 2110 reports that 28 out of 46 participants made accurate judgments more often than chance. The p-value should be .184 (not &lt; .0001 as reported). This analysis reflects a qualitative change in the conclusions drawn from Experiment 3 in the paper: <strong><em>participants' pooled accuracies were not significantly different from chance.</em></strong> (Their accuracies as a function of the type of problem remain statistically reliable, however.)</li>
</ul>

<p>We regret any inconvenience caused by these errors. The corrected analyses are reflected in the analysis scripts (the .R files) for each experiment on this OSF page.</p>

<p><strong><em>What is "DR1", etc.?</em></strong></p>

<p>The names of the folders below (e.g., "Experiment 1") correspond to the descriptions in Kelly, Khemlani, and Johnson-Laird (under review). The code, data, and analysis scripts also reflect a separate abbreviation system used for tracking experiments. Hence, "Experiment 3" corresponds to "DR5", i.e., the <strong>5</strong>th experiment conducted for studying <strong>D</strong>urational <strong>R</strong>easoning.</p>

<hr>

<p><strong><em>How do you run an experiment from the code provided?</em></strong></p>

<p>The experiments are written in Node.js using the "nodus-ponens" package.
To run an experiment on your local machine, install Node.js and then follow these steps:</p>

<ol>
<li>Download and unzip the corresponding experiment code, e.g., "DR5-Code.zip".</li>
<li>Use your command-line interface to navigate to the directory where the code is stored, e.g., <code>$ cd ~/Desktop/Code/</code></li>
<li>Launch the experiment: <code>$ node main.js</code></li>
<li>Point your browser to the "hostname" provided on the screen, e.g., "http://localhost:55152".</li>
</ol>

<hr>

<p><strong><em>Where is the registration for Experiment 1 (DR1)?</em></strong></p>

<p>It is available in the linked project entitled "The consistency of durative relations".</p>

<hr>

<p><strong><em>What are the differences between the pre-registered analyses and those reported in the provided scripts?</em></strong></p>

<p>DR1 &amp; DR2</p>

<ul>
<li>We used the inverse of latency instead of the log of latency to determine outliers.</li>
</ul>

<p>The change from a log to an inverse transformation better captures fast outliers that are unlikely to reflect considered reasoning.</p>

<p>Whelan, R. (2008). Effective analysis of reaction time data. <em>The Psychological Record</em>, 475–482.</p>

<p>DR5</p>

<ul>
<li>We pre-registered two GLMMs: one for estimates of the main effects and interaction, and one for an estimate of the predicted simple effect. Instead, we ran only the main-effects GLMM and used the R package "emmeans" to obtain the simple-effect estimate from that single model.</li>
</ul>

<p>The GLMM change was motivated by the advantage of taking all estimates from the same underlying model: fitting two separate models is an easy way to obtain estimates for all of the effects, but it introduces extra variability from the separate model computations. The emmeans package also adjusts the significance tests for multiple comparisons.</p>
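The corrected p-values in the corrigendum above can be reproduced with an exact two-sided binomial test against chance (.5). The analyses themselves live in the .R scripts on this page; the following is only an independent sanity check, sketched in Python:

```python
from math import comb

def exact_binomial_p(successes, n):
    """Exact two-sided binomial test against chance (p = .5):
    sum the probabilities of every outcome that is no more likely
    than the observed count, in either tail."""
    observed = comb(n, successes)
    extreme = sum(comb(n, k) for k in range(n + 1) if comb(n, k) <= observed)
    return extreme / 2 ** n

# The corrected p-values reported in the corrigendum:
print(round(exact_binomial_p(30, 45), 3))  # 0.036
print(round(exact_binomial_p(36, 49), 3))  # 0.001
print(round(exact_binomial_p(28, 46), 3))  # 0.184
```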
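The log-to-inverse deviation for DR1 and DR2 described above can be illustrated with a sketch. The latencies and the ±2.5 SD cutoff below are hypothetical, chosen only to show why the inverse (1/RT) scale is more sensitive to fast responses — they are not the exact rule used in the .R scripts:

```python
from math import log
from statistics import mean, stdev

def flag_outliers(latencies, transform, cutoff=2.5):
    """Flag latencies whose transformed value falls more than
    `cutoff` sample standard deviations from the mean
    (an illustrative criterion, not the scripts' exact rule)."""
    xs = [transform(t) for t in latencies]
    m, s = mean(xs), stdev(xs)
    return [t for t, x in zip(latencies, xs) if abs(x - m) > cutoff * s]

# Hypothetical response times in seconds; 3.3 s is suspiciously fast here.
rts = [3.3, 4.1, 5.2, 4.8, 6.0, 5.5, 4.9, 5.1, 5.7, 4.4,
       4.6, 5.3, 4.2, 5.8, 5.0]

# The inverse scale stretches the fast tail of the distribution, so it
# flags the fast response that the log scale misses for these data.
print(flag_outliers(rts, lambda t: 1 / t))  # [3.3]
print(flag_outliers(rts, log))              # []
```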