# Requirements

- All the analysis code and the reproducible workflow are written in the R programming language (version 3.6.1).
- To install the R packages used in the code, first make sure that the prerequisites for installing rstan and brms are met; see: https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started. Then run the R file `install_packages.R`.

# How to produce the figures and tables in the paper?

- First, download the whole directory to your system.
- Make sure you have the R package "rmarkdown" installed on your system and that you are able to generate a PDF from an .Rmd file.
- Knit the reproducible workflow document `Workflow_interference_modeling.Rmd` (Runtime: 2 minutes).
- The workflow generates all the figures and tables used in the paper. (You may try this: empty the ./Plots folder and then run the workflow; it will re-fill the ./Plots folder with the figures generated by the Rmd file.)
- The PDF generated after knitting the workflow, `Workflow_interference_modeling.pdf`, presents a consolidated summary of the step-by-step analysis (modeling) done in the paper.

# How to reproduce the estimates of number agreement effects (used for plotting Figure 1 in the paper)?

- Run the R file `./Reproduce-observed-effects/Estimate_number_agreement_effects.R` (Runtime: 2 hours; Requirements: 68 cores, Linux).
- If you do not have sufficient computational resources, run the R file `./Reproduce_observed_effects/Estimate_number_agreement_effects_inseries.R` (Runtime: 34 hours; Requirements: 4 cores, Linux).

# How to reproduce the prediction samples that were used for plotting prior predictions for each model?
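All of the files listed in this and the next section are standalone R scripts. One way to launch them from a shell is a loop like the following dry-run sketch, which only prints the `Rscript` commands rather than executing them (remove the `echo`, i.e. run `$cmd` directly, to actually execute; `Rscript` and the package prerequisites above are assumed to be installed):

```shell
# Dry-run sketch: build and print the Rscript command for each
# prediction-sample script listed below. Running them for real requires
# Rscript and, for some scripts, a multi-core Linux machine.
for f in Retrieval-only.R Direct_access_non_linear_cues_model.R \
         Feature-percolation.R Reading_bias_model.R \
         Lossy-compression.R FPR.R LCR.R; do
  cmd="Rscript ./Reproduce-prediction-samples/$f"
  echo "$cmd"
done
```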
Run the following R files (all in the `./Reproduce-prediction-samples` folder):

- `./Reproduce-prediction-samples/Retrieval-only.R` (Runtime: 1 hour; Requirements: 1 core)
- `./Reproduce-prediction-samples/Direct_access_non_linear_cues_model.R` (Runtime: 1 hour; Requirements: 1 core)
- `./Reproduce-prediction-samples/Feature-percolation.R` (Runtime: 20 minutes; Requirements: 4 cores, Linux)
- `./Reproduce-prediction-samples/Reading_bias_model.R` (Runtime: 1 hour; Requirements: 4 cores, Linux)
- `./Reproduce-prediction-samples/Lossy-compression.R` (Runtime: 2 hours; Requirements: 20 cores, Linux)
- `./Reproduce-prediction-samples/FPR.R` (Runtime: 1 hour; Requirements: 4 cores, Linux)
- `./Reproduce-prediction-samples/LCR.R` (Runtime: 2 hours; Requirements: 20 cores, Linux)

# How to reproduce the posterior samples that were used for computing the predictive performance of each model?

Run the following R files (all in the `./Reproduce-posterior-samples` folder):

- `./Reproduce-posterior-samples/Cue-based-retrieval.R` (Runtime: 2 hours; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/Non-linear-cue-based-retrieval.R` (Runtime: 2 hours; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/Feature-percolation.R` (Runtime: 3 hours; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/Reading-bias-model.R` (Runtime: 1 hour; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/Lossy-compression.R` (Runtime: 140 hours; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/FPR.R` (Runtime: 14 hours; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/LCR.R` (Runtime: 140 hours; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/Feature-percolation-with-markedness.R` (Runtime: 3 hours; Requirements: 34 cores, Linux)
- `./Reproduce-posterior-samples/FPR-with-markedness.R` (Runtime: 14 hours; Requirements: 34 cores, Linux)

Note: The above runtimes are given for estimating model parameters under all seven priors on distortion rate.
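Given runtimes of up to 140 hours for the heavier fits, it may be convenient to launch them in the background with their console output logged. The following is a hypothetical sketch using the standard `nohup` pattern (the log filename is illustrative and not part of the repository); it only prints the command, so remove the `echo` wrapper to actually launch the run:

```shell
# Sketch: background launch of a long-running posterior fit, with console
# output redirected to a log file. Printed rather than executed here;
# actually running it requires Rscript, the repository files, and 34 cores.
cmd='nohup Rscript ./Reproduce-posterior-samples/Lossy-compression.R > lossy-compression.log 2>&1 &'
echo "$cmd"
```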