This resource contains the experimental code, the data, and the data-analysis code relating to the numerosity-estimation experiments described in the paper "Bayesian estimation yields anti-Weber variability" (2024). Please cite this paper if you reuse this dataset or the experimental code. This wiki describes the files and their content.

# Experiments

`experiment_priors.html` and `experiment_stakes.html` contain the jsPsych code for the priors experiment and the stakes experiment.

# Data

## Tasks data

`data_priors.csv` and `data_stakes.csv` contain the data for the priors experiment and the stakes experiment. These files can be easily read using Pandas, e.g.:

```python
import pandas as pd

data_stakes = pd.read_csv('data_stakes.csv', index_col=0)
```

Each row corresponds to a different trial. Here is a description of each column:

| Column | Description |
| ------ | ----------- |
| sid | Subject's id. |
| condition | Experimental condition (e.g., `smaller_is_higherstakes` or `larger_is_higherstakes` in the stakes experiment). |
| phase | Whether the trial belongs to the "learning" phase, the "feedback" phase, or the "no-feedback" phase (described in the paper's Methods). |
| nb_dots | Correct number of dots in this trial. |
| response | Response provided by the subject. |
| rt | Response time. |
| points | Points earned in this trial. |
| total_score | Points earned by the subject up to this trial. |
| dots | Coordinates of each dot. |
| trajectory | Successive numbers selected on the slider by the subject, before submitting. In addition, "mdn" indicates when the subject pressed their mouse's button, and "mup" indicates when they released it. |
| time_elapsed | Time elapsed since the beginning of the experiment. |
| trial_index | jsPsych trial index. |

## Metadata

`data_metadata_priors.csv` and `data_metadata_stakes.csv` contain information about the subjects, including their age, gender, laterality, operating system, browser, etc.
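As a brief illustration of this read pattern, the trials can be loaded and filtered by subject and phase. The rows below are synthetic stand-ins, not actual data; real column values may differ:

```python
import io

import pandas as pd

# Synthetic stand-in for a few rows of data_stakes.csv; the real file
# is read identically with pd.read_csv('data_stakes.csv', index_col=0).
csv_text = """,sid,phase,nb_dots,response,rt
0,s01,learning,42,40,1530
1,s01,no-feedback,57,60,1210
"""
data_stakes = pd.read_csv(io.StringIO(csv_text), index_col=0)

# Select, e.g., the learning-phase trials of one subject.
learning = data_stakes[(data_stakes['sid'] == 's01')
                       & (data_stakes['phase'] == 'learning')]
print(len(learning))  # → 1
```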
# Data analysis

`numerosity_library.py` contains important functions used to compute the probability distribution of the response $\hat x$ conditional on a correct number $x$, given a prior and/or a stakes function, and the parameters $\nu$ and $\sigma$ governing the noise in the internal representation and the motor noise. See documentation within.

`data analysis.ipynb` is a notebook containing all the analyses presented in the paper, which generates the paper's figures. See details within.

`bms.py` contains the code pertaining to the Bayesian Model Selection analysis. Apart from a very minor modification, this code is Ham Huang's [Python adaptation][2] of Sam Gershman's MATLAB code.

`modelfitting_results_priors.csv` and `modelfitting_results_stakes.csv` contain the results of the model fitting: for each subject and each model, the values of the best-fitting parameters, along with the negative log-likelihood and the BIC.

The folder `stan` contains the results of the statistical model estimation. Its content is as follows:

- `data_p.json` and `data_s.json` contain the data in a format convenient for use with Stan, for the priors experiment and the stakes experiment, respectively.
- `hmodel.stan` contains the Stan code specifying the statistical model.
- `inits_p.json` and `inits_s.json` contain the Stan chains' initial values, for the priors experiment and the stakes experiment, respectively.
- `results_priors.csv` and `results_stakes.csv` contain Stan's "summaries" for each element of the statistical models (mean, standard deviation, etc.).
- `run_stan.py` contains the Python code that was run to launch the parameter estimation with Stan.

The folder `figures` contains the figures of the paper in PDF format, as produced by the notebook `data analysis.ipynb`.
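The BIC values in the model-fitting result files support a per-subject model comparison: the best-fitting model for a subject is the one with the lowest BIC. A minimal sketch of this selection, using a hypothetical table (the column names `model` and `bic` and the toy values are illustrative, not taken from the actual CSV files):

```python
import pandas as pd

# Hypothetical excerpt of a model-fitting results table; in the actual
# CSV files the column names and model labels may differ.
results = pd.DataFrame({
    'sid':   ['s01', 's01', 's02', 's02'],
    'model': ['bayesian', 'linear', 'bayesian', 'linear'],
    'bic':   [410.2, 432.7, 398.5, 391.0],
})

# For each subject, keep the row minimizing the BIC.
best = results.loc[results.groupby('sid')['bic'].idxmin()]
print(best[['sid', 'model']].to_string(index=False))
```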
## Software requirements

Data analysis was conducted using Python 3.10.3, with the libraries:

- NumPy 2.1.0
- SciPy 1.14.1
- Matplotlib 3.9.2
- Pandas 2.2.2
- CmdStanPy 1.2.4, along with Stan 2.35.0

The code included in this repository should take virtually no time to install on a regular desktop computer. Running the data analysis on a small dataset should also be very quick.

[2]: https://github.com/HuangHam/bms
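For convenience, the library versions above can be pinned in a `requirements.txt` (package names as on PyPI):

```
numpy==2.1.0
scipy==1.14.1
matplotlib==3.9.2
pandas==2.2.2
cmdstanpy==1.2.4
```

Stan itself is installed separately; CmdStanPy's `install_cmdstan` utility can download and build it.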