**Data from White, Palmer & Boynton, *Psychological Science* (in press)**: "Evidence of serial processing in visual word recognition," by Alex White, November 2017. Contact: alexlw@uw.edu

**Individual subject data files:** These are stored in two zipped folders, one per experiment: Expt1_IndivData and Expt2_IndivData. For each participant there is a text file (e.g., S1AllDat.txt) and a Matlab file (e.g., S1AllDat.mat), both containing data from every trial of the experiment. A total of 12 unique subjects participated in the two experiments: S1 through S8 participated in both, S9 and S10 did only Experiment 1, and S11 and S12 did only Experiment 2.

Each text file has many columns, and each column contains one variable of interest, such as whether a target was present and which button the participant pressed. Each Matlab file contains a single variable called allDat, a structure made up of vectors, each corresponding to a column in the text file. Each experiment's folder also includes a PDF (e.g., Experiment1Key.pdf) that explains what each variable is.

Each row in the text file (and each element of the vectors in the Matlab structure) corresponds to one response. A single-task trial has only one row, because participants made only one response. A dual-task trial has two consecutive rows, because participants made two responses on those trials, so most of the variables are identical across the pair of rows belonging to one dual-task trial. The variables that do differ across the two rows of a dual-task trial include those related to the subject's response (chosenRes, respCorrect, reportedPresence, and reportedRating) as well as respOrder, targSide, targPres, and distPres. "targSide" is the side the subject had to respond to for that response (1 = left, 2 = right); for each response, "targPres" indicates whether a target was present on that side, and "distPres" indicates whether a target was present on the other side.

Note that, due to a programming bug, the response times were not recorded for both dual-task responses: two tRes values are recorded per trial, but both are the time of the second response, because the first tRes was overwritten by the second. Regardless, this experiment was not designed to measure response times; the subjects emphasized accuracy and always had to wait for a beep (700 ms after the post-cue onset) to respond.

Also note that for some subjects, the stimulus difficulty levels (RSVP rate, word-mask ISI, or color increment magnitude) were not set correctly, resulting in accuracy that was too high or too low. As noted in the Supplementary Materials, we excluded and re-ran any set of 12 blocks if both single- and dual-task proportion correct were below 0.7 or above 0.9. **For the sake of transparency, those trials with bad difficulty levels are still included** in the individual .txt and .mat files. They are flagged, however, with the field "excludeBlock_BadDifficulty." To precisely reproduce our results, **trials for which excludeBlock_BadDifficulty equals 1 should be excluded from the analysis.**
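For example, here is a minimal Python/SciPy sketch of how one subject's .mat file could be loaded and the exclusion applied. The field names (allDat, respCorrect, respOrder, excludeBlock_BadDifficulty) are those described above; the file path, the 0/1 coding of respCorrect, and the assumption that respOrder equals 2 for the second response of a dual-task trial should be checked against the key PDFs.

```python
import numpy as np
from scipy.io import loadmat

# Load one subject's file; struct_as_record=False exposes the fields of
# allDat as attributes, each a vector with one entry per response (row).
# Assumes the .mat files are in a format scipy.io.loadmat can read;
# otherwise the .txt files can be parsed instead.
mat = loadmat('Expt1_IndivData/S1AllDat.mat', squeeze_me=True,
              struct_as_record=False)
allDat = mat['allDat']

# Keep only responses from blocks with properly set difficulty levels,
# as recommended above for reproducing the published results.
keep = np.asarray(allDat.excludeBlock_BadDifficulty) != 1

# Overall accuracy over included responses (assuming respCorrect is coded 0/1).
respCorrect = np.asarray(allDat.respCorrect)
print('Proportion correct:', respCorrect[keep].mean())

# Pair the two consecutive rows of each dual-task trial, assuming
# respOrder == 2 marks the second response of a trial.
respOrder = np.asarray(allDat.respOrder)
second = np.flatnonzero((respOrder == 2) & keep)  # second responses, included
first = second - 1                                # the rows immediately before them
print('Dual-task accuracy, 1st vs. 2nd response:',
      respCorrect[first].mean(), respCorrect[second].mean())
```

The same variables appear as columns in the .txt files, so the equivalent steps can be carried out with any tool that reads delimited text.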
The subfolder "group" contains mat files with the group results, which can be read in by the scripts in the Analysis component to reproduce figures and write out the text files with statistics. You could completely re-create the ResultsFiles folder on your own by downloading the individual data files descrbied above, and the analysis code, and running the scripts provided. [1]: https://osf.io/frvdh/