# Main Wiki for COLA RR

Files are saved in the project [CANDICE COLA: The brain basis of inconsistent language lateralisation][1].

# [Online Data Component][2]

## Data log

The date-stamped log for the original data can be found in the Online data component: [log Gorilla participants sess1.xlsx][3]. The log for the subset who did the retest online session is [log Gorilla participants sess3.xlsx][4].

## Gorilla online behavioural data

The raw output from Gorilla is also in the Online data component, in a huge .csv file called [osf_dat.csv][5], with the session 3 (retest) data in [osf_dat_session3.csv][6]. These .csv files contain details of stimuli and responses for every screen and every participant. They are enormous and take many minutes to load. The data can be read more quickly (though still not quickly!) into R by loading [bothsess_dat.rds][7].

For more general details on the organisation of Gorilla files, see [Emma James' Gorilla Tutorial][8].

A data dictionary for the online data can be found [here][9]. Most of the columns in the raw data file specify information that determines how Gorilla presents the task to the participant. Many Gorilla options are irrelevant, and those columns are therefore NA. Columns that are used in the analysis are highlighted.

The script [gorilla_raw_processing.Rmd][10] takes this file as input and creates a manageable summary file with just the information we need (see below).

# [FTCD Data Component][11]

This component contains raw and processed fTCD data files. The raw, anonymised fTCD data files are stored in .exp format and zipped together into the [ftcd_raw_data.zip][12] file. Within each file, each row represents a time sample; the data were acquired at 100 Hz. Depending on the site where the data were acquired, there are either 7, 9 or 18 columns of data. The columns used in the analysis are:

- Time (when the sample was acquired)*
- Sample (the sequential numbering of the samples)
- Gate 1 To Probe Env. (the envelope value for the left probe)
- Gate 2 To Probe Env. (the envelope value for the right probe)
- Trigger / Analog 1 / PortB_1 (the value of the 'trigger' channel, indicating when each trial began)

Data acquired at Bangor, Lancaster, Oxford or UWA use the 9-column format, and the columns used for the analysis are 1, 2, 3, 4 and 9. Data acquired at Lincoln use the 7-column format, and the columns used are 1, 2, 3, 4 and 7. Data acquired at UCL use the 18-column format, and the columns used are 1, 2, 3, 4 and 11.

*The computer date was wrongly set for data collected at Bangor, so 10 years, 4 months and 19 days need to be added to the date of recording in the raw .exp files. (The date recorded in ftcd_data is correct.)

# [Processed Data Component][13]

This component contains summary data as follows:

- [allsum.csv][14]: summary data on demographics and the online behavioural tasks
- [ftcd_data.csv][15]: summary data from fTCD
- [combined_data.csv][16]: combines allsum and ftcd_data, with subjects aligned
- [sess1.csv][17]: same as allsum.csv but session 1 data, just for those who also did session 3
- [sess3.csv][18]: same as allsum.csv but session 3 only, with IDs aligned with sess1.csv

Data dictionaries are provided for [allsum][19] and [ftcd_data][20]. The column names are the same in combined_data.csv, so the same data dictionary can be used. Note that for ftcd_data and combined_data there are separate versions, depending on whether the LIs were computed using the original, preregistered baseline (-10 to 0 s) or the revised baseline (-5 to 2 s). These files are compared in Supplementary materials 8.

# [R Scripts Component][21]

## [Data Preprocessing][22]

### Processing Gorilla raw data

The raw Gorilla data is crunched by [gorilla_raw_processing.Rmd][23].
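A minimal sketch of loading the online data in R, assuming the files listed above have been downloaded to the working directory. The session-coded subject IDs created by the script (a '1' or '3' prefix on the ID, as described below) can be formed with `paste0`; the example ID and the `make_sess_id` helper are hypothetical.

```r
# Fast route: the pre-bundled RDS object with both sessions
# bothsess_dat <- readRDS("bothsess_dat.rds")

# Slow route: the raw Gorilla exports (enormous; many minutes to read)
# osf_dat <- read.csv("osf_dat.csv")
# osf_dat_sess3 <- read.csv("osf_dat_session3.csv")

# Session-coded subject IDs: '1' or '3' is prepended to mark the session
make_sess_id <- function(id, session) paste0(session, id)
make_sess_id("ABC01", 1)  # "1ABC01" (the ID itself is hypothetical)
```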
The tasks and questionnaires processed within this script are:

Questionnaires:

1. Demographics
2. Edinburgh Handedness Inventory
3. Porta Test of Ocular Dominance
4. LexTale
5. Grammar Quiz (Games with Words)

Behavioural language tasks:

1. Dichotic Listening (DL)
2. Rhyme Decision (RD)
3. Word Comprehension (WC)

Additional tasks from the Bangor team (not included in the Registered Report):

1. Colour Scales (CS)
2. Chimeric Faces (CF)

N.B. Anyone wishing to analyse data from these two tasks should contact david.carey@bangor.ac.uk or Emma.Karlsson@UGent.be to request permission.

As well as [bothsess_dat.rds][24], this script requires these additional files when computing demographics:

- [coded_demog.csv][25]
- [grammarScoring.csv][26]
- [grammarScoringshort.csv][27]

It is recommended that the working directory be set to the source file location, with these additional files stored in a folder in that directory called __COLAbits__.

The script computes summary data and LI values, which are saved as __allsum.csv__ (see below). N.B. The session 3 (retest) data are processed together with the session 1 data, after creating a subject code that denotes the session by adding '1' or '3' at the start of the ID. After processing, __allsum3.csv__ (with just the session 3 data) is saved separately. Two optional chunks at the end of the script are used to bolt the fTCD data on to the allsum file.

### Processing FTCD raw data

The script [ftcd_preprocessing.R][28] generates a summary data file called __ftcd_data.csv__, with laterality indices for the 6 tasks computed from the raw data. To run this script, download the __ftcd_data.csv__ file and update the file paths on lines 15, 17 and 21. The script will use the trial inclusion/exclusion information listed in __ftcd_data.csv__ and will calculate laterality statistics for all tasks and all participants.
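As a sketch of how the raw .exp files might be read, using the site-specific column layouts given in the FTCD Data Component above. The file name, the number of header lines to skip, and the `read_exp`/`fix_bangor_date` helpers are assumptions, not part of the actual preprocessing script:

```r
# Column positions per site, from the FTCD Data Component description
exp_cols <- list(
  Bangor    = c(time = 1, sample = 2, L = 3, R = 4, trigger = 9),   # 9-column format
  Lancaster = c(time = 1, sample = 2, L = 3, R = 4, trigger = 9),
  Oxford    = c(time = 1, sample = 2, L = 3, R = 4, trigger = 9),
  UWA       = c(time = 1, sample = 2, L = 3, R = 4, trigger = 9),
  Lincoln   = c(time = 1, sample = 2, L = 3, R = 4, trigger = 7),   # 7-column format
  UCL       = c(time = 1, sample = 2, L = 3, R = 4, trigger = 11)   # 18-column format
)

# Read one .exp file and keep only the analysed columns
# (the number of header lines to skip is a guess)
read_exp <- function(file, site) {
  raw <- read.table(file, skip = 6, header = FALSE)
  dat <- raw[, exp_cols[[site]]]
  names(dat) <- names(exp_cols[[site]])
  dat
}

# Bangor only: the acquisition PC clock was wrong, so 10 years,
# 4 months and 19 days must be added to the recorded .exp date
fix_bangor_date <- function(d) {
  d <- seq(d, by = "10 years", length.out = 2)[2]
  d <- seq(d, by = "4 months", length.out = 2)[2]
  d + 19
}
fix_bangor_date(as.Date("2008-01-01"))  # example date is hypothetical
```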
However, the script can also be modified to allow you to check each trial manually, or to analyse individual participants rather than the whole group. The results are saved in __ftcd_data.csv__. The script also creates plots of the average response for each task (stored in the 'ftcd_LI_plots' directory) and a .csv file for each task with the timecourse of the averaged response (stored in the 'ftcd_task_means' directory).

Note that, as explained in the text, we initially specified a baseline period of -10 to 0 seconds, but switched to -5 to 2 s because the longer baseline was not stable. The results computed with the original baseline are retained for comparison and can be found in [ftcd_data_origbaseline.csv][29].

## [Main Analysis for RR][30]

The R Markdown script __COLA_RR_Analysis.Rmd__ reads in [combined_data.csv][31] and computes all results. In theory, this file can be knitted to create the manuscript plus some of the supplements, but we have found it to be temperamental, possibly because of issues with version control of the packages _table1_ and _flextable_. Thus, even when all chunks run OK, you may get an error message on knitting to Word. (We have solved this by commenting out the line in the __makedemog.table__ chunk that just prints ftab, and adding the table to the Word document manually. At the time of writing this is line 1089.)

For __COLA_RR_Results.Rmd__, file access is controlled by the __here__ package. This ensures that the script will read from and write to folders that are recognised relative to the root folder where __COLA_RR_Results.Rmd__ is located. With the script in the top level of the directory, you need to create three folders: __data__, __figs__ and __ftcd_task_means__.
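A sketch of what path handling with the __here__ package looks like: paths are resolved relative to the project root, so the script works regardless of the current working directory. The figure file name below is hypothetical; the folder names are those listed in this wiki.

```r
library(here)  # install.packages("here") if needed

datafile <- here("data", "combined_data.csv")  # <project root>/data/combined_data.csv
figfile  <- here("figs", "timecourse.pdf")     # this figure name is hypothetical
taskdir  <- here("ftcd_task_means")

# dat <- read.csv(datafile)
```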
The __data__ folder should contain the files created by __Gorilla.Processing.Script_forOSF.rmd__:

- __combined_data.csv__
- __combined_data_origbaseline.csv__
- __sess1.csv__
- __sess3.csv__
- __ftcd_data.csv__
- __ftcd_data_origbaseline.csv__

and also:

- __task_details.csv__ - not really data, but stored here as it is read in by this script.

Additional files will be created as the script is run. Note that __ddat_OE.csv__ is created so that it can be read when creating Supplementary outputs; this has separate data for odd and even trials from fTCD.

The folder __ftcd_task_means__ is a large folder of files with means from the L and R channels for each task and participant, needed to create the timecourse plot. Once it has been created and saved, you can skip that chunk.

The folder __figs__ is the folder to which figures will be written; it also includes the predrawn figures __WordCompDemo.pdf__ and __tasktimings.pdf__.

The file __mystyle.docx__ should be saved in the working directory.

## [Power Analysis][35]

This was used with the Stage 1 submission.

## [Scripts for supplementary material][36]

Supplementary materials 6 and 7 are created at the end of the main R Markdown script. Their outputs can be found [here][37].

### [Supplement 8][38]

This is an R Markdown file that creates a long document containing details of the SEM analyses, including those comparing the results with different subsets of participants. It reads in:

- ddati.csv
- ddati_origbaseline.csv
- ddati_OE.csv

These should all be found in the working directory after running __COLA_RR_Analysis.Rmd__. N.B. You need to run the script twice, once with useorigbaseline set to 0 and once with it set to 1. This can be set from the __readcombined__ chunk.

# Materials

The materials for the online tasks are available on [Gorilla][39]. The materials for the fTCD tasks are available [here][40].
[1]: https://osf.io/g9tqh/
[2]: https://osf.io/rkywv/
[3]: https://osf.io/t5gxa/
[4]: https://osf.io/t5gxa/
[5]: https://osf.io/ztaq7/
[6]: https://osf.io/gmhwb/
[7]: https://osf.io/5twkh/
[8]: https://emljames.github.io/GorillaR/GorillaR_Part1.html
[9]: https://osf.io/3xydb/
[10]: https://osf.io/cv7d9/
[11]: https://osf.io/sfv6w/
[12]: https://osf.io/uxdm6/
[13]: https://osf.io/7hdc6/
[14]: https://osf.io/e9u5b/
[15]: https://osf.io/mkajq/
[16]: https://osf.io/46kzf/
[17]: https://osf.io/n3jsc/
[18]: https://osf.io/59b8w/
[19]: https://osf.io/esa5q/
[20]: https://osf.io/hy7ja/
[21]: https://osf.io/6zwye/
[22]: https://osf.io/25jmz/
[23]: https://osf.io/cv7d9/
[24]: https://osf.io/5twkh/
[25]: https://osf.io/8bt4m/
[26]: https://osf.io/xzukj/
[27]: https://osf.io/m64e5/
[28]: https://osf.io/2bt7r/
[29]: https://osf.io/64wjy/
[30]: https://osf.io/yc2r6/
[31]: https://osf.io/7hdc6/
[32]: https://osf.io/bspuz/
[33]: https://osf.io/xkrvy/
[34]: https://osf.io/jhs32/
[35]: https://osf.io/9dbrg/
[36]: https://osf.io/jv28b/
[37]: https://osf.io/rcysd/
[38]: https://osf.io/bpna7/
[39]: https://gorilla.sc/openmaterials/104636
[40]: https://osf.io/g3qms/