## Contents of this repository

### In the **behavioural_data** folder, you will find:

- Data from the post-experiment comprehension questionnaire, and the R Markdown file used to inspect the data and ensure that participants performed above chance.

### In the **post_exp_questionnaires** folder:

- English translation of the post-experiment **comprehension questionnaire**, probing understanding of the contents of the dialogue. This can be found in the document *comprehension_questionnaire.pdf*.
- English translation of the questionnaire on **participants' experience** of the dialogue, asking for feedback on the clarity and comprehensibility of the two voices (sound levels, pronunciation, etc.). This can be found in the document *feedback_questionnaire.pdf*.

### In the **stimuli_text** folder:

- Full text of the dialogue in **Danish**. Each newline corresponds to a turn transition from one character to the other.
- Full translation of the dialogue into **English**, following the same newline convention.

### In the **stimuli_audio** folder:

- Audio files in Danish (i.e. those used for the experiment) with stimuli for each group. Files coded with **1** vs. **2** differ in which subset of breaks contains tones. Files coded with **a** vs. **b** differ in the side of presentation of each voice.
- A ~5-minute audio sample from the **English** translation, giving a feel for what the left/right manipulation sounds like. Please note that this is provided only to help the reader: English stimuli were **NOT** used in the experiment.

### In the **scripts** folder:

- A script that automatically generates audio recordings from the text using two synthesized voices, called **record_quicktime.py**. It only works on Mac, as it interacts with QuickTime and sound settings via AppleScript and uses Mac TTS voices. Details on requirements are provided at the beginning of the script.
- The script used for stimulus delivery, called **play_audio_file.py**, which uses PsychoPy2 as a library. It is handier to run from Spyder or similar, as it requires a bit of interaction with the console to input participant info. After this info has been provided, the script opens a dummy PsychoPy window; click on the window and press 't' to start.
- A script to check the number of button presses during the experiment, **check_button_presses.py**, making sure that participants reacted to the tones. The script reads the button-press logs from the folder **button_press_data** and prints to the console the number of button presses recorded for each participant, next to the participant number.

### In the **break_indices** folder:

- A file with the indices of the turns in which breaks occur. It is used by **record_quicktime.py**. Pure tones were then added manually using Audacity.

### In the **button_press_data** folder:

- All log files recording button presses by each participant, which track their responses to tones occurring in the text. These files are read by the script **check_button_presses.py** to count responses.

### In the **correlation_analysis** folder:

- A tsv file called *whole_brain.txt*, which includes correlation values for each pair of experimentally manipulated words, for each subject and at each time point (in long format). This dataset includes the following columns:
  - **roi_name**, coding for the name of the region of interest (here only "whole brain");
  - **subj_col**, an ID number (1 to 28) coding for the subject number;
  - **beta_col**, coding for the time point after stimulus onset (from 1 to 20, i.e. from 0.5 s to 10 s after stimulus onset, at 0.5 s intervals);
  - **var1** and **var2**, the two words which are correlated. For example, if values refer to the correlation between "here" and "where", then "here" and "where" will be var1 and var2 (order is not important);
  - **cors**: Pearson's correlation value;
  - **euc** and **mean_dist**: Euclidean distance and mean distance, not used in this analysis.
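To illustrate the long format described above, here is a minimal sketch (with made-up correlation values, not real data) of how *whole_brain.txt*-style rows can be loaded into pandas and averaged across subjects for each time point:

```python
# Sketch of the long data format of whole_brain.txt (hypothetical values).
# Each row: one subject x one time point x one word pair.
import pandas as pd

rows = [
    # roi_name,     subj_col, beta_col, var1,   var2,    cors
    ("whole brain", 1,        1,        "here", "where", 0.30),
    ("whole brain", 1,        2,        "here", "where", 0.50),
    ("whole brain", 2,        1,        "here", "where", 0.10),
    ("whole brain", 2,        2,        "here", "where", 0.70),
]
df = pd.DataFrame(rows, columns=["roi_name", "subj_col", "beta_col",
                                 "var1", "var2", "cors"])

# beta_col indexes time points at 0.5 s intervals after stimulus onset.
df["time_s"] = df["beta_col"] * 0.5

# Average correlation across subjects at each time point.
mean_by_time = df.groupby("time_s")["cors"].mean()
print(mean_by_time)
```

The real file would instead be read with something like `pd.read_csv("whole_brain.txt", sep="\t")`; the analyses reported in the paper are the ones in *Correlations.Rmd*, not this sketch.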
- A tsv file called *ROIData.txt*, with the same structure as the previous file, including similarity values by region of interest.
- An R Markdown file called *Correlations.Rmd*, performing the analyses reported in the paper based on the correlation values in the two above-mentioned files. The script is entirely reproducible if the relevant libraries are installed and the structure of the present repository is kept (*ROIData.txt* and *whole_brain.txt* should be placed in the same folder as the script).

### **Scripts**, **first-level GLMs** and **second-level results**

These are shared on GitHub: https://github.com/rbroc/demonstrativesfMRI

### Note

To run **record_quicktime.py** and **play_audio_file.py**, keep the same file structure as in this repository. All Python scripts use Python 2.
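Since *ROIData.txt* shares the long format of *whole_brain.txt* but varies **roi_name**, a minimal sketch (hypothetical ROI labels and values, not the repository's data) of aggregating correlations by region is:

```python
# Sketch of per-ROI aggregation for ROIData.txt-style data
# (ROI names and values are hypothetical).
import pandas as pd

rows = [
    # roi_name, subj_col, beta_col, var1,   var2,    cors
    ("roi_A",   1,        1,        "here", "where", 0.20),
    ("roi_A",   2,        1,        "here", "where", 0.40),
    ("roi_B",   1,        1,        "here", "where", 0.60),
    ("roi_B",   2,        1,        "here", "where", 0.80),
]
df = pd.DataFrame(rows, columns=["roi_name", "subj_col", "beta_col",
                                 "var1", "var2", "cors"])

# Mean correlation across subjects, per region and time point.
roi_means = df.groupby(["roi_name", "beta_col"])["cors"].mean()
print(roi_means)
```

Again, the analyses reported in the paper are those in *Correlations.Rmd*; this only illustrates the shape of the data.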