Link to OpenNeuro dataset: https://openneuro.org/datasets/ds004467

fMRI replicability reporting based on Poldrack et al., 2008: https://www.sciencedirect.com/science/article/pii/S1053811907011020?via%3Dihub

**Experimental design**

- **Design specification:**
    - SS-BlockedLang
        - 12 blocks per run (4 runs)
        - 4 conditions: forward dialogue, backward dialogue, forward monologue, backward monologue
        - 20 s blocks
    - SS-IntDialog
        - interleaved forward and backward speech
        - 3 videos per run (2 runs)
        - videos ranged from 1-3 minutes
    - Auditory language localizer
        - Source: https://evlab.mit.edu/funcloc/ (Scott et al., 2017)
    - Theory of mind localizer
        - Source: http://saxelab.mit.edu/use-our-efficient-false-belief-localizer (Dodell-Feder et al., 2011)

**Task Specification:**

- SS-BlockedLang + SS-IntDialog
    - participants were instructed to pay attention to the videos; a button-press attention check was given between blocks
    - videos were modified versions of clips from Sesame Street, with speech either played normally or reversed
    - see the /stimuli/ folder for additional details

**Planned comparisons:**

- forward > backward
- [forward dialogue > forward monologue] > [backward dialogue > backward monologue]

**Human Subjects**

- **Details on subject sample:**
    - 20 adult participants
    - age range 18-30 years
    - inclusion criteria: right-handed, fluent in English
    - 13 female; 7 male

**Ethics approval:**

- This study was approved by the MIT Committee on the Use of Humans as Experimental Subjects.

**Behavioral performance:**

- in-task performance was measured by the button-press attention check in SS-BlockedLang and SS-IntDialog

**Data acquisition**

- **Image properties (as acquired):**
    - 3-Tesla Siemens Magnetom Prisma scanner
    - 32-channel head coil
    - T1-weighted structural images: 176 interleaved sagittal slices with 1.0 mm isotropic voxels (MPRAGE; TA=5:53; TR=2530.0 ms; FOV=256 mm; GRAPPA parallel imaging, acceleration factor of 2)
    - functional data: gradient-echo EPI sequence sensitive to Blood Oxygenation Level Dependent (BOLD) contrast, 3 mm isotropic voxels in 46 interleaved near-axial slices covering the whole brain (EPI factor=70; TR=2 s; TE=30.0 ms; flip angle=90 degrees; FOV=210 mm)
    - SS-BlockedLang: 185 volumes/run
    - SS-IntDialog: 262 volumes/run

**Data preprocessing**

- **Preprocessing:**
    - used fMRIPrep 1.2.6, based on Nipype 1.1.7
    - see the /metadata/ folder for preprocessing details

**Registration:**

- see /metadata/ for full fMRIPrep specifications and registration details
- BOLD data were resampled to MNI152NLin2009cAsym standard space

**Smoothing:**

- 6 mm smoothing kernel

**Statistical modeling**

- **Intrasubject fMRI modeling info:**
    - general linear model (GLM) using FSL in MNI space
    - first-level modeling used a lab-specific script that uses Nipype to combine tools from several different software packages
    - event regressors: boxcar functions convolved with a standard double-gamma HRF
    - high-pass filter (1/128 Hz) applied to both the data and the model
    - artifact detection was performed using Nipype's RapidART toolbox
    - individual TRs were marked as outliers if (1) there was more than 0.4 units of framewise displacement, or (2) the average signal intensity of that volume was more than 3 standard deviations away from the mean average signal intensity (see the sketch after this list)
    - 1 regressor per outlier volume
    - 1 summary movement regressor (framewise displacement)
    - 6 anatomical CompCor regressors
    - FSL's fixed effects flow was used to combine runs at the level of individual participants
    - a subject-level model was created for each set of usable runs per contrast for each task
    - runs with more than 20% of timepoints marked as outliers were excluded from analysis
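The two outlier criteria above can be expressed compactly. The following Python sketch is only an illustration of those rules (framewise displacement above 0.4 and volume-wise mean intensity more than 3 SD from the grand mean), not the lab's actual RapidART configuration; the function names and input arrays are hypothetical.

```python
import numpy as np

def flag_outlier_trs(bold_4d, framewise_displacement, fd_threshold=0.4, z_threshold=3.0):
    """Flag outlier TRs using the two criteria described above.

    bold_4d: hypothetical (x, y, z, t) array of preprocessed BOLD data
    framewise_displacement: 1D array of FD values, one per TR
    Returns a boolean array of length t, True where the TR is an outlier.
    """
    # Criterion 1: framewise displacement above threshold (0.4 units)
    fd_outliers = framewise_displacement > fd_threshold

    # Criterion 2: volume-wise mean intensity more than 3 SD from the grand mean
    mean_intensity = bold_4d.reshape(-1, bold_4d.shape[-1]).mean(axis=0)
    z = (mean_intensity - mean_intensity.mean()) / mean_intensity.std()
    intensity_outliers = np.abs(z) > z_threshold

    return fd_outliers | intensity_outliers

def outliers_to_regressors(outlier_flags):
    """Build one dummy-coded nuisance regressor per outlier volume."""
    n_trs = len(outlier_flags)
    columns = []
    for t in np.where(outlier_flags)[0]:
        col = np.zeros(n_trs)
        col[t] = 1.0
        columns.append(col)
    return np.column_stack(columns) if columns else np.zeros((n_trs, 0))
```

Under this sketch, a run would be dropped when `outlier_flags.mean() > 0.20`, matching the 20% exclusion rule above.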
**Group modeling info:**

- group modeling used in-lab scripts that implemented FSL's RANDOMISE to perform a nonparametric one-sample t-test of the con values against 0 (5000 permutations, MNI space, threshold alpha = .05), accounting for familywise error (TFCE)

**Statistical inference**

- for statistical tests, used a significance level of alpha = .05, Bonferroni corrected for the number of ROIs within a network

**Inference on statistic image (thresholding):**

- for group RFX analyses, used a threshold of p < .001, TFCE corrected
- for individual maps, used a threshold of p < .001

**ROI analysis:**

- defined subject-specific functional regions of interest (ss-fROIs) for language as the top 100 voxels activated in an individual, within each of 6 predefined language search spaces (and their right-hemisphere mirror regions), for the Intact > Degraded contrast using the auditory language localizer task (Fedorenko et al., 2010)
- also defined ss-fROIs for ToM as the top 100 voxels activated in an individual within each of 7 predefined ToM search spaces for the False Belief > False Photo contrast using the ToM localizer task (Dodell-Feder et al., 2011)

**ISC analysis:**

- task: SS-IntDialog
- ISC analyses were performed using in-lab scripts modeled after the tutorials at https://naturalistic-data.org/
- preprocessed data were smoothed with a 6 mm kernel and then denoised using a GLM (6 realignment parameters, their squares, their derivatives, and their squared derivatives), with outliers excluded using a dummy code, and average CSF activity and linear and quadratic trends regressed out; the timecourse was z-transformed to be centered at 0
- using a leave-one-subject-out approach, we calculated the correlation between the held-out subject's timecourse (i.e., the average response of that subject across all 100 voxels in a given ss-fROI) and (1) the average timecourse of the remaining participants who watched the same version of the stimuli, and (2) the average timecourse of the participants who watched the opposite version of the stimuli, within ss-fROIs (a sketch follows below)
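For concreteness, here is a minimal NumPy sketch of the ss-fROI definition (top 100 voxels for the localizer contrast within a predefined search space) and the leave-one-subject-out ISC computation described above. It is an illustrative reconstruction rather than the in-lab scripts: the array shapes, the input names (`localizer_t_map`, `search_space_mask`), and the grouping of subjects by stimulus version are assumptions made for the example.

```python
import numpy as np

def define_ss_froi(localizer_t_map, search_space_mask, n_voxels=100):
    """Select a subject-specific fROI: the top-n voxels for the localizer contrast
    (e.g. Intact > Degraded or False Belief > False Photo) within a predefined
    search space. Inputs are hypothetical 3D arrays of matching shape."""
    t_in_space = np.where(search_space_mask, localizer_t_map, -np.inf)
    top_flat = np.argsort(t_in_space, axis=None)[::-1][:n_voxels]
    froi = np.zeros(localizer_t_map.shape, dtype=bool)
    froi[np.unravel_index(top_flat, localizer_t_map.shape)] = True
    return froi

def roi_mean_timecourse(bold_4d, froi_mask):
    """Average the denoised, z-transformed BOLD timecourse over the fROI voxels."""
    return bold_4d[froi_mask].mean(axis=0)  # shape: (n_TRs,)

def loso_isc(timecourses_same, timecourses_opposite):
    """Leave-one-subject-out ISC within one ss-fROI.

    timecourses_same: (n_subjects, n_TRs) ROI-average timecourses from participants
        who watched the same stimulus version as the held-out subject.
    timecourses_opposite: (m_subjects, n_TRs) timecourses from participants who
        watched the opposite version.
    Returns, per held-out subject, the correlation with (1) the mean timecourse of
    the remaining same-version subjects and (2) the mean of the opposite-version group.
    """
    same_r, opposite_r = [], []
    opposite_mean = timecourses_opposite.mean(axis=0)
    for i in range(timecourses_same.shape[0]):
        held_out = timecourses_same[i]
        others_mean = np.delete(timecourses_same, i, axis=0).mean(axis=0)
        same_r.append(np.corrcoef(held_out, others_mean)[0, 1])
        opposite_r.append(np.corrcoef(held_out, opposite_mean)[0, 1])
    return np.array(same_r), np.array(opposite_r)
```

In practice this would be run separately for each language and ToM ss-fROI and for each stimulus-version grouping, yielding the same-version and opposite-version correlations described in (1) and (2) above.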