Test-retest reliability of myelin imaging in the human spinal cord: measurement errors versus region- and aging-induced variations
==================================================================================================================================

General description
-------------------

This repository includes a sub-sample of the dataset and the analysis scripts that were used in the article: Lévy et al., Test-retest reliability of myelin imaging in the human spinal cord: measurement errors versus region- and aging-induced variations, PLOS ONE (under review).

Due to IRB restrictions, it was not possible to share the whole dataset publicly. However, we obtained specific consent to share the raw datasets of 4 young volunteers, including 3 who were scanned twice. These data (directory "data") are organized by subject ID; the second scan session is indicated by the suffix "_retest". Within a subject's directory, data are organized by imaging modality (anatomic T2-weighted image, B1 transmit field estimation, Magnetization Transfer Ratio, Magnetization Transfer saturation, Macromolecular Tissue Volume).

Along with these data, we provide the batch scripts to compute the myelin-sensitive metric maps from the raw data and to register them to the spinal cord template and white matter (WM) atlas. We also provide the code:

- to extract the metric values by vertebral level and WM region: folder `code_to_assess_reliability/extract_myelin_sensitive_metric_values`,
- to compute the statistical indexes for reliability assessment, as proposed in the manuscript: folder `code_to_assess_reliability/compute_stats_reliability_indices`,
- to produce the figures of the paper: folder `code_to_assess_reliability/compute_stats_reliability_indices`, files `plot_*.py`.

A Microsoft Excel spreadsheet gathering all estimated qMRI metric values is also provided.
Batch scripts to process raw data
---------------------------------

*Dependencies*

The batch scripts to process the raw data mainly use functions of the Spinal Cord Toolbox (https://sourceforge.net/projects/spinalcordtoolbox/), version 2.2.3. In addition, the following functions, available at https://bitbucket.org/neuropoly/mtv, are used:

- `mtv_compute_b1_scaling.py`
- `mtv_smooth_b1.py`
- `mtv_compute_M0_T1_from_SPGR_ss_eq.m`
- `mtv_correct_receive_profile.m`
- `mtsat_compute.m`

*Organization of the processing pipeline*

The root folder ("data") includes the script `batch_process_all_subjects.sh`. This script enters every subject's folder and runs the batch script specific to that subject. Each subject's folder includes its own batch script, called `d_sp_pain_<ID>_process_all.sh`, which then runs the batch processing script of every imaging modality (in the proper order):

- `d_sp_pain_<ID>_t2_processing.sh`: processes the anatomic T2-weighted image,
- `d_sp_pain_<ID>_b1_processing.sh`: computes the B1 transmit field map,
- `d_sp_pain_<ID>_mtr_processing.sh`: computes the MTR map,
- `d_sp_pain_<ID>_mtsat_processing.sh`: computes the MTsat map,
- `d_sp_pain_<ID>_mtv_processing.sh`: computes the MTV and T1 maps, estimating a B1 receive field map,
- `d_sp_pain_<ID>_register_template.sh`: registers the MNI-Poly-AMU template and WM atlas to each of these maps.
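To picture how the top-level driver walks the data tree, here is a minimal Python sketch (illustrative only; the repository's actual driver is the shell script `batch_process_all_subjects.sh`, and the `process_all_subjects` helper below is hypothetical):

```python
# Hypothetical sketch (not part of the repository) of what
# batch_process_all_subjects.sh does: enter every subject folder under
# "data" and run that subject's own *_process_all.sh driver script,
# following the d_sp_pain_<ID> naming convention described above.
import pathlib
import subprocess


def process_all_subjects(data_dir):
    """Run each subject's d_sp_pain_<ID>_process_all.sh script in turn."""
    processed = []
    for subject_dir in sorted(pathlib.Path(data_dir).iterdir()):
        if not subject_dir.is_dir():
            continue
        # Each subject folder is expected to hold one *_process_all.sh script.
        for script in subject_dir.glob("*_process_all.sh"):
            subprocess.run(["sh", script.name], cwd=subject_dir, check=True)
            processed.append(subject_dir.name)
    return processed
```

Running each script with `cwd=subject_dir` mirrors the shell driver entering the subject's folder before launching its pipeline.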
Code to compute reliability indexes
-----------------------------------

The folder `code_to_assess_reliability` consists of two folders:

- `extract_myelin_sensitive_metric_values`: this folder includes Matlab scripts that run the SCT function `sct_extract_metric` to estimate the myelin-sensitive metric values within different ROIs, storing the results in matrices saved to a .mat file named `metric_values.mat`:
    - `extract_test_retest_values_by_cord_regions.m`: metric values across all vertebral levels (C2, C3, C4, C5) by WM sub-region (dorsal column, lateral funiculi, ventral funiculi) for tested and retested subjects only (n=16),
    - `extract_test_retest_values_by_vertebral_levels.m`: metric values in the whole WM by vertebral level (C2, C3, C4, C5) for tested and retested subjects only (n=16),
    - `extract_test_retest_values_within_wholeWM.m`: metric values in the whole WM across all vertebral levels (C2, C3, C4, C5) for tested and retested subjects only (n=16),
    - `extract_values_all_subjects_by_cord_regions.m`: metric values across all vertebral levels (C2, C3, C4, C5) by WM sub-region (dorsal column, lateral funiculi, ventral funiculi) for all subjects (n=33),
    - `extract_values_all_subjects_by_vertebral_levels.m`: metric values in the whole WM by vertebral level (C2, C3, C4, C5) for all subjects (n=33),
    - `extract_values_all_subjects_within_wholeWM.m`: metric values in the whole WM across all vertebral levels (C2, C3, C4, C5) for all subjects (n=33).
- `compute_stats_reliability_indices`: this folder includes:
    - Matlab scripts to compute the test-retest reliability indexes in the 3 configurations listed above, based on the data loaded from the file `metric_values.mat` (tested and retested subjects only, n=16),
    - Python scripts (`plot_repeatability_by_cord_regions.py`, `plot_repeatability_by_vertebral_levels.py`, `plot_sensitivity_vs_cord_regions.py`, `plot_sensitivity_vs_vertebral_levels.py`) to produce the figures of the paper.
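The extraction scripts above store their results in `metric_values.mat`, which can also be read back in Python, e.g. for custom analyses. A minimal sketch, assuming only that the file is a standard .mat file (the variable names it contains are not specified here, so none are hard-coded):

```python
# Hypothetical helper (not part of the repository): load the matrices saved
# by the Matlab extraction scripts into a plain Python dict, dropping the
# Matlab housekeeping keys (__header__, __version__, __globals__).
from scipy.io import loadmat


def load_metric_values(path):
    """Return the metric matrices stored in a metric_values.mat file."""
    return {k: v for k, v in loadmat(path).items() if not k.startswith("__")}
```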
The calculation of some measurement error indexes, such as the 95% confidence interval for the test-retest difference (CId), is also performed within those scripts.

Microsoft Excel spreadsheet collecting metric values
----------------------------------------------------

The Microsoft Excel spreadsheet named "metric_values.xlsx" collects all results of the metric estimations within each region of interest, for every scan session and every volunteer of the cohort. The first tab of the sheet corresponds to the tested and retested cohort only (n=16), and the second tab to the whole cohort (n=33).