# Visual Performance Fields

This wiki and OSF page document the Visual Performance Fields project by Noah C. Benson, Eline Kupers, Antoine Barbot, Marisa Carrasco, and Jonathon Winawer. The wiki serves primarily to document the dataset on which the analyses in the project were based and the notebook and code on this OSF page that perform the analyses and make the plots in the accompanying paper.

## The Dataset

The dataset provided via the [files](https://osf.io/5gprz/files/) on this OSF page is easiest to interact with and explore using the [neuropythy](https://github.com/noahbenson/neuropythy) Python library. If you are interested in querying the dataset, you are strongly encouraged to use the docker-image or neuropythy; instructions for these are in the section on code below. This section documents the files provided on the OSF page in case you want to download them manually (a short sketch showing how such files might be read directly appears in the next section).

The OSF files on this page include two CSV files and three directories. Each of them is documented here.

* **`DROI_table.csv`** The DROI (distance-based region of interest) table contains data about each of the wedge-like ROIs defined in the visual field. These data include the surface areas of each ROI for each subject as well as other anatomical measures.
* **`summary_table.csv`** The summary table contains a summary of all the property data of all the subjects. Each vertex of each subject in the V1 and V2 ROIs is included and documented in this table.
* **`DROIs/`** The DROIs directory contains one CSV file per subject with information about that subject's DROIs. These data are accumulated to make the `DROI_table.csv` file.
* **`distances/`** This directory contains one file per subject and per hemisphere (362 files total). Each hemisphere has an MGZ file in this directory containing calculations of the distances along the midgray surface from the ventral V1 boundary, from the dorsal V1 boundary, and from the horizontal meridian to each surface vertex.
* **`inferred_maps/`** This directory contains the retinotopic map parameters as deduced using Bayesian inference ([Benson and Winawer, 2018](https://doi.org/10.7554/eLife.40224)). There are 8 MGZ files for every subject in this directory: two sets of files, one per hemisphere, each of which includes four property files for polar angle, eccentricity, pRF size/sigma, and visual-area label. These MGZ files encode surface properties like those in the `distances/` directory; each file contains one property, so one value per vertex.

## Code, Notebook, and Docker-Image

The code for this project is provided in the [github repository](https://github.com/noahbenson/performance_fields) linked to this OSF page. The repository contains a notebook ([`notebooks/performance-fields.ipynb`](https://github.com/noahbenson/performance-fields/blob/master/notebooks/performance-fields.ipynb)) and a single Python source-code file containing functions both to regenerate the dataset from its source components and to load and organize the dataset into a coherent set of Python data structures.

The dataset itself, as defined in the code file just mentioned, is included automatically in the [neuropythy](https://github.com/noahbenson/neuropythy) library and can be accessed with the following code.

```python
>>> import neuropythy as ny
>>> data = ny.data['visual_performance_fields']
```

This dataset contains a number of carefully organized values and tables that are documented below. The dataset will be automatically loaded from this OSF page as data from it is requested.
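If you prefer to download the files from the OSF page and work with them directly rather than through neuropythy, the two CSV tables can be read with pandas and the MGZ surface-property files with nibabel. The sketch below is only illustrative: the local `performance_fields_data/` directory and the example MGZ file name are hypothetical placeholders, so substitute the actual paths of the files you downloaded.

```python
import pandas as pd
import nibabel as nib

# Hypothetical local directory containing manually downloaded OSF files.
data_dir = 'performance_fields_data'

# The DROI and summary tables are plain CSV files.
droi_table = pd.read_csv(f'{data_dir}/DROI_table.csv')
summary_table = pd.read_csv(f'{data_dir}/summary_table.csv')

# Each MGZ file in inferred_maps/ stores one property value per surface
# vertex; nibabel reads MGZ files directly. The file name below is a
# placeholder; substitute an actual file name from the OSF files page.
mgz = nib.load(f'{data_dir}/inferred_maps/111312_lh_inf_polar_angle.mgz')
polar_angle = mgz.get_fdata().flatten()  # one value per vertex
```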
If you have [configured neuropythy](https://github.com/noahbenson/neuropythy/wiki/Configuration) correctly, the downloaded data will be saved in a local cache, so the dataset should load slowly the first time you access it but quickly in subsequent uses, even across Python sessions. Note that to analyze the data in this dataset, you must have a set of credentials from the Human Connectome Project. Obtaining these is documented in the README/splash-page of the project's [github page](https://github.com/noahbenson/performance-fields/).

### The Python Dataset

As shown above, the Python dataset can be accessed via neuropythy: `data = ny.data['visual_performance_fields']`. The members of this dataset object are as follows:

* `data.subject_list` is a tuple of the HCP subject IDs for the subjects included in this dataset.
* `data.inferred_maps` is a nested-dictionary structure containing the retinotopic maps inferred by using Bayesian inference on the retinotopic maps of the subjects in the HCP 7T Retinotopy Dataset.

```python
# Get the polar angles for the vertices of subject 111312's LH
>>> data.inferred_maps[111312]['lh']['inf_polar_angle']
array([131.7419, 132.95761, 122.13758, ..., 0., 0., 0.], dtype=float32)
```

* `data.boundary_distances` is a nested-dictionary structure containing distances between each vertex and a V1 boundary. If `x` is `boundary_distances[sid][h][b][k]`, then `x` is the distance between the `k`'th vertex and boundary `b` (`"ventral"`, `"dorsal"`, or `"horizontal"`) in hemisphere `h` (`"lh"` or `"rh"`) of the subject with ID `sid`.

```python
# Get the distances in mm from the ventral V1 boundary to each
# vertex of subject 111312's LH
>>> data.boundary_distances[111312]['lh']['ventral']
array([26.940, 26.534, 26.588, ..., 155.056, 155.405, 155.800], dtype=float32)
```

* `data.summary_table` is a dataframe that summarizes all of the subject property data employed in the project for the calculation of the various wedge-like distance-based ROIs. The table includes data for every vertex in the V1/V2 ROIs (between 0 and 7 degrees of eccentricity) of both hemispheres of all subjects.
* `data.subjects` is a dictionary of subject objects for all subjects used in the visual performance fields dataset. All subject objects in the `subjects` dictionary include property data on the native hemispheres for inferred retinotopic maps and for V1 boundary distances.

```python
# Get subject 111312's LH
>>> hem = data.subjects[111312].lh
# Get the pRF polar angle (from the HCP 7T Ret. dataset)
>>> hem.prop('prf_polar_angle')
array([165.419, 165.419, 149.389, ..., 133.690, 157.5, 157.5], dtype=float32)
# Get the distance to the V1 horizontal meridian
>>> hem.prop('horizontal_distance')
array([23.883, 23.477, 22.700, ..., 159.411, 159.761, 160.156], dtype=float32)
```

* `data.DROI_details` is a nested-dictionary structure of the various DROI (distance-based ROI) details of each subject and hemisphere. These details include the vertices in each of the distance-based wedge-like ROIs, the surface area of each such vertex, the thickness, and the gray-matter volume.
* `data.DROI_tables` (distance-based regions of interest tables) is a dictionary of data about the ROIs used in the visual performance fields project. `DROI_tables[sid]` is a pandas dataframe of the ROI data for the subject with ID `sid`.
* `data.DROI_table` (distance-based ROI table) is a dataframe summarizing all the data from all the hemispheres and all the distance-based wedge ROIs used in the visual performance fields project.
* `data.DROI_summary` (distance-based ROI summary) is a nested-dictionary data structure that provides easily-plottable summaries of the `DROI_table`. A value `DROI_summary[b][k][u][s]` corresponds to the boundary `b` (`'ventral'`, `'dorsal'`, `'vertical'`, or `'horizontal'`), the property key `k` (`'surface_area_mm2'`, `'mean_thickness_mm'`, or `'volume_mm3'`), angle bin `u` (where `u = 0, 1, 2, 3, 4` indicates 0-10, 10-20, 20-30, 30-40, or 40-50 degrees away from the relevant boundary), and subject `s`. All data are collapsed across eccentricities from 1-6 degrees.
* `data.asymmetry` is a nested-dictionary structure containing the surface-area asymmetry estimates for each subject. The value `asymmetry[k][a][sno]` is the percent asymmetry between the axes defined by comparison name `k` (`'HVA'` for HM:VM asymmetry, `'VMA'` for LVM:UVM asymmetry, `'HVA_cumulative'` for cumulative HM:VM asymmetry, or `'VMA_cumulative'` for cumulative LVM:UVM asymmetry), angle-distance `a` (10, 20, 30, 40, or 50, indicating the angle-wedge size in degrees of polar angle), and subject number `sno` (0-180, for the HCP subject whose ID is `subject_list[sno]`). Asymmetry is defined as `(value1 - value2) / mean(value1, value2)` where `value1` and `value2` are either the horizontal and vertical ROI surface areas, respectively, or the lower-vertical (dorsal) and upper-vertical (ventral) ROI surface areas, respectively. The values reported in this data structure are percent asymmetry: `difference / mean * 100`. A short worked sketch that computes this kind of asymmetry from `DROI_summary` appears at the end of this page.

### The Docker Image

The docker-image associated with this project is documented in the README/splash-page of the project's [github page](https://github.com/noahbenson/performance-fields/). Follow the instructions there to start the docker-image and run the notebook. If you do not have Python installed or are not using a Linux or Mac OS, the docker-image is likely the easiest way to examine the data directly without having to install a number of programs and libraries.
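### Example: Computing Asymmetry from `DROI_summary`

As a closing illustration of the structures described above, the sketch below uses `data.DROI_summary` (assuming exactly the `DROI_summary[b][k][u][s]` nesting documented in the previous section) to compute a group-average horizontal-versus-vertical surface-area asymmetry for each 10-degree wedge, using the percent-asymmetry definition given for `data.asymmetry`. Because it recomputes the asymmetry from group-mean surface areas rather than per subject, the numbers are illustrative and need not match the precomputed values in `data.asymmetry`.

```python
import numpy as np
import neuropythy as ny

# Load the dataset; files are downloaded from this OSF page and cached on
# first access.
data = ny.data['visual_performance_fields']
droi = data.DROI_summary

# Group-mean surface area (mm^2) for each 10-degree polar-angle wedge
# (0-10 through 40-50 degrees from the horizontal or vertical meridian),
# averaged across subjects.
horizontal = np.array([np.mean(droi['horizontal']['surface_area_mm2'][u])
                       for u in range(5)])
vertical = np.array([np.mean(droi['vertical']['surface_area_mm2'][u])
                     for u in range(5)])

# Percent asymmetry, following the definition given above:
# (value1 - value2) / mean(value1, value2) * 100.
hva_pct = (horizontal - vertical) / ((horizontal + vertical) / 2) * 100

for u, pct in enumerate(hva_pct):
    print(f'{u * 10}-{(u + 1) * 10} deg wedge: HVA = {pct:.1f}%')
```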