# How to use the tools described in the paper
---
@[toc](Contents)
---
This page describes the tools provided in this repository in detail. For a simple example using a subject from our dataset, please see the [example page](../Example).
If you are interested in using our Bayesian inference method on your own subjects or applying the retinotopy prior to them, we have attempted to make this as easy as possible. There are two ways to do both of these tasks. If you are able to install and run [Docker](https://www.docker.com), then we suggest that you use the [Dockerized version of the Neuropythy library](https://hub.docker.com/r/nben/neuropythy). Alternatively, you can install and use the Python library directly. Both of these approaches are documented below.
## Basic Setup
### FreeSurfer
In order to use either the retinotopic prior or Bayesian inference, it is necessary to process your subject's anatomical (T1-weighted) MR image(s) using [FreeSurfer's](https://surfer.nmr.mgh.harvard.edu/) `recon-all` command. We will assume that your subject's FreeSurfer directory is `/fs_subjects/sub001`, but you should replace this path with your own. The subjects in the project were processed using FreeSurfer version 5.3.0; we recommend version 5.1.0, 5.3.0, or later.
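Before running either tool, it can be useful to verify that `recon-all` finished and that the subject directory contains the standard FreeSurfer outputs. A minimal sketch follows; the particular files checked here are an illustrative subset of a completed `recon-all` run, not an exhaustive list of everything our tools read:

```python
import os

def check_freesurfer_subject(subject_dir):
    """Return the expected FreeSurfer outputs missing from subject_dir.

    An empty return value suggests (but does not guarantee) that
    recon-all completed for this subject.
    """
    expected = ['surf/lh.white', 'surf/rh.white',
                'surf/lh.sphere', 'surf/rh.sphere',
                'mri/ribbon.mgz']
    return [p for p in expected
            if not os.path.exists(os.path.join(subject_dir, p))]

# e.g., check_freesurfer_subject('/fs_subjects/sub001')
```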
### Installing Docker
If you plan to use the Docker-image, then you will have to install [Docker](https://www.docker.com). This is generally not too difficult, but there are occasionally issues using Docker over remote filesystems or on a computer where you do not have root access. In these cases, it may be easier to use the Python library directly.
Once you have installed Docker, you have two options for obtaining the Neuropythy docker-image: you may download it from [Docker Hub](https://hub.docker.com/) or you may load it from the docker-image included in this OSF repository. Below are examples that use the former method, which requires no additional preparation (Docker will automatically obtain the docker-image). If you wish to use the provided docker-image, you will need to download it, unzip it (so it should be named `neuropythy-docker-image.tar`), then use the `docker load` command. See [this page](https://docs.docker.com/engine/reference/commandline/load/) for information on using `docker load`, or see the [example page](../Example) for an example of how to use it.
Note that if Docker seems to fail without an error message (or with just a message that says 'Killed'), one likely cause is that your Docker configuration does not allocate enough memory to containers; increasing the memory limit may fix this. (On Mac, this can be done by clicking on the whale icon in the menu bar at the upper-right of the screen and selecting Preferences.)
### Installing the Python library
In order to use the Neuropythy python library directly, you will have to complete a few configuration steps:
1. Install [Python](https://www.python.org/), version 2.7 (in theory Neuropythy is compatible with Python 3.5, but this has never been tested).
2. Install [Java](https://java.com/en/download/help/download_options.xml), version 1.8 or greater. Java is required for the Bayesian inference.
3. Install the Neuropythy library. This may be done in two ways:
   1. Use `pip`: you can install Neuropythy with the command `pip install neuropythy`.
   2. Download the [repository from github](https://github.com/noahbenson/neuropythy): once you have done this, you will need to make sure that the root of the repository is on your `PYTHONPATH` *or* run `python setup.py install` from the repository root. If you take this route, note that you will need to consult the `requirements.txt` file and make sure that all of the libraries listed there are installed and work together.
## Applying the Retinotopic Prior to a subject
If you have performed the setup described above, then applying the retinotopic prior to a subject is simple. Both the docker-image and the Python library produce the same outputs, all of which are placed in your subject's FreeSurfer `surf/` directory. The files are named as follows:
* *lh.benson14_angle.mgz*, *rh.benson14_angle.mgz*: polar angle overlays for each hemisphere; angle is always in degrees of rotation about the visual field, between -180° and 180°, where 0° is the upper vertical meridian, -90° is the left horizontal meridian, and 90° is the right horizontal meridian.
* *lh.benson14_eccen.mgz*, *rh.benson14_eccen.mgz*: eccentricity overlays for each hemisphere; eccentricity is always in degrees of the visual field.
* *lh.benson14_varea.mgz*, *rh.benson14_varea.mgz*: visual area label overlays for each hemisphere; vertices with a value of 0 are not in any ROI; otherwise, the ROIs are documented [below](../Usage/#Visual_Area_Labels_151).
* *lh.benson14_sigma.mgz*, *rh.benson14_sigma.mgz*: pRF size estimates; these are in degrees of the visual field.
In addition, volume files that are named identically but without the 'lh.' or 'rh.' prefix can be written out to the subject's FreeSurfer `mri/` directory.
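Given the angle and eccentricity conventions above, converting a vertex's predicted polar angle and eccentricity into Cartesian visual-field coordinates is straightforward. A sketch in plain Python (no Neuropythy required; with real data you would first load the per-vertex values from the MGZ overlays, e.g. with nibabel):

```python
import math

def visual_field_xy(angle_deg, eccen_deg):
    """Convert polar angle (degrees of clockwise rotation from the upper
    vertical meridian) and eccentricity (degrees of visual angle) into
    (x, y) visual-field coordinates, with +x rightward and +y upward."""
    theta = math.radians(angle_deg)
    return (eccen_deg * math.sin(theta), eccen_deg * math.cos(theta))

# A vertex at 0 degrees polar angle and 2 degrees eccentricity lies on
# the upper vertical meridian:
# visual_field_xy(0, 2) -> (0.0, 2.0)
```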
### Using the Docker
To use the Neuropythy docker-image, simply issue the following command:
```bash
docker run --rm -it -v /fs_subjects:/subjects \
nben/neuropythy:latest benson14_retinotopy \
--verbose --surf-format=mgz sub001
```
Note that you will need to replace `/fs_subjects` with your subjects directory and `sub001` with your subject id. Additionally, you may replace all the arguments after `benson14_retinotopy` in the above command with `--help` to see additional options.
The above command runs the latest version of the Neuropythy docker-image; to run the version used in this project, you may replace the `nben/neuropythy:latest` with `nben/neuropythy:v0.5.0` or use the docker-image provided in this repository.
### Using the Python library
Using the Python library is nearly identical to using the docker except that the call is directly to python:
```bash
python -m neuropythy benson14_retinotopy --verbose --surf-format=mgz sub001
```
Note that the above will not work unless your `SUBJECTS_DIR` environment variable is set correctly; alternatively, you may provide a path to the subject's directory instead of just a subject id. You will need to replace `sub001` with your subject id. Additionally, you may replace all the arguments after `benson14_retinotopy` in the above command with `--help` to see additional options.
## Performing Bayesian Inference on a subject
In both usage cases (Neuropythy Docker or Neuropythy Python Library), you will need to prepare your data to be compatible with the Neuropythy library's expectations. The following steps should be followed to ensure this compatibility:
1. **Solve the pRF models for your subject's retinotopy scan(s).** As of this paper's publication, we suggest using either [VistaSoft](https://github.com/vistalab/vistasoft/wiki) (retinotopy tutorial [here](https://github.com/vistalab/vistasoft/wiki/Ernie-Tutorials)) or [analyzePRF](http://kendrickkay.net/analyzePRF/), as these tools yield all the necessary parameters for our tools. See [this page](https://noahbenson.github.io/Retinotopy-Tutorial/) for an informal tutorial on how retinotopy was processed for use in this project. The specific parameters needed are:
* Polar Angle: rotation around the visual field
* Eccentricity: visual field distance from the fovea
* Sigma/pRF Size: the σ (standard deviation) parameter of the Gaussian pRF
* Variance Explained: the fraction of the BOLD-signal variance explained by the pRF model
2. **Project the pRF parameters onto the subject's cortical surface vertices.** Note that if you solved for these quantities on the surface (vertices) rather than in the volume (voxels), this step is not necessary. How to project data to the cortical surface is beyond the scope of this wiki, but [this tutorial](https://noahbenson.github.io/MRI-Geometry/) explains how such a transformation can be done with example code (specifically [this section](https://noahbenson.github.io/MRI-Geometry/#interp-vol)), and [this tutorial](https://noahbenson.github.io/Retinotopy-Tutorial/) ([this section](https://noahbenson.github.io/Retinotopy-Tutorial/#postproc-pRFs)) explains the process we used to create the surface files in this project.
3. **Make sure your retinotopy parameter data are in the correct format.** In terms of file formats, it is okay to store your surface data in a FreeSurfer 'curv' format file (also called a 'morph data' file in Python's [nibabel](http://nipy.org/nibabel/) library) or in an MGH/MGZ or NIfTI volume file with only one non-unitary dimension. More importantly, your retinotopy parameters must be represented in the following ways:
* **Polar angle must be in degrees of rotation clockwise starting from the upper vertical meridian**. This means that the upper vertical meridian is 0°, the right horizontal meridian is 90°, the left horizontal meridian is -90°, and the lower vertical meridian is ±180°.
* **Eccentricity must be in degrees of visual angle from the fovea.**
* **Sigma/pRF size must be degrees of the visual field.**
   * **Variance explained must be a fraction *v* such that 0 ≤ *v* ≤ 1.**
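If your pRF software reports polar angle in a different convention, you will need to convert it before running the inference. As an illustration, the following sketch converts from one common alternative convention, degrees counterclockwise from the right horizontal meridian; whether this matches your data is an assumption you must check:

```python
def to_neuropythy_angle(angle_ccw_deg):
    """Convert polar angle in degrees counterclockwise from the right
    horizontal meridian into degrees clockwise from the upper vertical
    meridian, wrapped into the range (-180, 180]."""
    a = 90.0 - angle_ccw_deg
    a = a % 360.0          # wrap into [0, 360)
    if a > 180.0:          # then shift into (-180, 180]
        a -= 360.0
    return a

# The right horizontal meridian (0 deg in the source convention) maps
# to 90 deg in the required convention:
# to_neuropythy_angle(0) -> 90.0
```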
The following two sections explain how to execute the Bayesian inference engine. In both sections we assume the following:
* Your subject's FreeSurfer ID is *`sub001`* and your FreeSurfer `SUBJECTS_DIR` is *`/fs_subjects`* (so your subject's directory is *`/fs_subjects/sub001`*).
* You have written out the following surface-data files according to the guidelines above. Note that we assume here you are using MGZ files, but it shouldn't be an issue if you use a different compatible format.
* *`/fs_subjects/sub001/prfs/lh.meas_angle.mgz`*
* *`/fs_subjects/sub001/prfs/lh.meas_eccen.mgz`*
* *`/fs_subjects/sub001/prfs/lh.meas_sigma.mgz`*
* *`/fs_subjects/sub001/prfs/lh.meas_vexpl.mgz`*
* *`/fs_subjects/sub001/prfs/rh.meas_angle.mgz`*
* *`/fs_subjects/sub001/prfs/rh.meas_eccen.mgz`*
* *`/fs_subjects/sub001/prfs/rh.meas_sigma.mgz`*
* *`/fs_subjects/sub001/prfs/rh.meas_vexpl.mgz`*
* You expect the inferred retinotopy parameters to be written out to the directory *`/fs_subjects/sub001/prfs`*.
Please make sure to adjust these values in the following sections to match your own configuration.
### Using Docker
To use the Dockerized version of Neuropythy for Bayesian inference, run the following command at a command line:
```bash
docker run \
    -it --rm -v /fs_subjects:/subjects \
    nben/neuropythy:latest \
    register_retinotopy sub001 \
    --verbose \
    --surf-outdir=/subjects/sub001/prfs \
    --surf-format=mgz \
    --vol-outdir=/subjects/sub001/prfs \
    --vol-format=mgz \
    --lh-angle=/subjects/sub001/prfs/lh.meas_angle.mgz \
    --lh-eccen=/subjects/sub001/prfs/lh.meas_eccen.mgz \
    --lh-radius=/subjects/sub001/prfs/lh.meas_sigma.mgz \
    --lh-weight=/subjects/sub001/prfs/lh.meas_vexpl.mgz \
    --rh-angle=/subjects/sub001/prfs/rh.meas_angle.mgz \
    --rh-eccen=/subjects/sub001/prfs/rh.meas_eccen.mgz \
    --rh-radius=/subjects/sub001/prfs/rh.meas_sigma.mgz \
    --rh-weight=/subjects/sub001/prfs/rh.meas_vexpl.mgz
```
The `--surf-format` option may also be set to `curv`, and the `--vol-format` option may also be set to `nifti`.
### Using Python
Using the Neuropythy library directly via Python is another way to use our tools. This way is recommended if you are planning to use Neuropythy for detailed analysis or if you can't install or use Docker for some reason. Generally, this path is more difficult because you will have to make sure that all of the relevant libraries and dependencies work together.
Once you have installed the Neuropythy library, you may run the following command at a command line:
```bash
python -m neuropythy \
    register_retinotopy sub001 \
    --verbose \
    --surf-outdir=/fs_subjects/sub001/prfs \
    --surf-format=mgz \
    --vol-outdir=/fs_subjects/sub001/prfs \
    --vol-format=mgz \
    --lh-angle=/fs_subjects/sub001/prfs/lh.meas_angle.mgz \
    --lh-eccen=/fs_subjects/sub001/prfs/lh.meas_eccen.mgz \
    --lh-radius=/fs_subjects/sub001/prfs/lh.meas_sigma.mgz \
    --lh-weight=/fs_subjects/sub001/prfs/lh.meas_vexpl.mgz \
    --rh-angle=/fs_subjects/sub001/prfs/rh.meas_angle.mgz \
    --rh-eccen=/fs_subjects/sub001/prfs/rh.meas_eccen.mgz \
    --rh-radius=/fs_subjects/sub001/prfs/rh.meas_sigma.mgz \
    --rh-weight=/fs_subjects/sub001/prfs/rh.meas_vexpl.mgz
```
As in the Docker example, the `--surf-format` option may also be set to `curv`, and the `--vol-format` option may also be set to `nifti`.
### Outputs and Notes
#### Generated Files ####
Both of the above methods produce a set of inferred files (with the same basic data conventions as the input files). In addition, a pair of files (`lh.retinotopy.sphere.reg` and `rh.retinotopy.sphere.reg`) is produced; these are the subject's cortical surfaces registered to the model of retinotopy. The only other output files that require additional explanation are the `varea` files (e.g., `lh.inferred_varea.mgz`). These files contain the visual area labels for each vertex. Any vertex whose label is 0 is not predicted to be part of any visual area, and any inferred parameter for this vertex in the other inferred files should be ignored. For a vertex whose label is not 0, the labels are as below.
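When analyzing the inferred maps, the `varea` labels can accordingly be used as a mask. A sketch using plain Python lists (with real data you would first load the per-vertex arrays from the MGZ files, e.g. with nibabel):

```python
def mask_by_varea(values, labels, keep=None):
    """Return values with vertices outside any visual area (label 0)
    replaced by None; if keep is a set of labels, only those visual
    areas are retained."""
    out = []
    for val, lbl in zip(values, labels):
        if lbl == 0 or (keep is not None and lbl not in keep):
            out.append(None)   # ignore inferred parameters here
        else:
            out.append(val)
    return out

# Keep only V1-V3 (labels 1-3):
# mask_by_varea([3.1, 2.5, 0.4], [0, 1, 4], keep={1, 2, 3})
#   -> [None, 2.5, None]
```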
#### Visual Area Labels ####
The labels contained in the various 'varea' files found in the project's `analyses/` directory are as follows.
| Label Value | Visual Area Name |
| ---: | --- |
| 0 | No visual area |
| 1 | V1 |
| 2 | V2 |
| 3 | V3 |
| 4 | hV4 |
| 5 | VO1 |
| 6 | VO2 |
| 7 | LO1 |
| 8 | LO2 |
| 9 | TO1 |
| 10 | TO2 |
| 11 | V3b |
| 12 | V3a |
**Warning**: As of the time of this publication, we have not validated the error or accuracy of areas beyond V1-V3. We include them as a demonstration and a guide for future researchers, but we do not endorse the predictions as accurate or valid. Use these predictions at your own risk.
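Should you want to report area names programmatically, the table above can be expressed as a simple lookup; this is a convenience sketch, not part of Neuropythy's API:

```python
# Label values from the varea files, per the table above.
VAREA_NAMES = {0: 'No visual area', 1: 'V1', 2: 'V2', 3: 'V3',
               4: 'hV4', 5: 'VO1', 6: 'VO2', 7: 'LO1', 8: 'LO2',
               9: 'TO1', 10: 'TO2', 11: 'V3b', 12: 'V3a'}

def count_vertices_per_area(labels):
    """Tally the number of vertices assigned to each named visual area."""
    counts = {}
    for lbl in labels:
        name = VAREA_NAMES.get(lbl, 'Unknown')
        counts[name] = counts.get(name, 0) + 1
    return counts

# count_vertices_per_area([1, 1, 2, 0])
#   -> {'V1': 2, 'V2': 1, 'No visual area': 1}
```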
#### Additional Options ####
Note that other options may be passed to the above `register_retinotopy` commands; to see these, replace all text after the `register_retinotopy` in either command with `--help` (for example, `python -m neuropythy register_retinotopy --help`). In particular, the option `--max-input-eccen=<x>` ignores any vertex whose measured eccentricity is greater than `<x>`, and `--weight-min=<y>` specifies that vertices with a variance explained fraction less than `<y>` should be ignored; you may need to adjust these values from their defaults (no max eccentricity and 0.1 minimum variance explained) depending on your retinotopy experiments. We generally suggest limiting the eccentricity maximum to the radius of the stimulus presented in the retinotopy experiment.
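The vertex exclusion these two options describe can be sketched as follows; this mimics the documented behavior and is not Neuropythy's actual implementation:

```python
def vertex_included(eccen, vexpl, max_eccen=None, weight_min=0.1):
    """True if a vertex survives the cuts implied by --max-input-eccen
    (no maximum by default) and --weight-min (default 0.1)."""
    if max_eccen is not None and eccen > max_eccen:
        return False           # measured eccentricity too large
    return vexpl >= weight_min  # require sufficient variance explained

# With a 12-degree stimulus radius (--max-input-eccen=12):
# vertex_included(14.0, 0.6, max_eccen=12.0)  -> False
# vertex_included(8.0, 0.05, max_eccen=12.0)  -> False
# vertex_included(8.0, 0.6, max_eccen=12.0)   -> True
```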
#### Retinotopy with unknown pRF Sizes
If you have older retinotopy data or if your experiment or analyses don't allow you to deduce a pRF size (σ) parameter, you can create and use dummy files whose contents are all 1s instead. You will also need to pass the `register_retinotopy` command the option `--radius-weight=0`. The sigma parameter is used to improve estimates of which area matches each vertex, but its overall effect is generally small, so it can be ignored without major issue.
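One way to create such dummy files is to write an all-ones overlay in FreeSurfer's 'curv' (morph-data) format, which the format notes above list as acceptable input. A stdlib-only sketch follows; the layout written here is the standard new-style curv format (big-endian, with a 3-byte `0xFFFFFF` magic number), but be sure the vertex count matches the hemisphere you are writing for:

```python
import struct

def write_ones_curv(path, n_vertices):
    """Write a FreeSurfer new-style 'curv' file with every vertex set to 1.0."""
    with open(path, 'wb') as f:
        f.write(b'\xff\xff\xff')                        # new-format magic
        f.write(struct.pack('>iii', n_vertices, 0, 1))  # nvertices, nfaces, vals/vertex
        f.write(struct.pack('>%df' % n_vertices, *([1.0] * n_vertices)))

def read_curv(path):
    """Read back a new-style 'curv' file; returns a list of floats."""
    with open(path, 'rb') as f:
        assert f.read(3) == b'\xff\xff\xff'
        n, _, _ = struct.unpack('>iii', f.read(12))
        return list(struct.unpack('>%df' % n, f.read(4 * n)))

# e.g., write_ones_curv('/fs_subjects/sub001/prfs/lh.meas_sigma', n_vertices)
```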
#### Retinotopy without a measure of confidence ####
You *can* run this tool without a measure of confidence (the variance explained described above); in this case, simply set all vertices' variance explained to 1. However, omitting confidence is more disruptive than omitting the pRF size and may result in very strange inferences. The inference for V1 will probably still be reasonable, but we make no promises.