# PGAM on High Performance Computing: Efficient estimation of neural tuning during naturalistic behavior

Here we provide a Singularity image that allows you to easily deploy PGAM-based estimation of neural tuning functions in HPC settings. The image is the Singularity counterpart of version 1.0 of the Docker image.

# Table of contents

* **Resources**
* **Running the ".sif" image in a singularity container**
* **SLURM job script for parallelized PGAM fits**

# Resources

The image facilitates setting up the PGAM library, available on GitHub. We refer to the paper for additional details on the theory and implementation of the PGAM parameter estimation, and to the Jupyter notebook for a practical introduction to the model implementation and to parameter selection. For NYU Greene users, see the tutorial on how to work with Singularity containers and miniconda, and on where to find overlays and ".sif" images.

# Running the ".sif" image in a singularity container

In this section we show how to run a script that imports the **GAM_library** through a Singularity container. Setting up the container requires uploading the **pgam_1.0.sif** image into your HPC folders, together with a Singularity overlay file system. For NYU Greene HPC users, overlays can be copied and unzipped from the folder **/scratch/work/public/overlay-fs-ext3**, while the PGAM image is readily available at **/scratch/work/public/singularity/pgam_1.0.sif**.
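On NYU Greene, obtaining a working overlay amounts to copying one of the prebuilt compressed images and unzipping it. A minimal sketch; the specific overlay name (`overlay-5GB-200K.ext3.gz`, i.e. 5 GB / 200K inodes) and the `$SCRATCH` destination are assumptions here, and larger overlays in the same folder can be substituted if you plan to install many packages:

```shell
# Copy a prebuilt overlay from the public folder (assumed name; other
# sizes are available in /scratch/work/public/overlay-fs-ext3).
cp /scratch/work/public/overlay-fs-ext3/overlay-5GB-200K.ext3.gz "$SCRATCH/"

# Unzip in place; this leaves overlay-5GB-200K.ext3, which is the file
# you pass to "singularity exec --overlay".
gunzip "$SCRATCH/overlay-5GB-200K.ext3.gz"
```

Mounting the overlay read-only (`:ro`, as in the commands below) lets many jobs share the same overlay concurrently.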
Let's suppose that we have the following script, named **, which simply imports the PGAM library:

```python
import GAM_library as gml

print('\n################################')
print('Successfully imported GAM_library!\n\n')
```

You can then run the script through the Singularity container as follows:

```sh
singularity exec --overlay \
    <path to overlay>:ro \
    <path to sif>/pgam_1.0.sif \
    /bin/bash -c "python <path to script>/"
```

## SLURM job script for parallelized PGAM fits

Let's suppose that we have a script ** that takes a job id as input and fits the corresponding neuron. Below is an example SLURM job script, **run-job.sbatch**, that runs 100 fits:

```sh
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=1:00:00
#SBATCH --mem=8GB
#SBATCH --job-name=fit-pgam
#SBATCH --array=1-100

module purge

# SLURM exposes the index of each array task in $SLURM_ARRAY_TASK_ID
IID=$SLURM_ARRAY_TASK_ID

singularity exec --overlay \
    <path to overlay>:ro \
    <path to sif>/pgam_1.0.sif \
    /bin/bash -c "python <path to script>/ $IID"
```

The PGAM repo provides a standardized pipeline for fitting PGAMs, described in the tutorial. That pipeline script could replace ** in **run-job.sbatch**, provided that the configuration files and inputs are structured appropriately.
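The array mechanism is what distributes the 100 fits: SLURM launches one task per index in `--array=1-100`, and each task reads its own index from the `SLURM_ARRAY_TASK_ID` environment variable and forwards it to the Python script. A minimal sketch of that forwarding step (the fallback to `1` is only an assumption for testing outside SLURM, and "task i fits neuron i" is a convention your script must implement):

```shell
# Each SLURM array task gets its own index in SLURM_ARRAY_TASK_ID;
# forwarding it as a CLI argument keeps the task-to-neuron mapping explicit.
IID=${SLURM_ARRAY_TASK_ID:-1}   # fall back to 1 when run outside SLURM
echo "fitting neuron $IID"
```

The job is then submitted once with `sbatch run-job.sbatch`, and each task appears in `squeue` as `fit-pgam` with its array index appended to the job id.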