<h1>Analysis steps</h1> <p>This project contains the materials to replicate the analyses in:</p> <p>Orme et al. (2019) Nature Ecology and Evolution XX: XX - XX<br> DOI: 10.1038/s41559-019-0889-z</p> <p>Note that the code in this document is not a functional master script: some steps are run on a cluster, so file transfers and other manual steps are also needed. This document instead records the order of the steps and the scripts used.</p> <h2>Missing data</h2> <p>A number of the datasets used in the study are freely available for research but require registration to access, so we have not duplicated them here. The missing files are:</p> <p><strong><code>Bird_maps_2018/BOTW/BOTW.gdb</code></strong>: See <a href="http://datazone.birdlife.org/" rel="nofollow">http://datazone.birdlife.org/</a>. Used to provide range limits for the study species and the wider Atlantic Forest avifauna.</p> <p><strong>SOS Mata Atlântica forest cover maps</strong>: See <a href="https://www.sosma.org.br/projeto/atlas-da-mata-atlantica/" rel="nofollow">https://www.sosma.org.br/projeto/atlas-da-mata-atlantica/</a>. Used to provide broad-scale maps of forest cover for extrapolation (2013-2014 assessment) and to calculate forest cover at the time the surveys were conducted (2010-2011 assessment).</p> <p><strong>Instituto Florestal forest cover maps</strong>: See <a href="http://iflorestal.sp.gov.br/" rel="nofollow">http://iflorestal.sp.gov.br/</a>. 
Higher resolution forest cover maps used to calculate forest cover around some survey sites.</p> <h2>Preparing the study grid and calculating forest cover</h2> <p>These steps generate a grid of the proportion of forest cover across the study area, used both for later modelling and to define the region of interest.</p> <ol> <li>Use Python to create a pickle file of the most recent SOS maps of the ACF fragments:<pre class="highlight"><code>cd projection_python
python3 extract_fragment_geometries.py</code></pre> </li> </ol> <p>Note that this script expects to find the 2013-2014 SOS Mata Atlântica maps in the <code>GIS_layers</code> folder. This dataset requires permission for research use, so it is not provided here. The script expects to find the set of state-by-state shapefiles at <code>GIS_layers/Atlantic Forest cover maps/2013-2014/*.shp</code>.</p> <ol start="2"> <li> <p>Use Python to create a high resolution regular grid over the study area, with ~ 9 million 1 km cells across the region:</p> <pre class="highlight"><code>python3 make_grid.py</code></pre> </li> <li> <p>The outputs of those scripts are:</p> <ul> <li><code>ACF_fragments_pickle</code> (fragment geometries and a spatial index)</li> <li><code>grid.pickle</code> (shapely rectangular grid geometries)</li> <li><code>grid_centroids.pickle</code> (shapely Points of grid centroids)</li> <li><code>grid_centroids_XY.pickle</code> (coordinates of centroid points)</li> </ul> <p>We used a cluster to calculate forest cover on the fine grid from those outputs. The details will change depending on the cluster available, but we used a cluster running PBS to batch the calculation. The command below submits an array job to the queue, splitting the cells into 40 subjobs and calculating the cover in parallel. 
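In outline, each subjob just works through its own contiguous slice of cell indices. The sketch below is illustrative, not the actual logic of <code>calculate_cover_grid.py</code>; the cell and block counts follow the figures quoted in the text, and <code>block_bounds</code> is a hypothetical helper:

```python
# Sketch of splitting the ~9 million grid cells into 40 array subjobs by
# starting offset. Illustrative only: the real script may divide the work
# differently.
import math

N_CELLS = 9_000_000   # ~ 9 million 1 km grid cells
N_BLOCKS = 40         # one PBS subjob per block

def block_bounds(block_id, n_cells=N_CELLS, n_blocks=N_BLOCKS):
    """Return the [start, stop) cell indices handled by one subjob."""
    size = math.ceil(n_cells / n_blocks)
    start = block_id * size
    stop = min(start + size, n_cells)
    return start, stop

# Each subjob would then loop over its own slice of the grid, e.g.:
# for i in range(*block_bounds(int(os.environ["PBS_ARRAY_INDEX"]))): ...
```

The subjob index would typically come from the PBS array index environment variable, so all 40 blocks can run the same script with different offsets.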
This is just repeatedly calling the script <code>calculate_cover_grid.py</code> with different starting offsets. It takes about 2.5 hours per subjob (so would be about 4 days of unparallelised runtime).</p> <pre class="highlight"><code>qsub calculate_cover_grid_pbs.sh</code></pre> </li> <li> <p>That step populates a folder <code>cover_grid</code> with the 40 separate outputs. That folder needs to be recovered from the cluster and then stitched back together using:</p> <pre class="highlight"><code>python3 compile_cover_grid.py</code></pre> <p>That script compiles a raster of proportion forest cover for modelling, binary rasters of forest presence at 1 km, 10 km and 20 km resolutions, and vector representations of those binary rasters.</p> </li> </ol> <h2>Identification of Atlantic Forest avifauna</h2> <p>These steps identify the set of species to be studied from the global avifauna maps. We have a set of species recorded from the field in the study sites, but also want to project the models for the whole avifauna, so we cut the BirdLife Birds of the World maps down to the species that intersect the forest cover vectors from the previous step.</p> <ol> <li> <p>A Python script that uses the 5 km resolution forest cover polygons to subset the BOTW geodatabase:</p> <pre class="highlight"><code>cd ../Bird_maps_2018
python3 subset_botw.py</code></pre> <p>Again, this step uses the Birds of the World distribution dataset, which requires registration with BirdLife International and so is not provided here. 
The script expects to find the geodatabase at <code>Bird_maps_2018/BOTW/BOTW.gdb</code>.</p> </li> <li> <p>Compile (~ 1 hour runtime) the bird species ranges into a shapefile (<code>cleaned_ranges.shp</code>) that:</p> <ul> <li>only includes the Atlantic Forest birds we want to model (<code>AllSpecies2018.csv</code>),</li> <li>has a single feature per species, and</li> <li>passes validity checks.</li> </ul> <pre class="highlight"><code>R CMD BATCH --vanilla make_cleaned_ranges.R</code></pre> </li> <li> <p>Get the coastline buffer used to remove coastal range edges. This is based on GSHHS, but needs some minor modifications to remove clearly coastal features from ranges that occur inland according to GSHHS. The folder here includes two excerpts from the GSHHS: the full dataset is large and regularly updated (<a href="https://www.ngdc.noaa.gov/mgg/shorelines/" rel="nofollow">https://www.ngdc.noaa.gov/mgg/shorelines/</a>), so we have saved the subset used in this study here, along with shapefiles containing the specific modifications to the coastline required.</p> <pre class="highlight"><code>cd ../Coastline
R CMD BATCH --vanilla create_buffered_coastline.R</code></pre> <p>The script creates <code>new_world_continental_coastline.shp</code> and <code>new_world_continental_coastline_buffered.shp</code>.</p> </li> <li> <p>Combine (~ 15 minute runtime) the cleaned ranges and the coastline buffer to extract multipolylines showing the continental range extents of each species. This step also validates the species matching against the occupancy data.</p> <pre class="highlight"><code>cd ../Bird_maps_2018
R CMD BATCH --vanilla extract_continental_margins.R</code></pre> </li> </ol> <h2>Calculation of forest cover at sample sites</h2> <p>Calculate the forest cover for the sample sites using appropriate high resolution data. 
This requires a set of high resolution GIS data found in the <code>GIS_layers</code> folder and creates the file <code>forest_cover_hi_res.csv</code>.</p> <pre class="highlight"><code>cd ../forest_cover_data
R CMD BATCH --vanilla forest_cover_calculation_hi_res.R</code></pre> <p>A second script is also provided that calculates forest area, proportion forest cover and buffer overlap between sites for differing buffer distances. These files are used to explore the influence of buffer distance on model parameters in the statistical analysis below.</p> <p>Note that both scripts expect to find the 2010-2011 SOS Mata Atlântica maps and the Instituto Florestal 2005 forest cover map in the <code>GIS_layers</code> folder; both will require the data to be requested as noted above.</p> <h2>Compilation of modelling data</h2> <p>Compile the model data (species presence/absence, forest cover and distance to range edge for the study sites and study species) and create a date-stamped file <code>model_data_YYYYMMDD.csv</code> in the root directory.</p> <pre class="highlight"><code>cd ../Bird_maps_2018
R CMD BATCH --vanilla compile_model_data.R</code></pre> <h2>Statistical analysis</h2> <h3>Cross validation</h3> <p>The following steps run the cross validation on the main GLMER model, looking at leave-one-out cross validation for each study and for single-site models. 
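The structure of the leave-one-out loop over studies can be sketched as follows. This is a schematic only: the real models are GLMERs fitted in R by the scripts below, so <code>fit</code> and <code>score</code> here are toy placeholders.

```python
# Schematic of leave-one-out cross validation over studies: each study is
# held out in turn, the model is refitted on the rest, and predictions are
# scored on the held-out study. 'fit' and 'score' are placeholders.

def leave_one_out(studies, fit, score):
    """Return a held-out score for each study key in `studies`."""
    results = {}
    for held_out in studies:
        training = {k: v for k, v in studies.items() if k != held_out}
        model = fit(training)
        results[held_out] = score(model, studies[held_out])
    return results

# Toy example: the 'model' is just the mean of the training observations
# and the score is the absolute error against the held-out study's mean.
studies = {"A": [0.2, 0.4], "B": [0.6, 0.8], "C": [0.5, 0.5]}
mean = lambda xs: sum(xs) / len(xs)
fit = lambda tr: mean([x for xs in tr.values() for x in xs])
score = lambda m, held: abs(m - mean(held))
cv_scores = leave_one_out(studies, fit, score)
```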
The first script fits all the models and, because it takes a reasonably long time to run (~ 5.5 hours), saves the resulting models in <code>cross_validation_models.Rdata</code>.</p> <pre class="highlight"><code>cd Statistical_analysis/cross_validation
R CMD BATCH --vanilla fit_cross_validation_models.R</code></pre> <p>The next script loads the models, checks for convergence and compiles a data frame of cross validation scores in <code>cross_validation_scores.csv</code> for use in generating Figure S2.</p> <pre class="highlight"><code>R CMD BATCH --vanilla get_cross_validation_results.R</code></pre> <h3>Buffer radius</h3> <p>The paper includes an analysis of the impact of the choice of buffer radius on the models; these steps perform that analysis. It is lengthy, as it involves repeatedly fitting the main GLMER with different choices of buffer distance and recalculating forest cover for those buffers. We again used a cluster running PBS to run sets of models in parallel. To replicate this, you will need to set up the required files and folder structure on your own cluster.</p> <p>First, a batch array job is submitted to the cluster that runs <code>buffer_radius.R</code> for blocks of 5 models.</p> <pre class="highlight"><code>qsub buffer_radius_pbs.sh</code></pre> <p>That creates the <code>output_models</code> directory and populates it with the models fitted with different buffer distances. Once those have been retrieved from the cluster, the following command compiles the contents of those models and generates plots.</p> <pre class="highlight"><code>R CMD BATCH --vanilla buffer_model_compiler.R</code></pre> <h3>Main statistical analysis</h3> <p>The main statistical analyses presented in the paper are described in <code>Statistical_analysis/Script_Orme_etal.r</code>. This creates the <code>.Rdata</code> files and the <code>glmer_params.csv</code> file in that directory. 
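To illustrate how fitted parameters of this kind are used downstream, the following sketch turns GLMM fixed effects into a probability of occurrence via the inverse logit. The coefficient names and values are hypothetical placeholders, not the estimates stored in <code>glmer_params.csv</code>, and random effects are ignored (set to zero).

```python
# Inverse-logit prediction from hypothetical GLMM fixed effects.
# Coefficients below are illustrative, not the fitted estimates.
import math

def occupancy_probability(intercept, beta_cover, beta_dist, cover, dist_to_edge):
    """P(occurrence) from a linear predictor in proportion forest cover
    and distance to range edge, fixed effects only."""
    eta = intercept + beta_cover * cover + beta_dist * dist_to_edge
    return 1.0 / (1.0 + math.exp(-eta))

# e.g. a fully forested site versus a cleared site, both at the range core:
p_forested = occupancy_probability(-1.0, 3.0, -0.5, cover=1.0, dist_to_edge=0.0)
p_cleared  = occupancy_probability(-1.0, 3.0, -0.5, cover=0.0, dist_to_edge=0.0)
```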
</p> <h3>Extrapolate model to Atlantic Forest avifauna</h3> <p>These steps project the model onto all of the terrestrial ACF species. We use a fine grid to get distances from the range edge across all species ranges and then calculate the probability of occurrence under the model parameters.</p> <ol> <li> <p>Calculate maps of species distance to range edge across the Atlantic Coastal Forest to model probabilities of occurrence. This step is set up to run on a cluster to reduce the processing time, which is considerable: ~ 18 hours for each of 40 blocks of ~ 20 species, so roughly a month of unparallelised runtime. The maximum block run time (genera with large ranges) approaches 24 hours. The PBS file <code>calculate_distances_pbs.sh</code> runs the Python file <code>calculate_distances.py</code>. You will need to look in this file to find the required files and directory structure to set it up on a cluster.</p> <pre class="highlight"><code>qsub ./calculate_distances_pbs.sh</code></pre> <p>This creates a directory <code>distances</code> containing a set of gzipped files giving the distance to range edge across the study region for each species. These are very large (~ 30 GB in total) and are not duplicated here.</p> </li> <li> <p>Generate map surfaces: the following script extracts the distances to edge from the previous step and combines them with the model parameters to generate a species richness map, a predicted sum of incidence map and some other raster datasets.</p> <pre class="highlight"><code>python3 compile_occupancy.py</code></pre> </li> </ol> <h2>Figures</h2> <p>The <code>Statistical_analysis</code> folder contains R code to generate the figures from the model outputs and the calculated data.</p>
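<p>The map compilation in the extrapolation step amounts to combining per-species occurrence probabilities cell by cell. A minimal sketch follows; the tiny grids, the thresholded richness rule and the function name are illustrative assumptions, not the actual logic of <code>compile_occupancy.py</code> (which may, for example, sum probabilities to get expected richness instead).</p>

```python
# Sketch of compiling per-species P(occurrence) grids into summary
# surfaces: a richness map and a summed-incidence map. The 2x2 grids
# and the 0.5 presence threshold are illustrative.

def compile_surfaces(species_probs, threshold=0.5):
    """species_probs: dict of species -> 2D list of P(occurrence) per cell.
    Returns (richness, incidence): count of species with p >= threshold,
    and the summed incidence, per cell."""
    grids = list(species_probs.values())
    rows, cols = len(grids[0]), len(grids[0][0])
    richness = [[0] * cols for _ in range(rows)]
    incidence = [[0.0] * cols for _ in range(rows)]
    for grid in grids:
        for r in range(rows):
            for c in range(cols):
                incidence[r][c] += grid[r][c]
                if grid[r][c] >= threshold:
                    richness[r][c] += 1
    return richness, incidence

richness, incidence = compile_surfaces({
    "sp1": [[0.9, 0.2], [0.6, 0.1]],
    "sp2": [[0.8, 0.7], [0.4, 0.0]],
})
```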