# Methods for analyzing large neuroimaging datasets #

Volume co-editors: Hervé Lemaître, PhD (Institut des Maladies Neurodégénératives, Bordeaux, France) & Robert Whelan, PhD (Trinity College Dublin, Dublin, Ireland)

## Scope of the volume ##

Human brain imaging is in a period of profound change. There is growing recognition that sample sizes must increase drastically to achieve adequate statistical power and reproducibility. Accordingly, several large neuroimaging databases have been established in recent years. For example, the Adolescent Brain Cognitive Development study (ABCD) is a 10-year, $300 million neuroimaging project that will recruit 10,000 participants (currently mid-way through). Importantly, ABCD data will be fully open access, available with minimal restrictions. Similar open-access databases include the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS), data-sharing platforms, and databases belonging to large consortia (e.g., IMAGEN). The UK Biobank will collect neuroimaging data from 100,000 people (currently, 50,000 have been scanned), and these data are available to researchers for a very modest access fee. These datasets will open up neuroimaging to a new generation of scientists. The proposed book will guide both new and experienced researchers through the latest methods in neuroimaging analysis. We propose to organize the volume into three main sections (see below).

## Relation to existing research and literature ##

Despite the recent proliferation of large, easily available neuroimaging datasets, there are, to our knowledge, no books oriented towards the specialized methods required to analyze them. There are several books on the basics of MRI design and analysis (see list below), but these focus on methods appropriate to the relatively small number of subjects that a single laboratory would acquire. For example, Ombao et al. (2016) treat the fundamentals of magnetic resonance imaging and electroencephalography: MRI physics, experimental design, preprocessing, and statistical analysis.

## Originality ##

The primary methodological considerations for large neuroimaging datasets revolve around 1) deploying scalable methods to process large volumes of data and 2) using appropriate statistics, or advanced methods such as machine learning and deep learning, to uncover between-group or individual differences. There is currently no book that describes computer/software requirements, automated processing pipelines, statistics for large, multi-site studies (traditional significance testing is uninformative at very large sample sizes), or newer methods such as deep learning (which requires very large datasets to be tractable). The data in very large neuroimaging datasets have either already been collected or will be collected according to a consensus protocol; therefore, in contrast to existing books, we do not include any chapters on image acquisition.
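The point above about traditional significance testing can be illustrated with a short simulation (our own sketch, not drawn from the proposal): with 50,000 subjects per group, even a trivially small true group difference reaches any conventional significance threshold, so effect sizes such as Cohen's d carry the real information. The group means and the normal-approximation p-value below are illustrative assumptions, not data from any of the datasets mentioned.

```python
import math
import random

def cohens_d(a, b):
    """Standardized mean difference (pooled-SD denominator)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    sp = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2))
    return (ma - mb) / sp

def welch_z_pvalue(a, b):
    """Two-sided p-value for a mean difference, using the normal
    approximation to Welch's t (accurate at large n)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
n = 50_000                                              # subjects per group
group1 = [random.gauss(0.00, 1.0) for _ in range(n)]    # e.g., controls
group2 = [random.gauss(0.05, 1.0) for _ in range(n)]    # tiny true shift (d = 0.05)

p = welch_z_pvalue(group1, group2)
d = cohens_d(group1, group2)
print(f"p = {p:.2e}, Cohen's d = {d:.3f}")
# p is far below any conventional threshold, yet the effect is negligible
```

At this sample size the p-value answers only "is the difference exactly zero?", which is almost never the question of interest; this is one reason the volume emphasizes effect sizes and predictive (machine-learning) approaches over traditional significance testing.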