
Note: This OSF project is public. OSF may list the linked figshare dataset 14721282 as private, but its contents are publicly viewable.




Category: Project

Description: The neural basis of object recognition and semantic knowledge has been the focus of a large body of research, but given the high dimensionality of object space, it is challenging to develop an overarching theory of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is growing interest in using large-scale image databases for neuroimaging experiments. Traditional image databases are based on manually selected object concepts, often with a single image per concept. In contrast, 'big data' stimulus sets typically consist of images that can vary significantly in quality and may be biased in content. To address this issue, recent work developed THINGS: a large stimulus set of 1,854 object concepts and 26,107 associated images. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to all concepts and 22,248 images in the THINGS stimulus set. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research on visual object processing in the human brain.

This repository contains the code that was used to perform the analyses described in this paper: Grootswagers, T., Zhou, I., Robinson, A.K. et al. Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams. Sci Data 9, 3 (2022).

- THINGS images and concept descriptions obtained from: (see also:
- The raw data, preprocessed data, and grand-average RDMs are publicly available on OpenNeuro:
- RDMs for single subjects are publicly available on figshare: (note: OSF sometimes incorrectly lists this as private)

See the README in the code folder for instructions on how to reproduce the figures in the paper.
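The representational dissimilarity matrices (RDMs) mentioned above can be illustrated with a minimal sketch. This is not the dataset's own analysis code: the array shapes, variable names, and the choice of 1 − Pearson correlation as the dissimilarity measure are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: build an RDM from hypothetical per-concept
# response patterns. Shapes and names are assumptions, not this
# repository's API (THINGS-EEG itself covers 1,854 concepts).
rng = np.random.default_rng(0)
n_concepts, n_features = 5, 64          # toy sizes for demonstration
patterns = rng.normal(size=(n_concepts, n_features))

# Dissimilarity = 1 - Pearson correlation between concept patterns,
# yielding a symmetric n_concepts x n_concepts matrix with zero diagonal.
rdm = 1.0 - np.corrcoef(patterns)

assert rdm.shape == (n_concepts, n_concepts)
assert np.allclose(rdm, rdm.T)          # symmetric
assert np.allclose(np.diag(rdm), 0.0)   # zero self-dissimilarity
```

Averaging such per-subject RDMs across participants would give a grand-average RDM like those released on OpenNeuro.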

License: CC-By Attribution 4.0 International



