This dataset provides the data presented in our paper "Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey" [1].

A critical aspect of auditory scene analysis is the ability to extract a sound of relevance (the figure) from a background of competing sounds (the ground), such as when we hear a speaker in a cafe. This is formally known as auditory figure-ground segregation, and colloquially as the "cocktail party problem". To understand how the brain segregates overlapping sounds, we need to record from neurons, i.e. single cells in the brain. Since systematic single-cell brain recordings cannot be performed in humans, animal models are required for this research. Monkeys are best suited as animal models of human auditory perception because their auditory abilities and the organization of their auditory brain closely resemble those of humans. However, before generalizing findings from monkeys to humans, we need to establish that monkeys recruit similar brain regions to humans for auditory figure-ground segregation.

To compare the underlying brain networks in humans and monkeys, we need to employ sounds that are equally relevant to both species, so these sounds can be neither human speech nor monkey calls. We therefore created artificial sounds in which an auditory object, made of tones repeating in time, overlaps in both time and frequency with a "background". Extracting this auditory object requires integration across both time and frequency, similar to extracting a voice at a noisy party. These artificial sounds thus simulate the challenges faced in real-world listening yet are devoid of semantic confounds. This stochastic figure-ground stimulus probes fundamental mechanisms of figure-ground perception that are equally relevant to the rhesus macaque, in which we can carry out both system-level characterization and systematic neuronal specification of the system. Here, we investigated the neural bases of pre-attentive, stimulus-driven auditory segregation in rhesus macaques.
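To make the stimulus design concrete, the following is a minimal sketch of how a stochastic figure-ground (SFG) sound can be synthesized: a sequence of short chords of random pure tones (the ground), with a fixed set of frequency components repeating across chords during the figure window. All parameter values here (chord duration, component counts, frequency range, ramp length) are illustrative assumptions, not the exact values used in the paper.

```python
import numpy as np

def make_sfg(duration=2.0, chord_dur=0.05, fs=44100,
             n_bg=10, n_fig=4, fig_start=0.5, fig_dur=1.0,
             fmin=200.0, fmax=7000.0, seed=0):
    """Synthesize a toy stochastic figure-ground (SFG) stimulus.

    The sound is a sequence of short chords. Each chord contains n_bg
    randomly drawn pure-tone components (the ground). During the figure
    window, n_fig fixed-frequency components repeat in every chord,
    forming a 'figure' that can only be extracted by integrating
    across both time and frequency.
    """
    rng = np.random.default_rng(seed)
    n_chords = int(round(duration / chord_dur))
    n_samp = int(round(chord_dur * fs))
    t = np.arange(n_samp) / fs
    pool = np.geomspace(fmin, fmax, 100)        # log-spaced frequency pool
    fig_freqs = rng.choice(pool, size=n_fig, replace=False)
    fig_chords = range(int(fig_start / chord_dur),
                       int((fig_start + fig_dur) / chord_dur))
    # Raised-cosine on/off ramps to avoid clicks between chords
    ramp = int(0.005 * fs)
    env = np.ones(n_samp)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    chords = []
    for i in range(n_chords):
        freqs = list(rng.choice(pool, size=n_bg, replace=False))
        if i in fig_chords:
            freqs += list(fig_freqs)            # repeating figure components
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord * env)
    sig = np.concatenate(chords)
    return sig / np.max(np.abs(sig))            # normalize to [-1, 1]
```

With the defaults above, a 2 s stimulus at 44.1 kHz consists of 40 chords of 50 ms each, with the figure present from 0.5 s to 1.5 s.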
We employed non-invasive functional magnetic resonance imaging (fMRI) and presented stochastic figure-ground (SFG) artificial sounds to awake, passively listening rhesus macaques (Macaca mulatta) trained to perform visual fixation for fluid reward. EPI images were acquired using a sparse acquisition protocol on a 4.7 T upright Bruker scanner (TR/TA/TE = 10 s / 2.011 s / 21 ms) while the two animals performed a stimulus-irrelevant visual fixation task. 360 volumes (135 each for the figure and control conditions) were acquired per session per animal. Analysis was carried out using SPM software (SPM12). Single-subject inference was carried out by applying a general linear model (GLM). We observed significant activation in the anterior superior temporal gyrus of both monkeys, showing that monkeys use similar regions of their auditory cortex (rostral belt and parabelt) as humans to separate overlapping sounds. This paves the way for recording from single cells in the monkey brain, which will enable us to understand how the brain solves the cocktail party problem.

If you use this data, please cite the following paper:

[1] Felix Schneider*, Pradeep Dheerendra*, Fabien Balezeau, Michael Ortiz-Rios, Yukiko Kikuchi, Christopher I. Petkov, Alexander Thiele, and Timothy D. Griffiths. "Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey." Scientific Reports 8, no. 1 (2018): 1-8. https://doi.org/10.1038/s41598-018-36903-1
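The core of single-subject GLM inference (a figure-versus-control t-contrast per voxel) can be sketched as below. This is a toy numpy illustration under simplifying assumptions (a plain two-regressor indicator design and invented data dimensions), not the SPM12 pipeline used for the paper.

```python
import numpy as np

def glm_contrast(Y, X, c):
    """Ordinary least-squares GLM with a t-contrast per voxel.

    Y : (n_volumes, n_voxels) data matrix
    X : (n_volumes, n_regressors) design matrix
    c : (n_regressors,) contrast vector, e.g. figure minus control
    Returns the t-statistic for contrast c at each voxel.
    """
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof      # residual variance per voxel
    var_c = c @ np.linalg.pinv(X.T @ X) @ c      # contrast variance factor
    return (c @ beta) / np.sqrt(sigma2 * var_c)

# Toy example: 270 volumes (135 figure, 135 control), 3 voxels
rng = np.random.default_rng(0)
X = np.zeros((270, 2))
X[:135, 0] = 1.0                  # figure regressor
X[135:, 1] = 1.0                  # control regressor
Y = rng.normal(0.0, 1.0, (270, 3))
Y[:135, 0] += 2.0                 # voxel 0 responds more strongly to figure
t = glm_contrast(Y, X, np.array([1.0, -1.0]))   # figure > control
```

In this toy run, the t-statistic is large only for the voxel with a genuine figure response; the other voxels stay near zero.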