This dataset provides the data presented in our paper "Neuronal figure-ground responses in primate primary auditory cortex". A critical aspect of auditory scene analysis is the ability to extract a sound of relevance (figure) from a background of competing sounds (ground), such as when we hear a speaker in a café. This is formally known as auditory figure-ground segregation and colloquially as the "cocktail party problem". To understand how the brain segregates overlapping sounds, we need to record from neurons, i.e. single cells in the brain. Since systematic single-cell recordings are not feasible in humans, we need to use animals in this research. Monkeys are best suited as animal models of human auditory perception because their auditory abilities and the organization of their auditory brain are similar to those of humans. However, before we generalize findings from monkeys to humans, we need to establish that monkeys use similar brain regions as humans for auditory figure-ground segregation.

To compare the underlying brain networks in humans and monkeys, we need to employ sounds that are equally relevant to both species, so these sounds can be neither human speech nor monkey calls. We therefore created artificial sounds that contain (i) an auditory "object" made of tones repeating in time and (ii) "background" masker elements that overlap in time and frequency with the "object". Extracting this auditory object requires integration across both time and frequency, similar to extracting a voice at a noisy party. These artificial sounds thus simulate the challenges of real-world listening while being devoid of semantic confounds.

Here, we investigated auditory figure-ground segregation based on neuronal multi-unit activity recorded from rhesus macaques that attentively listened to stochastic figure-ground (SFG) stimuli. The experiment was designed as a Go/No-go figure-detection task: the monkeys indicated detection of an auditory figure by releasing a touch bar. The target-to-masker ratio (figure coherence) was varied pseudorandomly from trial to trial (either 8 or 12 coherent elements, with equal probability). Auditory figures were presented in 60% of trials; the remaining trials contained no temporally coherent elements (catch trials). We report figure-ground modulation of neuronal multi-unit activity across the auditory cortex, including the primary auditory cortex (A1).
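For readers who want a concrete picture of how such a stimulus is built, the sketch below generates an SFG-like sound in Python/NumPy. It is an illustrative simplification, not the stimulus code used in the study: the frequency pool, chord duration, number of masker tones, and figure timing (here the figure spans the whole sound rather than starting partway through) are assumed placeholder values. Only the core idea follows the description above: a set of figure frequencies repeated coherently across chords, embedded in masker tones redrawn at random on every chord.

```python
import numpy as np

def make_sfg_stimulus(coherence=8, figure_present=True,
                      n_chords=40, chord_dur=0.05,
                      n_masker_tones=10, fs=48000,
                      freq_pool=None, rng=None):
    """Generate a stochastic figure-ground (SFG)-like stimulus.

    Each chord contains masker tones drawn at random from a log-spaced
    frequency pool; on figure trials, `coherence` frequencies are held
    fixed across chords, forming the temporally coherent 'figure'.
    All parameter values here are illustrative, not those of the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    if freq_pool is None:
        freq_pool = np.logspace(np.log10(180), np.log10(7200), 100)

    n_samp = int(round(chord_dur * fs))
    t = np.arange(n_samp) / fs

    # 5-ms raised-cosine on/off ramps applied to every chord.
    ramp = np.hanning(2 * int(0.005 * fs))
    env = np.ones(n_samp)
    env[:len(ramp) // 2] = ramp[:len(ramp) // 2]
    env[-(len(ramp) // 2):] = ramp[len(ramp) // 2:]

    # Figure: frequencies repeated in every chord (temporal coherence).
    figure_freqs = (rng.choice(freq_pool, size=coherence, replace=False)
                    if figure_present else np.array([]))

    chords = []
    for _ in range(n_chords):
        # Masker: frequencies redrawn independently for each chord.
        masker_freqs = rng.choice(freq_pool, size=n_masker_tones, replace=False)
        freqs = np.concatenate([figure_freqs, masker_freqs])
        chord = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                    for f in freqs)
        chords.append(env * chord / len(freqs))  # crude amplitude normalization

    return np.concatenate(chords)

# Example: a figure trial with coherence 12, and a catch trial with no figure.
figure_trial = make_sfg_stimulus(coherence=12, figure_present=True)
catch_trial = make_sfg_stimulus(figure_present=False)
```

The short ramps on each chord reduce spectral splatter at chord boundaries, and the random per-tone phases prevent the summed tones from producing a click at each chord onset; both are generic signal-generation choices rather than details taken from the paper.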