Stimuli
----------

The stimuli used in this project are from the [**ScenObjects database**][1]. The selection includes congruent and incongruent scene–object combinations. Object visibility is parametrically manipulated with a scrambling parameter (seven versions of each object).

Project
----------

Converging data from neurophysiological recordings in primates (Bullier, 2001; De Valois et al., 1982; Van Essen & Deyoe, 1995) and psychophysical studies in humans (Hughes et al., 1996; Parker, Lishman, & Hughes, 1996; Schyns & Oliva, 1994) indicate that the visual system extracts visual information through a set of channels/filters differentially tuned to specific orientations and spatial frequency bands of the visual input. Based on these data, current models of visual perception have emphasized the role of spatial frequency information (Bar, 2003; Hegdé, 2008; Kauffmann et al., 2014; Peyrin et al., 2010). According to these models, visual analysis begins with the parallel extraction of spatial frequency bands that provide different information about the visual scene. Low spatial frequencies (LSF) provide coarse information, such as the global shape and structure of a visual scene, and are predominantly conveyed through fast magnocellular channels. High spatial frequencies (HSF) provide finer information about the scene, such as edges or object details, and are conveyed more slowly through parvocellular channels. On the basis of the neurophysiological properties of the magno- and parvocellular pathways (Bullier, 2001; Maunsell et al., 1999) and the results of psychophysical studies in humans (Hughes et al., 1996; Kauffmann et al., 2015; Musel et al., 2012; Schyns & Oliva, 1994), it has been suggested that visual scene analysis follows a predominantly 'coarse-to-fine' processing sequence.
LSF information would be extracted first, allowing a coarse parsing of the visual input, prior to the analysis of the fine information contained in HSF. It has also been hypothesized that rapid LSF information may guide the subsequent processing of HSF (Bar, 2003; Kauffmann et al., 2014; Peyrin et al., 2010; Trapp & Bar, 2015). For example, the proactive model of visual recognition (Bar, 2003, 2007) postulates that LSF would be rapidly conveyed to frontal areas (in particular the orbitofrontal cortex), in which predictions about the nature of the context and objects in the visual scene would be generated. The results of this low-pass primary analysis would then be back-projected through rapid feedback to visual areas of the occipital and inferotemporal cortices in order to guide the processing of feedforward HSF information in the ventral visual stream.

![Coarse-to-fine model of visual recognition][2]

Importantly, the distribution of retinal photoreceptors and retinal ganglion cells is not homogeneous across the retina (Curcio and Allen, 1990; Curcio et al., 1990). The density of cones and of the midget ganglion cells from which the parvocellular pathway originates, and which process HSF information, is greatest in the fovea, while the density of rods and of the parasol ganglion cells from which the magnocellular pathway originates, and which process LSF information, increases with foveal eccentricity. Therefore, the spatial resolution of visual information across the visual field is not uniform but constrained by the anatomical and functional properties of the retina. This anatomical and functional characteristic of the visual system should have strong implications for models of visual recognition, which are mainly based on the processing of spatial frequencies. Yet these models rest on results from behavioral or neuroimaging studies that have mainly used small stimuli presented in the central visual field (the central 5–10 degrees of visual angle).
**With this project, we propose to develop models of visual recognition based on spatial frequency processing by considering the differences in spatial frequency processing between central and peripheral vision. We hypothesize that the fast processing of LSF, extracted mainly in peripheral vision, would allow predictions about the visual stimulus, which would then be used to guide the subsequent processing of HSF, extracted in central vision.**

[1]: https://osf.io/ur9y5/ "ScenObjects database"
[2]: http://www.frontiersin.org/files/Articles/87047/fnint-08-00037-HTML/image_m/fnint-08-00037-g001.jpg
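Both manipulations described above, splitting an image into LSF and HSF bands and parametrically degrading object visibility, can be sketched as Fourier-domain operations. The following is a minimal illustration, not the project's actual pipeline: the Gaussian cutoff value is arbitrary, and the phase-scrambling approach with seven coherence levels is an assumption about how the database's scrambling parameter might work.

```python
import numpy as np

def lsf_hsf_split(img, cutoff=8.0):
    """Split an image into low- and high-spatial-frequency components
    using a Gaussian filter in the Fourier domain. `cutoff` is the
    filter's standard deviation in cycles per image (illustrative
    value, not one used in the project)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h                # frequencies in cycles/image
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    lowpass = np.exp(-(radius ** 2) / (2 * cutoff ** 2))
    spectrum = np.fft.fft2(img)
    lsf = np.real(np.fft.ifft2(spectrum * lowpass))
    hsf = np.real(np.fft.ifft2(spectrum * (1 - lowpass)))
    return lsf, hsf                           # lsf + hsf reconstructs img

def phase_scramble(img, coherence, rng):
    """Blend the image's phase spectrum with random phase while keeping
    its amplitude spectrum. coherence=1 returns the intact image,
    coherence=0 a fully scrambled one (hypothetical stand-in for the
    database's visibility manipulation)."""
    spectrum = np.fft.fft2(img)
    noise = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    phase = coherence * np.angle(spectrum) + (1 - coherence) * noise
    return np.real(np.fft.ifft2(np.abs(spectrum) * np.exp(1j * phase)))

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))           # stand-in for a scene image
lsf, hsf = lsf_hsf_split(img)
# Seven visibility levels, from fully scrambled to intact:
levels = [phase_scramble(img, c, rng) for c in np.linspace(0.0, 1.0, 7)]
```

Because the low-pass and high-pass filters sum to one at every frequency, the two bands tile the spectrum and `lsf + hsf` recovers the original image; the linear blending of phase angles is a simplification that ignores angle wraparound but suffices for illustration.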