## Data, Maps, and Scene Images for *[When scenes speak louder than words: Verbal encoding does not mediate the relationship between scene meaning and visual attention][1]*

**Citation:** Rehrig, G., Hayes, T. R., Henderson, J. M., & Ferreira, F. (2020). When scenes speak louder than words: Verbal encoding does not mediate the relationship between scene meaning and visual attention. *Memory & Cognition, 48*, 1181–1195. https://doi.org/10.3758/s13421-020-01050-4

You can listen to a recording of the manuscript read aloud by the first author on the podcast *[Manuscripted][2]*.

# Contents

## Experiment 1

### Data

- Experiment1_Scene-Level_Analysis.txt

  Data corresponding to the Experiment 1 scene-level analyses in the manuscript. Each row in the tab-delimited file corresponds to one scene in one condition (*articulatory suppression* or *control*); a short loading sketch follows this Data section. Variables are defined as follows:

  - **scene:** The name of the scene image (file extension omitted).
  - **CCR2_Salience:** Correlation coefficient (*R*^2) for the correlation between the saliency map and attention map for the current scene.
  - **SPCCR2_Salience:** Semipartial correlation coefficient (*R*^2) for the partial correlation between the saliency map and attention map for the current scene, controlling for the shared variance explained by the meaning map.
  - **CCR2_Meaning:** Correlation coefficient (*R*^2) for the correlation between the meaning map and attention map for the current scene.
  - **SPCCR2_Meaning:** Semipartial correlation coefficient (*R*^2) for the partial correlation between the meaning map and attention map for the current scene, controlling for the shared variance explained by the saliency map.
  - **SalienceMeaning_CCR2:** Correlation coefficient (*R*^2) for the correlation between the meaning map and saliency map for the current scene.
  - **SalienceMeaning_CCR2_NP:** Correlation coefficient (*R*^2) for the correlation between the meaning map and saliency map for the current scene, excluding the scene's periphery (which is identical across maps due to the peripheral downweighting present in both maps).
  - **Condition:** Indicates whether the attention map was derived from fixations recorded in the Suppression or Control condition.

- Experiment1_Fixation_Analysis.txt

  Data corresponding to the Experiment 1 fixation analysis in the manuscript. Each row in the tab-delimited file corresponds to one fixation in one scene in one condition (*articulatory suppression* or *control*). Variables are the same as in the scene-level analysis, except that the correlations between meaning and saliency maps are omitted, and an additional variable, **Fixation**, indicates the current fixation step.

- Experiment1_Recognition_Test.txt

  Data corresponding to the recognition memory test analysis in the manuscript. Each row corresponds to one memory test trial for one subject (60 trials/subject). Variables are defined as follows:

  - **Subject:** Subject number.
  - **Condition:** Indicates whether the recognition memory test followed the Suppression or Control condition.
  - **Scene:** Image displayed on the recognition test trial. Images with 'memtest' in the file name are memory foil trials.
  - **ResponseTime:** Time (in milliseconds) between image onset and the subject's response.
  - **Accuracy:** Accuracy on the current trial (1 = correct, 0 = incorrect).
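As a quick orientation to the tab-delimited files above, here is a minimal, illustrative Python sketch (not part of the released materials) that reads the Experiment 1 data with pandas. The file and column names come from the descriptions above; everything else, including the use of pandas itself, is an assumption about how you might choose to work with the data.

```python
# Minimal sketch: load the Experiment 1 tab-delimited data files and
# summarize them. Assumes the files sit in the working directory and
# that pandas is installed; neither is part of the released materials.
import pandas as pd

scene = pd.read_csv("Experiment1_Scene-Level_Analysis.txt", sep="\t")

# One row per scene per condition (Suppression vs. Control).
print(scene["Condition"].value_counts())

# Mean squared (semi)partial correlations by condition: how much unique
# variance in attention each map explains once the other is controlled for.
summary = scene.groupby("Condition")[
    ["CCR2_Meaning", "SPCCR2_Meaning", "CCR2_Salience", "SPCCR2_Salience"]
].mean()
print(summary)

# Recognition memory accuracy per condition (1 = correct, 0 = incorrect).
mem = pd.read_csv("Experiment1_Recognition_Test.txt", sep="\t")
print(mem.groupby("Condition")["Accuracy"].mean())
```

The Experiment 2 files described below follow the same layout, so the same sketch applies after swapping in the Experiment 2 file names.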
### Maps

- **Attention:** Attention maps (.mat files) derived from viewer fixations (a sketch for loading these .mat files appears at the end of this page).
  - Scene-level analysis: Attention maps derived from fixations made within the entire viewing period (12 s).
  - Fixation analysis: Attention maps derived from fixations at each time step in the fixation analysis (3 analyzed, 40 shown in the figure). In the interest of space, only attention maps for the first three fixations in each scene are included.
- **Meaning:** Meaning maps (.mat files) for each scene.
- **Saliency:** Saliency maps (.mat files) for each scene, generated using [GBVS][3].
- **Visuals:** Images depicting each scene alongside visualizations of the saliency map, meaning map, and attention maps for that scene.

### Scenes

- **Targets:** The 30 scene images presented in the main experiment.
- **Foils:** The 30 foil images presented only during the recognition memory test.

## Experiment 2

### Data

- Experiment2_Scene-Level_Analysis.txt

  Data corresponding to the Experiment 2 scene-level analyses in the manuscript. Each row in the tab-delimited file corresponds to one scene in one condition (*articulatory suppression* or *control*). Variable definitions are identical to those for the Experiment 1 file.

- Experiment2_Fixation_Analysis.txt

  Data corresponding to the Experiment 2 fixation analysis in the manuscript. Each row in the tab-delimited file corresponds to one fixation in one scene in one condition (*articulatory suppression* or *control*). Variables are the same as in the scene-level analysis, except that the correlations between meaning and saliency maps are omitted, and an additional variable, **Fixation**, indicates the current fixation step.

- Experiment2_Recognition_Test.txt

  Data corresponding to the recognition memory test analysis in the manuscript. Each row corresponds to one memory test trial for one subject (120 trials/subject). Image names containing 'memtest' or 'distractor' are memory foils. Variables are the same as those in the Experiment 1 recognition memory test, except for the addition of a **Block** variable indicating which experimental block (1 or 2) the memory test followed.

### Maps

- **Attention:** Attention maps (.mat files) derived from viewer fixations.
  - Scene-level analysis: Attention maps derived from fixations made within the entire viewing period (12 s).
  - Fixation analysis: Attention maps derived from fixations at each time step in the fixation analysis (3 analyzed, 40 shown in the figure). In the interest of space, only attention maps for the first three fixations in each scene are included.
- **Meaning:** Meaning maps (.mat files) for each scene.
- **Saliency:** Saliency maps (.mat files) for each scene, generated using [GBVS][3].
- **Visuals:** Images depicting each scene alongside visualizations of the saliency map, meaning map, and attention maps for that scene.

### Scenes

- **Targets:** The 60 scene images presented in the main experiment.
- **Foils:** The 60 foil images presented only during the recognition memory test.

The MATLAB code for meaning map generation is available on the OSF: https://osf.io/654uh/

[1]: https://link.springer.com/article/10.3758/s13421-020-01050-4
[2]: https://podcasts.apple.com/us/podcast/when-scenes-speak-louder-than-words/id1558851257?i=1000513419894
[3]: http://www.vision.caltech.edu/harel/share/gbvs/readme.txt
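For readers working outside MATLAB, here is a minimal Python sketch for opening one of the .mat map files (meaning, saliency, or attention). The variable names stored inside the files are not documented above, so the sketch inspects the keys rather than assuming one; the file name used is hypothetical, and the hot-colormap visualization is only an assumption meant to mimic the Visuals images.

```python
# Minimal sketch for inspecting one of the .mat map files.
# The variable name stored inside each file isn't documented above,
# so we list the keys and take the first non-metadata entry.
from scipy.io import loadmat
import matplotlib.pyplot as plt

mat = loadmat("meaning_map_example.mat")  # hypothetical file name
data_keys = [k for k in mat if not k.startswith("__")]
print("Variables stored in the file:", data_keys)

map_2d = mat[data_keys[0]]  # assumes the map is stored as a 2-D array
print("Map shape:", map_2d.shape)

# Visualize the map (hotter = higher value), loosely mirroring the
# visualizations provided in the Visuals folders.
plt.imshow(map_2d, cmap="hot")
plt.colorbar(label="map value")
plt.title(data_keys[0])
plt.show()
```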