This project encompasses the *data*, *scripts*, and *links* to the publications that originated as part of my cumulative dissertation "A Quantification of Visual Salience". The goal of this dissertation is to provide a meaningful measure of visual salience. In this context, salience is understood as the influence of task-irrelevant physical contrasts on visual attention. This goal is pursued by a combined modeling and empirical approach involving Bundesen's theory of visual attention (TVA) and the experimental paradigm of temporal-order judgment (TOJ). A solution is presented that works with a wide range of different physical contrasts. The developed hierarchical graphical Bayesian model allows the strength of salience to be estimated quantitatively, for groups as well as for individuals.

The dissertation comprises four articles. Three of them are situated in cognitive experimental psychology with an additional modeling part, whereas one focuses more abstractly on how different types of data analyses contribute to psychological scientific explanations. The empirical and modeling work took place in the project "Modelling salience within the Theory of Visual Attention", funded by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft), Grant 1515/5-1 to Ingrid Scharlau.

**Parts of the cumulative dissertation**

Article 1: Krüger, A., Tünnermann, J., & Scharlau, I. (2016). Fast and conspicuous? Quantifying salience with the theory of visual attention. Advances in Cognitive Psychology, 12(1), 20–38. https://doi.org/10.5709/acp-0184-1

Article 2: Krüger, A., Tünnermann, J., & Scharlau, I. (2017). Measuring and modeling salience with the theory of visual attention. Attention, Perception, & Psychophysics, 1–22. https://doi.org/10.3758/s13414-017-1325-6

Article 3: Krüger, A., Tünnermann, J., Rohlfing, K., & Scharlau, I. (2018). Quantitative explanation as a tight coupling of data, model, and theory. Archives of Data Science, Series A (Online First), 5(1), A10, 27 S. online.
https://doi.org/10.5445/KSP/1000087327/10

Article 4: Krüger, A., & Scharlau, I. (in review). The time course of salience---not entirely caused by salience. Psychological Research.

**Script**

The script serves two purposes. First, it explains the model used, with interactive graphics and an introductory text. Second, it provides a full implementation of different models, including those used in the articles, so that the data can be re-analyzed. It can be downloaded as a .zip file containing the Jupyter notebook and the required files and folders. A Jupyter notebook is an interactive version of a Python script that combines code, text, images, and output. The required Python packages are listed in the file "requirements.txt" and can easily be installed with *pip*. If you are unfamiliar with setting up a Python environment, a distribution that bundles Python with many packages and tools, such as Anaconda (www.anaconda.com), may be helpful (especially if you are using Windows).

It is important to note that the published models are only *one* way of implementing them. For example, we earlier implemented our models in R using JAGS or Stan. For this and other technical reasons, results reported in the papers and results computed by the published script may differ. If you want to repeat the analyses reported in the published articles, please refer to them for the exact model specification and repeat the analysis accordingly. For Article 4, a further script has been added to make the reported model comparisons replicable. In addition, a .zip file contains all results, including the graphics computed by the script.

**Data**

The data format is tailored to the needs of documenting TOJs. In a TOJ, the participant judges the order of two events: they decide which of two things happened first (or second, depending on the design). To distinguish the two events, the respective stimuli are called probe (potentially salient) and reference (never salient).
Both events are separated by a stimulus onset asynchrony (SOA; strictly speaking, the design involves more than onsets, so the name is a bit of a legacy; see Article 1 for an empirical comparison of different events). The apparently odd choice to start the numbering of participants and conditions at 0 is for consistency with the accompanying script: most programming languages, unlike R, start numbering at 0, so two IDs would otherwise have to be used within the script. Such an external ID in the file, i, and the internal ID, i-1, are easily confused when estimated parameters are compared to raw data. We therefore decided to use 0 as the first ID, which can then be used consistently for the same person or condition throughout the whole script file.

In the data format, a trial corresponds to a row. The columns are the following:

*Participant*: Participant ID (starting from 0 so that it can be used as an index in Python)

*Condition*: Condition ID (starting from 0 so that it can be used as an index in Python)

*Condition_name*: Name or explanation of the condition so that its number can be associated with one of the conditions reported in the paper

*SOA*: Stimulus onset asynchrony, technically the asynchrony between the two flicker events (see Article 1 for the reason not to use onsets)

*Repetitions*: Number of repetitions

*Count*: Number of probe-first judgments across all repetitions. Participants were asked to report which of two stimuli, probe or reference, flickered first. The probe is potentially salient, whereas the reference is identical to multiple background elements. A probe-first judgment means that the participant indicated that something (a flicker) happened to the stimulus called probe before it happened to the stimulus called reference.

*Relative*: Simply *Count* divided by *Repetitions*
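As a minimal sketch of working with this format, the following Python snippet parses a two-row excerpt (the numeric values are illustrative, not taken from the published data) and shows the two conventions described above: *Relative* equals *Count* divided by *Repetitions*, and the 0-based *Participant* ID can be used directly as a Python index. The in-memory string stands in for one of the downloadable data files.

```python
import csv
import io

# Illustrative two-trial excerpt in the documented column format
# (values are made up for demonstration, not from the published data).
raw = """Participant,Condition,Condition_name,SOA,Repetitions,Count,Relative
0,0,salient_probe,-50,40,6,0.15
0,0,salient_probe,50,40,34,0.85
"""

rows = list(csv.DictReader(io.StringIO(raw)))

for row in rows:
    count = int(row["Count"])
    reps = int(row["Repetitions"])
    # Relative is simply Count / Repetitions: the observed
    # proportion of probe-first judgments at this SOA.
    assert abs(float(row["Relative"]) - count / reps) < 1e-9

# Because Participant IDs start at 0, they can index Python
# containers directly, e.g. a (hypothetical) list of per-participant
# parameter estimates, with no i vs. i-1 bookkeeping.
estimates = [0.42]  # hypothetical estimate for participant 0
participant = int(rows[0]["Participant"])
print(estimates[participant])  # → 0.42
```

Reading a real data file works the same way with `csv.DictReader(open(path))`; only the source of the rows changes.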