<h1>An evidence accumulation model of perceptual discrimination with naturalistic stimuli</h1> <h2>Background</h2> <p>Evidence accumulation models have been used to describe the cognitive processes underlying performance across a number of domains. Previous applications of these models have typically involved decisions about basic stimuli and tasks (e.g., lexical decision). Applied perceptual domains, such as fingerprint discrimination, face recognition, or medical image interpretation, however, require the processing of complex visual information and therefore produce extended response times. Typical decision-making models applied in these domains, such as signal detection theory, do not account for this temporal information, even though it is closely related to accuracy. We apply a dynamic decision-making model, the linear ballistic accumulator (LBA), to fingerprint discrimination decisions in order to gain insight into the cognitive processes underlying these decisions.</p> <h2>Study 1</h2> <p>Study 1 provides, to our knowledge, the first demonstration of the LBA as a model of a fingerprint discrimination task. With this experiment, we take the first step in examining whether the LBA can be used to understand fingerprint discrimination decisions. Specifically, we test whether the model can account for the empirical patterns of choices and response times when individuals are asked to discriminate pairs of fingerprints left by the same person versus different people.</p> <p>We found that the model provided a good fit to the observed accuracy and to the full distributions of correct and incorrect RTs. The LBA also revealed aspects of the decision-making process that were not observable from the performance data alone.
</p> <p>Participants required more evidence before deciding that prints were left by the same finger than before deciding they were left by different fingers, indicating that participants were biased towards responding “different” with the materials used in this experiment. Participants also had greater discriminability for different-finger pairs than for same-finger pairs, which explains the observed differences in accuracy. The model further revealed that the rate of evidence accumulation was more variable for same-finger pairs than for different-finger pairs, suggesting that the processing of same-finger pairs was more heterogeneous.</p> <p>The differences in rates of evidence accumulation may reflect an inherent property of the contextual features in our set of fingerprint pairs. Specifically, different-finger pairs can be discriminated quickly and with little variability because the individual can rapidly identify contextual features that immediately preclude a “same” decision. In contrast, discriminating same-finger pairs may require additional and more variable processing because the person needs to conduct an exhaustive evaluation of the contextual features of the stimulus.</p> <h2>Study 2</h2> <p>Study 1 provided initial evidence that the LBA can account for performance in a simple fingerprint discrimination task. However, adequate model fit to observed data is insufficient to establish model validity. To build stronger evidence that the LBA can be used to understand fingerprint discrimination, we examine whether the model parameters provide an accurate description of, and change in predictable ways in response to, experimental manipulations (Donkin & Brown, 2018). In Study 2, we examine the interpretability of the model parameters by manipulating emphasis type (speed vs. accuracy) and noise (no noise vs. noise).
In line with the assumption of selective influence, one may hypothesize that emphasis will affect parameters associated with the termination of the decision process (i.e., the response threshold), whereas noise will affect parameters associated with the inputs to the decision process (i.e., the rate of evidence accumulation; Ratcliff & Rouder, 1998). However, in line with recent findings from studies using basic tasks, we also test for failures of selective influence.</p> <p>We found that noise selectively influenced discriminability. Noise decreased the discriminability of same-finger pairs, which accounts for the observed decline in accuracy relative to same-finger pairs presented without noise. In contrast, noise did not affect the discriminability of different-finger pairs, which explains why no change in accuracy was observed between conditions. We found patterns of results similar to Study 1 for the effects of the stimulus-type and match factors on the variability in the rate of evidence accumulation. In line with the hypothesis from Study 1, we suspect that the rapid processing of contextual features inherent in the different-finger pairs used in this experiment explains why we did not observe an effect of noise on different-finger discriminability. Specifically, the noise added to these prints did not obscure the contextual features that allowed for higher discriminability.</p> <h2>Study 3</h2> <p>In Studies 1 and 2, we showed that the LBA can accurately capture fingerprint discrimination performance, and that the model parameters can be mapped meaningfully onto the underlying cognitive processes they are thought to reflect. We consistently found that participants had a “different” response bias when prints were presented without noise or time pressure, and that discriminability was considerably greater for different-finger pairs than for same-finger pairs with our particular set of materials.
In Study 3, we examine how novices’ decision-making processes evolve over time by using the LBA to quantify any change in decision processes following a feedback training intervention. There are a number of ways that training could affect decision processes. First, feedback may alter response biases as participants learn to adjust their prior expectations about the stimuli. Second, feedback may increase response caution in order to improve accuracy. Finally, feedback may influence the quality of the evidence that is accumulated as participants learn to attend to more diagnostic print features, an effect that would be reflected in the rate parameters (i.e., the mean or the variability of the rate of evidence accumulation). In this study, we test for these possible effects by allowing the model parameters to vary with group (training vs. no training) and block (pre- vs. post-training).</p> <p>Training appeared to have a complex effect on evidence accumulation, affecting both discriminability and rate variability. For the no-feedback group, the decline in discriminability for different-finger pairs at post-test may have occurred as participants became less sensitive to the contextual cues that allow a same-finger match to be quickly ruled out. This lowered sensitivity to contextual cues may also explain why the variability in the rate of evidence accumulation increased at post-test for the different-finger match accumulator. For the feedback group, training improved participants’ same-finger discriminability, though different-finger discriminability worsened. The rate variability for the different-finger match accumulator also increased after training, such that it was greater than the rate variability for the same-finger match accumulator.
This pattern of results suggests that training may have led participants to attend to subtle information that was more diagnostic for same-finger print pairs and to pay less attention to the contextual cues that benefit the effective discrimination of different-finger print pairs.</p> <h2>Discussion</h2> <p>Across three experiments, we show that the LBA provides an accurate account of fingerprint discrimination performance. The model accurately described the decision processes under manipulations of visual noise, speed-accuracy emphasis, and training. Our results demonstrate that the LBA is a promising model for further understanding complex perceptual discrimination decisions.</p>
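<p>To make the model's machinery concrete, the following is a minimal simulation sketch of how an LBA generates choices and response times. The parameter values are illustrative assumptions, not the estimates from these studies: each accumulator races linearly toward its threshold, with its start point drawn uniformly and its accumulation rate drawn from a normal distribution across trials.</p>

```python
import numpy as np

def simulate_lba(n_trials, v, s, b, A, t0, rng=None):
    """Simulate choices and response times from a linear ballistic
    accumulator with one accumulator per response option.

    v  : mean rate of evidence accumulation for each accumulator
    s  : between-trial standard deviation of the rate
    b  : response threshold for each accumulator (b >= A)
    A  : upper bound of the uniform start-point distribution
    t0 : non-decision time (encoding and motor processes), in seconds
    """
    rng = np.random.default_rng(rng)
    v, s, b = (np.asarray(x, dtype=float) for x in (v, s, b))
    n_acc = v.size
    # Trial-to-trial rate variability: rates sampled from a normal distribution.
    drifts = rng.normal(v, s, size=(n_trials, n_acc))
    # Uniform start points in [0, A].
    starts = rng.uniform(0.0, A, size=(n_trials, n_acc))
    # Deterministic linear race: time for each accumulator to reach threshold.
    with np.errstate(divide="ignore"):
        times = (b - starts) / drifts
    times[drifts <= 0] = np.inf          # non-positive rates never finish
    choices = np.argmin(times, axis=1)   # first accumulator to threshold wins
    rts = times[np.arange(n_trials), choices] + t0
    return choices, rts

# Hypothetical parameters for a two-choice discrimination task, chosen only
# for illustration (accumulator 0 = the objectively correct response).
choices, rts = simulate_lba(
    n_trials=1000,
    v=[1.2, 0.8],   # higher mean rate for the correct accumulator
    s=[0.4, 0.4],   # rate variability
    b=[1.5, 1.5],   # response thresholds
    A=1.0, t0=0.25, rng=1,
)
```

<p>In this framing, the “different” response bias reported above corresponds to a lower threshold for the “different” accumulator than for the “same” accumulator, and the more heterogeneous processing of same-finger pairs corresponds to a larger rate-variability parameter on same-finger trials.</p>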