# Superior working memory for fingerprints: An investigation of fingerprint expertise

*The contributors to this project are listed alphabetically until the resulting manuscript has been submitted for publication.*

## Rationale

Superior working memory capacity is one of the most generalisable mechanisms that mediate expert performance, and it has frequently been used as a discriminating measure of expertise across multiple domains (Ericsson, 2018). For instance, in a classic demonstration of chess expertise, Chase and Simon (1973) found that chess masters could reconstruct briefly presented chessboard positions from memory with near perfect accuracy. However, this superior memory feat was only evident when the chess positions were taken mid-game (i.e., the positions of the chess pieces were derived from legal chess moves). Placing pieces on the board at random resulted in no difference between chess masters and novices in their memory ability. Chase and Simon (1973) explained that the superior memory for briefly presented chess positions was not due to any general memory ability, such as photographic memory, but depends critically on the individual's ability to perceive meaningful patterns and relations between chess pieces. In other words, chess masters are better able to integrate information about the chess pieces and their positions, thereby improving their memory for the chessboard as a whole. This finding of superior expert memory for representative situations has been replicated in a variety of areas including medicine (Eva et al., 2002), computer programming (McKeithen, Reitman, Rueter, & Hirtle, 1981), dance (Starkes, Deakin, Lindley, & Crisp, 1987), and bridge (Charness, 1979).

## The Current Project

In the current project, we aim to investigate whether this expert working memory advantage for domain-relevant stimuli is evident in expert forensic fingerprint examiners.
Prior work has provided preliminary evidence of a superior short-term memory capacity in fingerprint examiners (Thompson & Tangen, 2014). In our previous work on memory for fingerprints, Thompson and Tangen (2014) tested expert and novice short-term memory for prints by presenting a latent ("crime-scene") print on the screen for 5 seconds, asking participants to count to five out loud to prevent them from verbally encoding the features in the first image, and then presenting a second, fully rolled tenprint on the screen, taken either from the same finger (target) or from a different finger (distractor). We found that experts matched prints more accurately than novices, especially when the prints were similar but nonmatching. Experts were conservative, but still able to discriminate pairs of matching and similar nonmatching prints that were separated by five seconds.

In another experiment, Thompson and Tangen (2014) tested expert and novice "long"-term memory for prints (relatively speaking). In the learning phase of this experiment, 50 fingerprint images were displayed on screen, one by one, for 5 seconds each. Participants were asked to learn the images as best they could, and were told that they would be tested on their ability to recognize the images later. Following the learning phase, participants completed a word-scramble filler task for five minutes. In the test phase, we presented 100 fingerprint images one by one, along with the question, "Have you seen this print before?", and "Yes" and "No" response buttons. Fifty of the fingerprint images were old (i.e., they had been presented in the learning phase) and 50 were new (i.e., they had not been presented in the learning phase). The old images in the test phase were not simply the same picture displayed again but, rather, a novel instance of an image of the same finger from the same person (i.e., two "matching" prints are two impressions from the same finger taken at different times).
Overall, long-term recognition memory for experts and novices was the same. That is, both experts and novices performed around the level of chance. See the results from these two experiments in the figure below.

![Thompson and Tangen (2014) memory experiment results][1]

After a meeting with one of our fingerprint expert partners, we suspected that examiners' working memory capacity may be related to the way in which they approach the task. They start by studying the latent print, mapping out its distinguishing features. They then compare their mapped features of the latent print with those of the fully rolled tenprint. It appeared to us that they were relying on working memory to accurately reflect the features of the latent onto the rolled print for comparison, but this process was largely analytic. That is, examiners took some time to process the particular ridge pattern configurations and their absolute and relative positions, and often used verbal labels to do so. Following this meeting, we hypothesised that superior working memory for fingerprints might be one of the mechanisms that account for how examiners generate their superior performance in representative situations, and we thought back to our previous tests of expert memory, which used only a 5-second presentation of the prints and a simultaneous counting task to prevent verbal encoding. We designed the current project to extend these preliminary investigations of recognition memory performance in experts and non-experts (see the participant instructions video below for an overview of the task and some example trials).

@[youtube](https://youtu.be/1AQBdbytxro)

## Participants

### Experts

Data will be collected from as many fingerprint examiners as possible, subject to their availability. Our goal is to reach 30 expert participants. We define an 'expert' as one who is appropriately qualified in their jurisdiction, and is court-practicing.
We will also collect data from professional fingerprint examiners who are not yet fully qualified. These examiners are often referred to as 'trainees'. These data will not be analysed in this experiment, but may be used in a future project.

### Novices

Data will be collected from an equal number of novices as experts. So, for example, if we collect data from 34 experts, then we will collect data from 34 novices. Novice participants who have no formal experience in fingerprint examination will be recruited from The University of Adelaide, The University of Queensland, and/or Murdoch University communities, and the general Australian public. Novice participants may be offered a cash or gift card incentive for their participation, or participate for course credit, with the chance to win an additional amount on reaching a threshold level of performance and on completing a larger series of fingerprint tasks, including this one. Our aim is to recruit novices who are motivated to perform well.

### Sensitivity Analysis

Based on previous expert-novice studies in this domain, we anticipate a large difference in performance between professional fingerprint examiners and novices (*d* > .80). Twenty-four observations from 30 participants per group provide sufficient sensitivity (power = .814) to detect an effect size of *d* = 0.45.

### Exclusion Criteria

Participants who respond in less than 500 ms on more than 20 percent of trials will be excluded. If a participant fails to complete the experiment because of fatigue, illness, or excessive response delays (i.e., longer than the session allows for), then their responses will not be recorded. Data from participants who provide the same response (e.g., the first print in the array) consecutively on more than 20 percent of trials will also be excluded. If a participant is excluded on the basis of any of these criteria, then another participant will be recruited to take their place.
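As a concrete illustration, the two trial-level exclusion rules can be sketched as a screening function. This is a minimal Python sketch, not the project's actual LiveCode/R tooling, and the function and argument names are hypothetical:

```python
# Minimal sketch of the pre-registered exclusion rules (hypothetical names).
# A participant is flagged if more than 20% of responses are faster than
# 500 ms, or if more than 20% of trials simply repeat the immediately
# preceding response position.

def should_exclude(rts_ms, choices, fast_cutoff_ms=500, max_flagged=0.20):
    """rts_ms: per-trial response times in ms; choices: selected print positions."""
    n = len(rts_ms)
    too_fast = sum(1 for rt in rts_ms if rt < fast_cutoff_ms)
    repeats = sum(1 for prev, cur in zip(choices, choices[1:]) if prev == cur)
    return (too_fast / n > max_flagged) or (repeats / n > max_flagged)
```

In practice, a check along these lines would be run over each participant's 24-trial log before analysis, and flagged participants replaced as described above.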
Participants are also required to bring and use any necessary aids, including glasses, contact lenses, and hearing aids.

### Data Collection

No human data have been collected for this experiment yet. We are visiting various police departments around Australia in February 2019, and plan to have all expert and novice data collected by the end of the year.

## Design

This experiment employs a one-way (Expertise: expert, novice; between subjects) design 'yoked' to expertise. Sixty-four unique participant sequences have been pre-generated, to allow for as many expert participants as possible. Each sequence consists of 24 trials. On each trial, a latent crime scene print is presented on the screen for 30 seconds. After the 30 seconds has elapsed, 10 rolled prints are presented sequentially. One of these 10 prints is the target that belongs to the same finger as the crime scene print. The other 9 rolled prints are distractors sampled from different fingers of the *same* person (presented in random order). The 64 pre-generated participant sequences can be downloaded in the [Sequences][2] component of this project.

## Materials

The latent ("crime scene") prints were collected from undergraduate students and lifted from various surfaces (e.g., wood, plastic, and glass) to create realistic crime-scene fingerprints. The target and distractor prints were created by collecting fully rolled impressions from each of the fingers of the same participants. For this experiment, we randomly selected one latent print from 72 individuals, as well as their corresponding fully rolled tenprints. In a pilot experiment on 5 members of the lab, we presented a single latent print followed by 5, 7, or 9 fully rolled distractor prints along with a single target rolled print, and we manipulated the similarity of these distractors by presenting rolled prints from the same person who left the latent/target, or prints selected at random.
We found 9 similar distractor prints and 1 target to be the optimal level of difficulty to ensure that experts won't perform at ceiling. From the small sample we collected, novices in this condition were performing at 30% correct (chance being 10%), so we expect this task to be difficult, but certainly not impossible, for experts.

## Procedure

Participants first read an information sheet, and watch an instructional video about the task with examples (see above). This experiment includes 24 trials. Within each trial, participants are presented with a latent ("crime scene") print that remains on the screen for 30 seconds. Participants are instructed to study the print while a visible timer counts down from 30. After this 30-second study time elapses, the latent print disappears and 10 fully rolled prints appear sequentially on the screen. One of the 10 prints is the target, an impression of the same finger as the latent print. The other 9 prints are distractors that were sampled from different fingers of the *same* individual. The order of the target and distractors is shuffled randomly on every trial. Participants are instructed to "use your memory and click on the matching print". The prints remain on the screen until the participant makes a selection. After clicking one of the 10 rolled prints, the participant is asked if they are sure of their selection, providing them with one opportunity to change their response. Once they have confirmed a selection, corrective feedback appears on the screen for 500 milliseconds before the next trial begins. A text prompt appears during the inter-trial interval if participants take longer than 20 seconds to respond, with the message: "try to decide in less than 20 seconds."

## Software

The video instructions and print recognition task are presented to participants on a 13-inch MacBook Pro or MacBook Air laptop screen, with over-ear headphones.
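The per-trial line-up construction described in the Procedure (one target shuffled among nine same-person distractors, over 24 trials) can be sketched as follows. This is an illustrative Python reconstruction, not the LiveCode routine used to pre-generate the 64 sequences, and all identifiers are hypothetical:

```python
import random

def make_trial(target_id, distractor_ids, rng=random):
    """Build one trial line-up: the target print plus 9 distractors from the
    same person's other fingers, shuffled into a random order."""
    assert len(distractor_ids) == 9
    lineup = [target_id] + list(distractor_ids)
    rng.shuffle(lineup)
    return {"lineup": lineup, "target_position": lineup.index(target_id) + 1}

def make_sequence(people, rng=random):
    """Build one 24-trial participant sequence; each trial draws on one
    person's latent/target finger plus their nine other fingers."""
    assert len(people) == 24
    return [make_trial(p["target"], p["distractors"], rng) for p in people]
```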
The software used to generate the trial sequences, present stimuli to participants, and record their responses was developed in LiveCode (version 9.0.2; the open source 'community edition'). The LiveCode source files, experiment code, and all the materials needed to run the experiment locally can be downloaded in the [Software][3] component of this project. The data analytic scripts and plots for this project are produced in RStudio with R Markdown, and the packages needed to reproduce our plots and analyses are listed in the data visualisation and analysis html file in the [Analyses][4] component of this project.

## Hypotheses and Predictions

In this current examination of recognition memory in fingerprint examiners, participants are presented with each latent print for 30 seconds, which is plenty of time for uninterrupted analysis and verbal encoding, followed by ten fully rolled prints for them to choose from at their own pace (within a 20-second window). Unlike the previous investigation of novice vs expert short- and long-term memory (results illustrated above), we expect that by providing people with 30 seconds (rather than 5) to examine the latent print, and eliminating the simultaneous counting task, experienced examiners are likely to spend this time encoding the particular ridge characteristics in each latent print, as they've learned to do throughout their training. So their performance on this task will likely reflect their relative experience with analysing fingerprints. As such, we've made the following predictions:

1. In recognising the studied prints, we expect novice proportion correct scores to be better than chance, resulting in a medium effect size (*d* = .2 to .5), and expert proportion correct scores to be better than chance, resulting in a large effect size (*d* > .5).
2. When comparing novices to experts, we expect experts to outperform novices, resulting in a large effect size (*d* > .5) in the difference between their proportion correct scores.
## Planned Analyses

### Comparison to Chance

To test experts' and novices' print recognition, we will compute their mean proportion correct over the 24 trials. One-sample *t*-tests (or the nonparametric equivalent if distributional assumptions are not met) will be used to determine whether each group performs significantly above chance (10%).

### Expert-Novice Comparison

A between-groups *t*-test (or the nonparametric equivalent if distributional assumptions are not met) on mean proportion correct will be used to test the extent to which expert fingerprint examiners outperform novices at the print recognition task.

### Exploratory Analyses

Although we don't have any strong predictions about response time, we will compute participants' Rate Correct Score (RCS) as an integrated speed-accuracy measure (Woltz & Was, 2006) that expresses the number of correct responses produced per second. We plan to conduct exploratory analyses comparing experts and novices on this rate correct measure to see if any observed differences in accuracy remain when taking response time into account.

### Simulated Data and Analyses

To pilot our experiment and data analysis script, we ran 12 'sim' experts and novices through the experiment. These simulated participants are programmed to respond pseudorandomly on each trial in the experiment. We used the .txt files produced by these simulations to generate and test a LiveCode data extraction tool, and an R Markdown analysis script for plotting and analysing the data in this project. They provide a useful model of the null hypothesis that people cannot reliably identify a matching print from memory above chance. If their performance doesn't match what is expected by chance (i.e., .10 in this case), there is probably a bug in our code. The simulated data, LiveCode data extraction tool, R Markdown source files, resulting plots, and analysis script can be downloaded in the [Analyses][4] component of this project.
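The chance-level sanity check on these pseudorandom responders can be sketched as follows. This is an illustrative Python analogue of the R/LiveCode pipeline (the trial counts follow the design, but the function names and response-time distribution are assumptions):

```python
import random

def simulate_participant(n_trials=24, n_options=10, rng=random):
    """A pseudorandom responder: picks one of the 10 prints at random,
    with a response time drawn uniformly between 1 and 20 seconds."""
    trials = []
    for _ in range(n_trials):
        target = rng.randrange(n_options)
        choice = rng.randrange(n_options)
        trials.append({"correct": choice == target, "rt_s": rng.uniform(1, 20)})
    return trials

def proportion_correct(trials):
    return sum(t["correct"] for t in trials) / len(trials)

def rate_correct_score(trials):
    """RCS (Woltz & Was, 2006): correct responses per second of total response time."""
    return sum(t["correct"] for t in trials) / sum(t["rt_s"] for t in trials)
```

Averaged over many simulated participants, proportion correct should converge on .10; a reliable departure would point to a bug in the sequence generation, extraction, or scoring code.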
A few plots from these simulated data in the current project are presented below, reflecting the appropriate level of chance:

![Simulated data][5]

### Ethics

We have ethics clearance from human research ethics committees at The University of Queensland for the project titled "Identifying perceptual experts in fingerprint identification" (Approval Number: 2018001369), The University of Adelaide (33115), and Murdoch University (2018/149).

### Updated Analysis

We have refined our analysis approach based on further review and expert feedback. Initially, we proposed using *t*-tests for analysing proportion data. However, considering the binary nature of our data (correct/incorrect responses), we have updated our plan to employ z-tests for proportions. This change ensures a more appropriate statistical treatment of our data, aligning with its distribution and enhancing the precision of our findings. This update reflects our commitment to methodological rigour and transparency in our research process. *See the analysis component for updated analysis scripts.*

## References

Charness, N. (1979). Components of skill in bridge. Canadian Journal of Psychology, 33, 1–16.

Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55–81.

Ericsson, K. (2018). Superior working memory in experts. In K. Ericsson, R. Hoffman, A. Kozbelt, & A. Williams (Eds.), The Cambridge Handbook of Expertise and Expert Performance (Cambridge Handbooks in Psychology, pp. 696–713). Cambridge: Cambridge University Press. doi:10.1017/9781316480748.036

Eva, K. W., Norman, G. R., Neville, A. J., Wood, T. J., & Brooks, L. R. (2002). Expert-novice differences in memory: A reformulation. Teaching and Learning in Medicine, 14(4), 257–263.

McKeithen, K. B., Reitman, J. S., Rueter, H. H., & Hirtle, S. C. (1981). Knowledge organization and skill differences in computer programmers. Cognitive Psychology, 13, 307–325.

Starkes, J. L., Deakin, J. M., Lindley, S., & Crisp, F. (1987). Motor versus verbal recall of ballet sequences by young expert dancers. Journal of Sport Psychology, 9, 222–230.

Thompson, M. B., & Tangen, J. M. (2014). The nature of expertise in fingerprint matching: Experts can do a lot with a little. PLoS ONE, 9(12), e114759.

Woltz, D. J., & Was, C. A. (2006). Availability of related long-term memory during and after attention focus in working memory. Memory & Cognition, 34(3), 668–684.

[1]: https://files.osf.io/v1/resources/qy2su/providers/osfstorage/5c54b4d776653c001a21d96d?mode=render
[2]: https://osf.io/awcmd
[3]: https://osf.io/vruhs
[4]: https://osf.io/aeyqc
[5]: https://files.osf.io/v1/resources/qy2su/providers/osfstorage/5c54cd9c832ca60018a638b2?mode=render