# Expertise in fingerprint discrimination

*The contributors to this project are listed alphabetically until the resulting manuscript has been submitted for publication.*

## Rationale

In previous experiments, we have provided the first empirical tests of fingerprint identification, finding that examiners possess genuine expertise in distinguishing matching from non-matching fingerprints (Tangen, Thompson & McCarthy, 2011). We have since replicated these basic findings several times, including with genuine crime scene materials (Thompson, Tangen & McCarthy, 2013) and under time constraints with noisy materials (Thompson, Tangen & McCarthy, 2014). Expert fingerprint examiners reliably outperform novices on these basic match/no-match tasks, particularly for highly similar non-matching pairs of prints. In these previous experiments, expert performance was extremely high (92% hits, <1% false alarms; Tangen, Thompson & McCarthy, 2011), and each participant was presented with an identical set of matching and non-matching fingerprint pairs, even when genuine crime scene prints were used (Thompson, Tangen & McCarthy, 2013).

The broad aim of our current research program is to turn fingerprint novices into experts more quickly by developing more rigorous training practices, but we first need to identify the leading practitioners in the field so we have a clear picture of what makes them exceptional (see “Who are the elite practitioners of fingerprint examination?” preregistration). We therefore need a fingerprint matching task that is sufficiently difficult to help separate examiners with different (extremely high) levels of competency. For this project, we have created 48 pairs of fingerprints (24 pairs from the same print and 24 from different prints), drawn from actual case files and selected to be extremely difficult to distinguish. These prints were developed in collaboration with our partners at the Queensland Police Service and have been hand-picked from actual case files over several months to challenge even the most experienced examiners.

## Participants

### Experts

Data will be collected from as many fingerprint examiners as possible, subject to their availability. Our goal is to reach 30 expert participants. We define an ‘expert’ as one who is appropriately qualified in their jurisdiction and is court-practicing. We will also collect data from professional fingerprint examiners who are not yet fully qualified; these examiners are often referred to as ‘trainees’. These data will not be analysed in this experiment, but may be used in a future project.

### Novices

Data will be collected from an equal number of novices as experts. So, for example, if we collect data from 34 experts, then we will collect data from 34 novices. Novice participants with no formal experience in fingerprint examination will be recruited from The University of Adelaide, The University of Queensland, and/or Murdoch University communities, and the general Australian public. Novice participants may be offered a cash or gift card incentive for their participation, or participate for course credit, with the chance to win an additional amount on reaching a threshold level of performance and on completing a larger series of fingerprint tasks, including this one. Our aim is to recruit novices who are motivated to perform well.

### Sensitivity Analysis

Based on previous expert-novice studies in this domain, we anticipate a large difference in performance between professional fingerprint examiners and novices (*d* > .80). Forty-eight observations from 30 participants per group provide sufficient sensitivity (power = .829) to detect an effect size of *d* = 0.45.

### Exclusion Criteria

Participants who respond in less than 500 ms on more than 20 percent of trials will be excluded. If a participant fails to complete the experiment because of fatigue, illness, or excessive response delays (i.e., longer than the session allows for), then their responses will not be recorded. Data from participants who provide the same response (e.g., match) consecutively on more than 50 percent of trials will also be excluded. If a participant is excluded on the basis of any of these criteria, then another participant will be recruited to take their place. Participants are also required to bring and use any necessary aids, including glasses, contact lenses, and hearing aids.
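As a minimal sketch of how these criteria could be applied, the following R snippet flags participants for exclusion from trial-level data. The data frame and its column names (`participant`, `rt_ms`, `response`) are hypothetical, and the consecutive-response rule is interpreted here as the longest run of identical responses exceeding 50 percent of trials.

```r
# Sketch of the exclusion screen; the `trials` data frame and its
# columns (participant, rt_ms, response) are hypothetical.
library(dplyr)

flag_exclusions <- function(trials) {
  trials %>%
    group_by(participant) %>%
    summarise(
      # proportion of trials with responses faster than 500 ms
      fast_prop = mean(rt_ms < 500),
      # longest run of identical consecutive responses, as a proportion of trials
      longest_run = max(rle(as.character(response))$lengths) / n(),
      .groups = "drop"
    ) %>%
    mutate(exclude = fast_prop > 0.20 | longest_run > 0.50)
}
```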
### Data Collection

No human data have been collected for this experiment yet. We are visiting police departments around Australia in February 2019, and plan to have all expert and novice data collected by the end of the year.

## Design

This experiment employs a one-way (Expertise: expert, novice; between-subjects) design, ‘yoked’ to expertise. Sixty-four unique participant sequences have been pre-generated, to allow for as many expert participants as possible. Each sequence contains 24 trials. On each trial, a pair of fingerprints is presented to participants in the centre of the screen along with a 12-point scale ranging from 1 (Sure Different) to 12 (Sure Same), which forces a decision about whether the prints came from the same finger or two different fingers (see Thompson, Tangen & McCarthy, 2013 for discussion of design and methodology). Half of the trials are “target” trials (12 matching pairs) and half are “distractor” trials (12 non-matching pairs), selected at random from a larger set of 24 targets and 24 distractors for each participant. Novices and experts will be presented with the same set of random event sequences, so that the two groups are perfectly matched on the images they see in the experiment and the order in which they see them: the first novice participant will see an identical series of prints to the first expert participant, the second novice participant will see an identical series of prints to the second expert participant, and so on. The 64 pre-generated participant sequences can be downloaded in the [Sequences][1] component of this project.
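As an illustration of this design, here is a minimal sketch in R of how one such sequence could be built: 12 of the 24 targets and 12 of the 24 distractors, shuffled into a 24-trial order. The stimulus IDs are hypothetical; the actual pre-generated sequences are in the [Sequences][1] component.

```r
# Sketch of pre-generating yoked trial sequences; stimulus IDs are hypothetical.
make_sequence <- function(seed) {
  set.seed(seed)                                           # reproducible per sequence
  targets     <- sample(paste0("target_",     1:24), 12)   # 12 of 24 matching pairs
  distractors <- sample(paste0("distractor_", 1:24), 12)   # 12 of 24 non-matching pairs
  sample(c(targets, distractors))                          # shuffle into a 24-trial order
}

# 64 sequences; each is shared by one expert and one yoked novice,
# so both groups see the same prints in the same order.
sequences <- lapply(1:64, make_sequence)
```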
## Procedure

Participants first read an information sheet about the project, and watch an instructional video about the task with examples. They are then presented with 24 pairs of fingerprints one at a time along with the 12-point rating scale, as illustrated in the instruction video below:

@[youtube](https://youtu.be/n1sGnRZUW1M)

After making a response, immediate feedback is provided in the form of an audible tone and a green check mark (correct) or red X (incorrect). There is a 750 millisecond window for the feedback, during which the image remains on screen, followed by 500 milliseconds of blank screen before the next print appears. All prints remain on the screen until the participant provides a response. A text prompt appears during the inter-trial interval if participants take longer than 30 seconds to respond, with the message: “Please try to make your choice in less than 30 seconds.” Since our long-term goal is to develop training methods along with pre- and post-test measures that can be conducted quickly and easily, we are asking participants to limit their judgements to 30 seconds. Of course, in practice, expert fingerprint examiners could take hours or even days to make a decision, but our previous work has shown that experts are capable of performing very well even when examining pairs of prints for 2 seconds (Thompson, Tangen & McCarthy, 2014). The goal here isn’t to approximate expert performance in the field, but to devise an excellent diagnostic tool that can quickly and easily distinguish between novices and experts with different levels of fingerprint expertise.

## Software

The video instructions and Print Matching task will be presented to participants on a 13-inch MacBook Pro or MacBook Air laptop screen, with over-ear headphones. The software used to generate the trial sequences, present stimuli to participants, and record their responses was developed in LiveCode (version 9.0.2; the open source ‘community edition’). The LiveCode source files and experiment code can be downloaded in the [Software][2] component of this project. The data analytic scripts and plots for this project are produced in RStudio with R Markdown, and the packages needed to reproduce our plots and analyses are listed in the data visualisation and analysis HTML in the [Analyses][3] component of this project.

## Hypotheses and Predictions

Following the results from our previous print matching experiments with novices and experts, we predict the following:

1. In discriminating between fingerprints, we expect novice AUC scores to be better than chance, resulting in a large effect size (*d* > .5), and expert AUC scores to be better than chance, also resulting in a large effect size (*d* > .5) but larger than the effect size for novices.
2. When comparing novices to experts, we expect experts to outperform novices, resulting in a large effect size (*d* > .5) in the difference between their AUC scores.

## Analyses

### Comparison to Chance

To test participants’ perceptual sensitivity to matching fingerprints, we will compute their empirical area under the curve (AUC) based on their hits, false alarms, and confidence ratings. One-sample t-tests (or the nonparametric equivalent if distributional assumptions are not met) will be conducted comparing each group’s AUC scores to chance (.5).

### Expert-Novice Comparison

To test whether expert fingerprint examiners outperform novices in print discrimination, a between-groups t-test (or the nonparametric equivalent if distributional assumptions are not met) will be conducted comparing experts’ and novices’ AUC scores.

## Exploratory Analyses

Although we don’t have any strong predictions about response time or rate correct, we will compute participants’ Rate Correct Score (RCS) as an integrated speed-accuracy measure (Woltz & Was, 2006) that expresses the number of correct responses produced per second. We plan to conduct exploratory analyses comparing experts and novices on this measure.
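As a sketch of these analyses, the following R snippet computes each participant’s empirical AUC from the 12-point ratings (treating each rating step as a decision criterion and integrating the resulting ROC points with the trapezoidal rule), computes RCS, and runs the planned tests. The `trials` data frame and its columns (`participant`, `group`, `is_match`, `rating`, `rt_ms`) are hypothetical, as is the assumption that ratings in the upper half of the scale count as “same” responses when scoring correctness for RCS.

```r
# Sketch of the planned analyses; the `trials` data frame and its
# columns are hypothetical.
library(dplyr)

# Empirical AUC from 12-point confidence ratings: sweep a criterion
# across the rating scale and integrate hit rate against false alarm
# rate with the trapezoidal rule.
empirical_auc <- function(rating, is_match) {
  hits <- sapply(12:1, function(cr) mean(rating[is_match] >= cr))
  fas  <- sapply(12:1, function(cr) mean(rating[!is_match] >= cr))
  hits <- c(0, hits)
  fas  <- c(0, fas)
  sum(diff(fas) * (head(hits, -1) + tail(hits, -1)) / 2)
}

scores <- trials %>%
  group_by(participant, group) %>%
  summarise(
    auc = empirical_auc(rating, is_match),
    # Rate Correct Score: correct responses per second (Woltz & Was, 2006);
    # a response is scored correct if the rating falls on the matching side
    # of the scale (>= 7 assumed here to mean "same")
    rcs = sum((rating >= 7) == is_match) / (sum(rt_ms) / 1000),
    .groups = "drop"
  )

# Each group's AUC against chance (.5), then experts vs. novices
t.test(scores$auc[scores$group == "expert"], mu = 0.5)
t.test(scores$auc[scores$group == "novice"], mu = 0.5)
t.test(auc ~ group, data = scores)
```

If distributional assumptions are not met, the Wilcoxon equivalents (`wilcox.test`) would replace the t-tests, as noted above.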
## Simulated Data and Analyses

To pilot our experiment and data analysis script, we ran 12 “sim” experts and novices through the experiment. These simulated participants are programmed to provide a random response on each trial (i.e., a random rating of 1 to 12 at a random response time between 0 and 30,000 milliseconds). We used the .txt files produced by these simulations to generate and test a LiveCode data extraction tool, and an R Markdown analysis script for plotting and analysing the data in this project. They provide a useful model of the null hypothesis in our first analysis: that people cannot reliably discriminate matching from non-matching prints above chance. They are also helpful for debugging our experiment code and analysis script. If the sim participants’ performance doesn’t reflect what is expected by chance (e.g., AUC = 0.5 in this case), then we know that something has gone awry. The simulated data, LiveCode data extraction tool, R Markdown source files, resulting plots, and analysis script can be downloaded in the [Analyses][3] component of the project. A few plots from these simulated data are presented below, reflecting the appropriate level of chance:

![Simulated accuracy][4]

![Simulated ROC curves][5]

## Ethics

We have ethics clearance from human research ethics committees at The University of Queensland for the project titled “Identifying perceptual experts in fingerprint identification” (Approval Number: 2018001369), The University of Adelaide (33115), and Murdoch University (2018/149).

## References

Tangen, J. M., Thompson, M. B., & McCarthy, D. J. (2011). Identifying fingerprint expertise. *Psychological Science*, 22(8), 995-997.

Thompson, M. B., Tangen, J. M., & McCarthy, D. J. (2013). Expertise in fingerprint identification. *Journal of Forensic Sciences*, 58(6), 1519-1530.

Thompson, M. B., Tangen, J. M., & McCarthy, D. J. (2014). Human matching performance of genuine crime scene latent fingerprints. *Law and Human Behavior*, 38(1), 84.

Woltz, D. J., & Was, C. A. (2006). Availability of related long-term memory during and after attention focus in working memory. *Memory & Cognition*, 34(3), 668-684.

[1]: https://osf.io/7xdas
[2]: https://osf.io/hqwp6
[3]: https://osf.io/2pb3a
[4]: https://files.osf.io/v1/resources/2dz8k/providers/osfstorage/5c54d214832ca60017a64d3e?mode=render
[5]: https://files.osf.io/v1/resources/2dz8k/providers/osfstorage/5c54d23ce16f550018871b13?mode=render