# Searching for diagnostic features: A skill acquired by domain experts

## Collaborators

Samuel Robson and Jason Tangen

## Rationale

People can achieve great things, and the highest levels of human achievement remain a fascination to laypeople and scientists alike. We marvel at those at the top of their fields in domains like music, art, science, and sport. The science of expertise can provide key insights into how such feats are achieved by investigating the ways in which experts and novices differ, and visual expertise is one area where we can study these differences.

Identifying, classifying, and matching objects in the visual environment are valuable survival skills that everyone possesses; these abilities allow us to navigate the world by taking what we know and generalising from one instance to the next. These general (and likely adaptive) abilities can also lead us to acquire more specific skill sets as we gain further experience with particular classes of stimuli. Many professions involve interacting with the visual environment: driving cars, directing air traffic, examining x-rays, playing chess, and matching fingerprints. The general perceptual abilities that we all possess give us the potential to become skilled in these domains, each of which contains complex structure. Visual expertise forms as perceivers encounter more of the regularities and irregularities within this structure.

In visual domains, expertise generally requires two abilities: identifying the locations of features that are relevant to the task, and then determining the most appropriate action to take with this information (Mozer, Pashler, Lindsey, & Jones, 2012). Novices can acquire these skills through trial and error, but relying on the knowledge of experts may make this process quicker and more effective. With practice in a domain, novices become more attuned to the relevant features and structural relations that define categories and identities, and over time they begin to extract these features and relations more efficiently. This perceptual learning is the bedrock of many expert domains. The present study aims to examine the information and processes that expertise grants those who spend their time honing their skills in such domains.

Fingerprints, in particular, offer an interesting natural stimulus set for investigating the ways in which professional experts differ from novices. Contrary to popular belief, computer algorithms are not relied upon to decide whether two fingerprints match; rather, this task falls to a small group of highly trained human examiners in police and security agencies around the world. Fingerprints are well suited to testing the nature of expertise because we can compare human experts with everyone else, who typically has no experience with fingerprints whatsoever, and we can use this expert knowledge to train people who are new to the field. Fingerprints are also apt because the field of forensics currently faces questions over the scientific reliability and validity of its techniques (Cole, 2008; Risinger, 2010; Saks, 2010). In the past, a common belief was that forensic examiners were unbiased and objective in their decision making, but several experiments have shown that they are fallible; they make mistakes and are influenced by extraneous information (Busey & Dror, 2011; Busey & Loftus, 2007; Dror, 2017; Dror, Kukucka, Kassin, & Zapf, 2018; Edmond, Tangen, Searston, & Dror, 2015).
The novelty of fingerprints to the general population, and the incentive to study forensic techniques with greater scientific rigour, therefore offer an ideal backdrop for studying visual expertise. Fingerprint experts have shown several abilities that are far superior to those of novices (see, for example, Searston & Tangen, 2017; Thompson & Tangen, 2014). Most of these abilities are domain-specific and non-analytic in nature. However, very little work has examined experts' analytic capabilities, which most experts claim are more in line with their everyday decision making. Peer over the shoulder of an expert and you will see them carefully plot various details on a fingerprint and then check whether another print contains those same details. One study found that there is little inter- or intra-observer consistency in the number of features each expert chooses to plot (Dror et al., 2011). But compared to what? It is not clear how these experts compare to novices when faced with a task like this.

In the present experiment, we want to explore how novices compare to experts when choosing features for comparison. Specifically, we want to see whether there are differences in the location and dispersion of the features that experts and novices consider to be most and least useful. We then plan to use these feature choices to test the visual search abilities of experts relative to novices in a subsequent experiment, with the aim of testing whether experts are more sensitive to, and have an easier time spotting, the more diagnostic or useful features.

## Design

In Experiment 1, we aim to find out whether experts and novices differ in which features, details, or areas of a fingerprint they think are useful, and which they regard as useless. To examine these differences, we plan to conduct an independent-groups PERMANOVA and PERMDISP on the points that experts and novices mark up on each of our 100 fingerprints using coloured pencils. We will present each print on a sheet of white A4 paper and then score the responses by measuring the coordinates of the points that each participant chooses, overlaying a grid printed on a transparent sheet on each print. We will assess whether the distribution and location of ‘useful’ and ‘useless’ features vary between the novice and expert groups.
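To make these tests concrete, the following is a minimal sketch of how the location (PERMANOVA) and dispersion (PERMDISP) comparison could be run for a single print in Python with scikit-bio; the simulated coordinates and the `compare_groups` helper are illustrative assumptions rather than our registered pipeline.

```python
# A minimal sketch of the single-print group comparison, assuming Python
# with scikit-bio. The coordinates below are simulated for illustration.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova, permdisp

def compare_groups(coords, groups, permutations=999):
    """Test whether the marked points differ between groups on one print.

    coords: (n, 2) array of grid coordinates, one row per participant.
    groups: length-n labels, e.g. 'expert' or 'novice'.
    Returns the PERMANOVA (location) and PERMDISP (dispersion) p-values.
    """
    # Pairwise Euclidean distances between all marked points.
    dm = DistanceMatrix(squareform(pdist(coords)))
    location = permanova(dm, list(groups), permutations=permutations)
    dispersion = permdisp(dm, list(groups), permutations=permutations)
    return location['p-value'], dispersion['p-value']

# Illustration: experts cluster tightly on one region; novices are diffuse.
rng = np.random.default_rng(1)
experts = rng.normal(loc=[25, 25], scale=2.0, size=(30, 2))
novices = rng.uniform(low=1, high=50, size=(30, 2))
coords = np.vstack([experts, novices])
groups = ['expert'] * 30 + ['novice'] * 30
print(compare_groups(coords, groups))
```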
## Participants

We plan to run a total of 30 novices and 30 experts in Experiment 1, a number limited by the time that our expert participants can contribute to the experiment. The data from each participant will be converted into digital form using LiveCode Community 9.0.0. The experimental software will generate a single plain text file for each participant, which we will upload to the OSF. Participants who fail to complete the entire experiment for reasons such as fatigue or illness will not have their responses recorded.

## Procedure

Each participant will be given a booklet containing 100 fingerprint images (one print per page). They will be asked to plot one point at the centre of an area or feature on each image that they think would be useful for distinguishing that fingerprint from other fingerprints, and one point at the centre of a useless area or feature that would be unhelpful in distinguishing that fingerprint from other fingerprints (the study overview and instructions are presented below).

![Study Overview and Instructions][1]

### Materials

The 100 fingerprint images will be drawn from our ground-truth database and cropped to a square such that the entire image is filled with ridge detail (more details can be found in the *Materials and Experiment* section). We randomised the order of the images using a Page Shuffle application (each version was shuffled 20 times) to produce six different versions of the booklet. The images are all presented on white A4 paper (80 gsm). In total, there will be 50 latent and 50 tenprint images, with 10 images of each finger type (e.g., left thumb, right middle finger). No two fingerprints will be from the same source. Participants will also be given a red and a green pencil to mark up the points on the image (see below, left).

![Example procedure and data collection method][2]

### Data Collection to Date

We have distributed booklets to our expert fingerprint examiners and to our novice participants, but we have not yet examined any of the results from the booklets that have been returned to us.

### Timeline

Data collection will be completed by 27 June 2018.

### Planned Analyses

We will obtain the coordinates of all the responses from experts and novices for each of the prints using a transparent 50 × 50 grid that we will overlay onto each image (above, right). To compare the two groups, we will use these coordinates to conduct a PERMANOVA and PERMDISP analysis to see whether the points that experts choose differ significantly from the points that novices choose in their location and dispersion. We will run this analysis for each of the 100 prints and compute the percentage of prints for which there was a group difference. We will also use these points to create heat maps for each group for a few images to illustrate the differences in the way each group tended to mark up an image, and what they perceived to be useful and useless.
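The batch analysis could then loop the same test over all 100 prints and bin each group's points for the heat maps. The sketch below reuses the `compare_groups` helper from the Design section; the data layout (one coordinate array and label list per print, parsed from the per-participant text files) and the binning choices are assumptions for illustration.

```python
# A sketch of the batch analysis across all 100 prints, reusing the
# compare_groups helper above. The input layout is assumed: one
# (coords, groups) pair per print.
import numpy as np

ALPHA = 0.05  # threshold for counting a print as showing a group difference

def summarise(prints, alpha=ALPHA):
    """prints: list of (coords, groups) tuples, one per fingerprint.

    Returns the percentage of prints with a significant group difference
    in location and in dispersion.
    """
    p_values = np.array([compare_groups(c, g) for c, g in prints])
    pct_location = 100.0 * np.mean(p_values[:, 0] < alpha)
    pct_dispersion = 100.0 * np.mean(p_values[:, 1] < alpha)
    return pct_location, pct_dispersion

def heat_map(coords, grid=50):
    """Bin one group's marked points into the 50 x 50 scoring grid."""
    counts, _, _ = np.histogram2d(
        coords[:, 0], coords[:, 1],
        bins=grid, range=[[0.5, 50.5], [0.5, 50.5]])
    # Visualise with, e.g., matplotlib: plt.imshow(counts.T, origin='lower')
    return counts
```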
### Hypotheses

We predict, generally, that the points experts choose will differ significantly in location and dispersion from the points that novices choose. We expect less dispersion among the expert points (i.e., more consensus) than among the points chosen by novices. We also expect a difference in the location and dispersion of the useless points chosen by novices and experts; again, we predict that the points identified by experts will show less dispersion than novice points. However, we are less confident in this latter prediction because both experts and novices might simply choose the noisiest areas of a print.

## References

Busey, T. A., & Dror, I. E. (2011). Special abilities and vulnerabilities in forensic expertise. Friction Ridge Sourcebook. Washington, DC: NIJ Press.

Busey, T. A., & Loftus, G. R. (2007). Cognitive science and the law. Trends in Cognitive Sciences, 11(3), 111-117. doi:10.1016/j.tics.2006.12.004

Cole, S. A. (2008). The ‘opinionization’ of fingerprint evidence. BioSocieties, 3(1), 105-113. doi:10.1017/S1745855208006030

Dror, I. E. (2017). Human expert performance in forensic decision making: Seven different sources of bias. Australian Journal of Forensic Sciences, 1-7. doi:10.1080/00450618.2017.1281348

Dror, I. E., Champod, C., Langenburg, G., Charlton, D., Hunt, H., & Rosenthal, R. (2011). Cognitive issues in fingerprint analysis: Inter- and intra-expert consistency and the effect of a ‘target’ comparison. Forensic Science International, 208(1), 10-17. doi:10.1016/j.forsciint.2010.10.013

Dror, I. E., Kukucka, J., Kassin, S. M., & Zapf, P. A. (2018). When expert decision making goes wrong: Consensus, bias, the role of experts, and accuracy. Journal of Applied Research in Memory and Cognition, 7(1), 162-163.

Edmond, G., Tangen, J. M., Searston, R. A., & Dror, I. E. (2015). Contextual bias and cross-contamination in the forensic sciences: The corrosive implications for investigations, plea bargains, trials and appeals. Law, Probability and Risk, 14(1), 1-25. doi:10.1093/lpr/mgu018

Mozer, M. C., Pashler, H., Lindsey, R., & Jones, J. (2012). Efficient training of visual search via attentional highlighting. Submitted for publication.

Risinger, D. M. (2010). The NAS/NRC report on forensic science: A path forward fraught with pitfalls. Utah Law Review, 225.

Saks, M. J. (2010). Forensic identification: From a faith-based “science” to a scientific science. Forensic Science International, 201(1), 14-17. doi:10.1016/j.forsciint.2010.03.014

Searston, R. A., & Tangen, J. M. (2017). The style of a stranger: Identification expertise generalizes to coarser level categories. Psychonomic Bulletin & Review, 24(4), 1324-1329. doi:10.3758/s13423-016-1211-6

Thompson, M. B., & Tangen, J. M. (2014). The nature of expertise in fingerprint matching: Experts can do a lot with a little. PLoS ONE, 9(12), e114759. doi:10.1371/journal.pone.0114759

[1]: https://files.osf.io/v1/resources/rxe25/providers/osfstorage/5b14b2e0f1f288000f6413ab?mode=render
[2]: https://files.osf.io/v1/resources/rxe25/providers/osfstorage/5b14b518f1f288001063b41a?mode=render