# What is the optimal 'target' size for fingerprint-like visual search tasks?

## Rationale

Vital visual skills like identifying, classifying, and matching objects enable humans to navigate a world in which they continually encounter new stimuli. They also enable people to acquire skills and gain genuine expertise in domains that involve complex visual structure, like air-traffic control, radiology, and fingerprint identification. With practice, viewers become more attuned to the regularities across a class of stimuli that they are exposed to. Acquiring expertise in visual domains generally requires two abilities: identifying task-relevant features or dimensions, and then determining the most appropriate action to take with that information. Presumably then, experts in a domain possess superior feature localisation and visual search abilities.

Across a series of projects, we plan to create and test training exercises for turning novices into experts. By exposing novices to visual search tasks, we expect them to learn to spot and compare diagnostic features, and therefore learn to allocate their attention optimally. However, before implementing such an intervention, expert-novice differences need to be established first, so that we know how much improvement we can expect with our proposed regimes.

The field of diagnostic medicine is an informative place to contextualise our investigation of expert-novice differences in visual search. Both fingerprint identification and medical diagnosis involve making decisions with information contained within naturally varying stimuli. Just as fingerprint examiners make errors (Busey & Dror, 2011; Dror, 2017; Dror, Kukucka, Kassin, & Zapf, 2018; Edmond, Tangen, Searston & Dror, 2014), diagnosticians do too (Krupinski, 2010). Yet diagnosticians also perform remarkably well in many ways relative to novices, just as fingerprint examiners do. Fingerprint experts can tell whether two fingerprints match after only a short glance at the two prints (Thompson & Tangen, 2014), and radiologists can detect abnormalities better than chance given roughly the same amount of time (Drew, Evans, Võ, Jacobson, & Wolfe, 2013; Evans et al., 2013). But although the non-analytic abilities of experts in both domains are well established, what about their analytic abilities? In medicine, those with more experience tend to detect lesions earlier in their search, and cover less area than those without experience (Krupinski, 1996). Expert radiologists are also more accurate, more confident, and quicker to diagnose skeletal fractures, especially when the anomalies are more ambiguous (Wood et al., 2013). Medical experts therefore possess superior analytic abilities, notably in their visual search, but we know little about whether this superior performance holds true for fingerprint examination too. Given the similarities between the two domains, fingerprint examiners should outperform novices on visual search tasks relevant to their domain.

## The Present Study

The purpose of the present experiment is to inform a subsequent study in which we will test expert-novice differences in visual search using a fingerprint-like task. Both the current experiment and the subsequent one will exclusively use fingerprint stimuli, but we do not believe the findings and conclusions will be theoretically specific to fingerprint identification.
That said, we will use fingerprints for two main reasons: (1) fingerprints are novel to almost everyone, apart from a small group of highly trained specialists across the world’s police and security agencies, and (2) there is significant incentive to investigate forensic science practices with greater scientific rigour (for a more detailed explanation of this rationale, see: [https://osf.io/rxe25][1]).

We have conceptualised two visual search tasks to assess whether expert fingerprint examiners possess superior visual search abilities in their domain, relative to novices (see below), but we first need to determine the optimal task constraints. The first task asks participants to spot points of similarity (find-the-fragment), whereas the second asks them to spot points of difference (spot-the-difference; see below for more detail). The present study aims to investigate how novices perform on both of these tasks as a function of fragment size. That is, how large do the fragments need to be in the find-the-fragment task, and how large do the differences need to be in the spot-the-difference task, for novices to locate them? The broader objective is to identify the optimal fragment size for both tasks for when we subsequently compare novices and experts in another experiment. In that subsequent experiment, we plan to use fragment sizes that novices are able to locate on 50% of trials. We consider this 50% level to be an 'optimal' size because 50% is an intuitive baseline, and at this base level for novices, experts are unlikely to be at ceiling in their performance.

### Participants

We will collect data from 48 undergraduate students at the University of Queensland. The goal of the experiment is to find the optimal fragment size across all novices and all fragment types, and we only require enough participants to obtain a stable average. Given that each participant will make 24 judgements in both tasks, a stable average would likely be attained after testing only a few people. We decided to overshoot this number and picked a number divisible by 24 so that we can present all 24 trial order combinations twice (see Procedure), and thereby control for any order effects.

### Exclusion criteria

- Participants who are incorrect (that is, they click on the incorrect location) on more than 25% of trials will be excluded.
- Participants who make no response on more than 25% of trials will be excluded.
- If a participant fails to complete the experiment because of fatigue, illness, or excessive response delays (i.e., longer than the experimental session allows), then their responses will not be recorded.
- If a participant is excluded based on these criteria, then another participant will be recruited to take their place.

### Design

We have conceptualised two tasks that we will use to assess experts and novices in a subsequent experiment, but we first need to determine which fragment size, and alteration size, is optimal for testing these group differences. In the first task (find-the-fragment), we will present participants with a small fragment of fingerprint ridge detail (on the left of the screen) and a larger array of ridge detail (on the right). Participants are asked to spot the smaller fragment within the larger array as quickly as they can.

The second task is a spot-the-difference task in which we present participants with two identical fingerprint images side by side, except that the print on the right has a fragment that has been replaced with different fingerprint information (using Photoshop's Content-Aware Fill tool). Participants are asked to click on the area of the fingerprint that is different from the one on the left. Participants will see 24 trials of each task, and on each trial, the size of the altered fragment will increase over time. Participants will complete each task in blocks, the order of which will be counterbalanced. Participants will receive points depending on how well they do in the task to maintain their motivation. We are interested in finding the fragment size that novices can spot on 50% of the trials.

### Materials

#### Find-the-fragment task

The stimuli in this task (in which people need to find the corresponding fragment) will be selected from the same pool of 100 fingerprints used in a previous experiment. In this earlier study, we explored how experts and novices differed in what they consider to be informative (diagnostic) parts of a fingerprint, and what they consider uninformative (non-diagnostic; for a more complete description see this pre-registration: [https://osf.io/qzf4t][2]). In this earlier study, we obtained judgements from 30 novices and 30 experts (we only had data from 26 of the experts when we began the present experiment) across 100 fingerprint images. We overlaid a 50x50 transparent grid to obtain the coordinates of these points. Using these locations as centre points, we then generated thousands of circular fingerprint 'fragments', varying in size (1x1, 3x3, 5x5, 7x7, 9x9, 11x11, 13x13, 15x15, 17x17, 19x19, 21x21, 23x23, 25x25 grid squares). We obtained every size for every coordinate, unless the fragment spanned beyond the boundaries of the image. For example, if the coordinate fell close to the edge of the image, we may only have obtained a 1x1 or 3x3 fragment from it, because a 5x5 fragment would have extended beyond the edge of the image. Additionally, we obtained four distinct fragment sets depending on who chose the coordinate (expert vs novice) and whether it was considered useful or not (diagnostic vs non-diagnostic). For each participant, we randomly selected six fragments from each of the four fragment sets (covering 10 different sizes for each, from 1x1 to 19x19 squares). We excluded fragments that were too close to the boundaries of the image because we wanted fragments that could span up to 19x19 squares.
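To make the extraction-and-exclusion logic concrete, here is a minimal Python sketch that cuts a circular fragment of a given width (in grid squares) around a chosen coordinate and rejects any fragment that would cross the edge of the print. The function name, the pixel size of a grid cell, and the array representation are our own illustrative assumptions; the actual stimuli were generated in advance by a separate process.

```python
import numpy as np

def extract_fragment(print_img, centre, size, cell=10):
    """Cut a circular fragment, `size` grid squares across, centred on the
    grid coordinate `centre`, or return None if it would cross the boundary.

    print_img -- 2D greyscale array of the fingerprint
    centre    -- (row, col) coordinate on the 50x50 grid
    size      -- odd width in grid squares (1, 3, 5, ... 25)
    cell      -- side of one grid square in pixels (illustrative value)
    """
    half = (size * cell) // 2            # fragment radius in pixels
    cy = centre[0] * cell + cell // 2    # centre of the chosen grid square,
    cx = centre[1] * cell + cell // 2    # converted to pixel coordinates
    h, w = print_img.shape
    if cy - half < 0 or cx - half < 0 or cy + half > h or cx + half > w:
        return None                      # would extend past the edge: exclude
    patch = print_img[cy - half:cy + half, cx - half:cx + half].astype(float)
    yy, xx = np.ogrid[:2 * half, :2 * half]
    inside = (yy - half) ** 2 + (xx - half) ** 2 <= half ** 2
    return np.where(inside, patch, 255.0)  # white outside the circle
```

Running this for every coordinate and every size, and keeping only the non-None results, reproduces the exclusion pattern described above: coordinates near an edge contribute only their smaller fragment sizes.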
#### Spot-the-difference task

The pool of original fingerprints in this task was identical to the pool used in the find-the-fragment task. However, the 'altered' fingerprints were generated by taking the coordinates from the earlier experiment, deleting a circular fragment (of various sizes: 1x1, 3x3, 5x5, ... 25x25 squares), and filling the area with the Content-Aware Fill tool in Photoshop. Content-Aware Fill simplifies the process of removing objects from an image and runs an algorithm called PatchMatch (Barnes, Shechtman, Finkelstein, & Goldman, 2009). PatchMatch uses a randomised algorithm to approximate nearest-neighbour matches between image patches (small, square regions), quickly finding correspondences between the patches of an image. Initially, the nearest-neighbour field is filled with either random offsets or some prior information. Then, an iterative update process is applied to the nearest-neighbour field, in which good patch offsets are propagated to adjacent pixels, followed by a random search in the neighbourhood of the best offset found so far.
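For readers unfamiliar with the algorithm, the Python sketch below illustrates that loop: random initialisation of the nearest-neighbour field, propagation of good offsets from already-scanned neighbours, then a random search with an exponentially shrinking radius. It is a simplified rendering of Barnes et al. (2009), not the code Photoshop actually runs; the patch size, iteration count, and full recomputation of patch distances are illustrative simplifications.

```python
import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p):
    """Sum of squared differences between p x p patches in A and B."""
    d = (A[ay:ay + p, ax:ax + p].astype(float)
         - B[by:by + p, bx:bx + p].astype(float))
    return float((d * d).sum())

def patchmatch(A, B, p=7, iters=5, seed=0):
    """Approximate, for every p x p patch in A, its nearest neighbour in B."""
    rng = np.random.default_rng(seed)
    H, W = A.shape[0] - p + 1, A.shape[1] - p + 1    # valid patch origins in A
    Hb, Wb = B.shape[0] - p + 1, B.shape[1] - p + 1  # valid patch origins in B
    # 1. initialise the nearest-neighbour field with random offsets
    nnf = np.stack([rng.integers(0, Hb, (H, W)),
                    rng.integers(0, Wb, (H, W))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, *nnf[y, x], p)
                      for x in range(W)] for y in range(H)])
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1              # alternate the scan order
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        for y in ys:
            xs = range(W) if step == 1 else range(W - 1, -1, -1)
            for x in xs:
                # 2. propagation: shift an already-visited neighbour's match
                #    by the same displacement and keep it if it is better
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < H and 0 <= nx < W:
                        by = int(np.clip(nnf[ny, nx][0] + (y - ny), 0, Hb - 1))
                        bx = int(np.clip(nnf[ny, nx][1] + (x - nx), 0, Wb - 1))
                        d = patch_dist(A, B, y, x, by, bx, p)
                        if d < cost[y, x]:
                            nnf[y, x], cost[y, x] = (by, bx), d
                # 3. random search around the best match so far,
                #    halving the search radius each time
                radius = max(Hb, Wb)
                while radius >= 1:
                    by = rng.integers(max(0, nnf[y, x][0] - radius),
                                      min(Hb, nnf[y, x][0] + radius + 1))
                    bx = rng.integers(max(0, nnf[y, x][1] - radius),
                                      min(Wb, nnf[y, x][1] + radius + 1))
                    d = patch_dist(A, B, y, x, by, bx, p)
                    if d < cost[y, x]:
                        nnf[y, x], cost[y, x] = (by, bx), d
                    radius //= 2
    return nnf  # nnf[y, x]: the (row, col) in B best matching patch (y, x) in A
```

Filling a deleted region then amounts to copying pixels from the matched patches, which is why the filled-in area blends smoothly with the surrounding ridge detail.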
For the spot-the-difference task, we also had four different sets of alterations (expert-diagnostic, expert-nondiagnostic, novice-diagnostic, novice-nondiagnostic). For every participant, we randomly selected images from each of these four set types. We excluded images whose alterations were too close to the edges of the image because we required the alterations to span up to 19x19 squares.

### Procedure

Up to eight participants at a time will complete the experiment on separate MacBook Air laptops with a screen resolution of 1440x900 pixels at 72 dpi. They will complete a block of 24 spot-the-difference trials and then a block of 24 find-the-fragment trials, or vice versa. The experiment has been programmed in LiveCode Community 9.0.1, which is free, open-source software ([http://downloads.livecode.com][3]). Participants will be given an instruction sheet and will listen to a standard set of instructions presented in a video as follows:

#### Find-the-fragment

In this task, participants will be presented with a 1-square fingerprint fragment on the left of the screen and, on the right, the original fingerprint image from which the fragment was sourced. Participants will be asked to locate the fragment within the larger image on the right as quickly as possible by clicking on the area where they think it is located. After 7 seconds, the fragment will increase in size to 3x3 squares, and it will keep increasing by two squares in width every 7 seconds until participants correctly locate it. To keep participants motivated, they will receive 1000 points if they locate the fragment within the first 7 seconds, but 100 points fewer for every further 7 seconds they spend on the trial. If they fail to correctly find the fragment, they will move on to the next trial, receiving zero points. If they click on the incorrect location, they will also receive zero points. The participant's total score throughout the experiment will be presented in the top left of the screen.

@[youtube](https://youtu.be/HUc4bvWNhAc)

#### Spot-the-difference

A similar procedure will be used in the spot-the-difference task. On each trial, an original, unaltered fingerprint image will be presented on the left of the screen and an altered version will be presented on the right. Participants will be asked to click on the area of the altered print that is different from the image on the left. At the beginning of the trial, this change will span 1 square, but it will increase in size (by two squares in width) every 7 seconds until the participant spots the difference. To keep participants motivated, they will receive 1000 points if they correctly spot the difference within the first 7 seconds, but will receive 100 points fewer for every 7 seconds thereafter. They will receive zero points if they click on the incorrect location. The participant's total score throughout the experiment will be presented in the top left of the screen.

@[youtube](https://youtu.be/KTIZi1rhqhg)
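Both tasks use the same payoff rule, so as a worked example: a correct click 16 seconds into a trial falls in the third 7-second interval and earns 1000 - 200 = 800 points. A minimal Python sketch of this rule follows (the experiment itself is implemented in LiveCode; the function name is ours, and the floor at zero for very slow correct clicks is our assumption, since the description only specifies zero points for incorrect clicks and misses):

```python
def trial_points(correct_click, seconds_elapsed, step=7.0, start=1000, penalty=100):
    """Points for one trial: 1000 if correct within the first 7 s, 100 fewer
    for each further completed 7-second interval, and 0 for an incorrect
    click or no response."""
    if not correct_click:
        return 0
    intervals = int(seconds_elapsed // step)    # completed 7-second intervals
    return max(start - penalty * intervals, 0)  # floor at 0 is an assumption

assert trial_points(True, 3.2) == 1000   # within the first 7 seconds
assert trial_points(True, 16.0) == 800   # third interval: 1000 - 200
assert trial_points(False, 16.0) == 0    # wrong location scores nothing
```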
### Data collection

No data have been collected yet.

### Planned Analysis

We will measure the percentage of correct clicks for every participant at every fragment size across every trial, and plot the distribution of correct clicks as a function of fragment size. If a participant clicks correctly at a particular size, all larger sizes will be coded as correct too. For example, if a participant clicks correctly at 5x5 squares, they will be coded as having clicked correctly at all larger sizes as well (7x7, 9x9, etc.). We will exclude all data points where the participant was incorrect. We are interested in finding the fragment size at which novices correctly locate the fragment, or spot the difference, 50% of the time.
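As a concrete rendering of this coding rule and of how the 50% point can be read off, here is a short Python sketch run on hypothetical data. The cumulative coding follows the description above; the linear interpolation between adjacent sizes is our own assumption, and any monotone curve-fitting method would serve equally well:

```python
import numpy as np

SIZES = np.arange(1, 26, 2)  # fragment widths: 1x1, 3x3, ..., 25x25 squares

def accuracy_by_size(first_correct):
    """`first_correct` holds, per trial, the smallest size at which the
    participant clicked correctly, or None if they never did (trials with
    incorrect clicks are excluded before this point). A correct click at
    one size is coded as correct at every larger size too."""
    hits = np.array([[s is not None and s <= size for s in first_correct]
                     for size in SIZES])
    return hits.mean(axis=1)  # proportion of trials correct at each size

def size_at_50(acc):
    """Linearly interpolate the size where accuracy first reaches 50%."""
    if not (acc >= 0.5).any():
        return None           # accuracy never reaches the threshold
    i = int(np.argmax(acc >= 0.5))
    if i == 0:
        return float(SIZES[0])
    x0, x1, y0, y1 = SIZES[i - 1], SIZES[i], acc[i - 1], acc[i]
    return float(x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0))

# hypothetical trials, first correct at 9x9, 11x11, 11x11, 13x13, 15x15, never:
acc = accuracy_by_size([9, 11, 11, 13, 15, None])
print(size_at_50(acc))  # -> 11.0
```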
### Predictions

Although this study is exploratory, we predict that the frequency of correct clicks as a function of fragment size will increase in a sigmoid-like fashion. We predict that participants will perform better on the find-the-fragment task than on the spot-the-difference task because the former task is less complex and likely requires fewer fixations to complete. Based on some quick pilot trials (on ourselves), we predict that participants will be able to find the fragment (on 50% of trials) when the fragments are about 11x11 squares in size. We also predict that they will be able to spot the difference (50% of the time) when the altered fingerprint fragment in the spot-the-difference task is 13x13 squares in size.

![Rough Predictions][4]

## Ethics

Ethics approval (16-PSYCH-PHD-25-AH) was obtained from the University of Queensland Psychology Ethics Review Committee on 19/5/17. The information sheet is to be given to participants before commencing the experiment and the debrief sheet after completion. These can be found in the Files section.

## References

Barnes, C., Shechtman, E., Finkelstein, A., & Goldman, D. B. (2009). *PatchMatch.* ACM Transactions on Graphics, 28(3), 1. https://doi.org/10.1145/1531326.1531330

Busey, T. A., & Dror, I. E. (2011). *Special abilities and vulnerabilities in forensic expertise.* In The fingerprint sourcebook (pp. 1–23). Retrieved from www.cci-hq.com

Drew, T., Evans, K., Võ, M. L.-H., Jacobson, F. L., & Wolfe, J. M. (2013). *Informatics in radiology: What can you see in a single glance and how might this guide visual search in medical images?* RadioGraphics, 33(1), 263–274. https://doi.org/10.1148/rg.331125023

Dror, I. E. (2017). *Human expert performance in forensic decision making: Seven different sources of bias.* Australian Journal of Forensic Sciences, 49(5), 541–547. https://doi.org/10.1080/00450618.2017.1281348

Dror, I. E., Kukucka, J., Kassin, S. M., & Zapf, P. A. (2018). *When expert decision making goes wrong: Consensus, bias, the role of experts, and accuracy.* Journal of Applied Research in Memory and Cognition. https://doi.org/10.1016/j.jarmac.2018.01.007

Edmond, G., Tangen, J. M., Searston, R. A., & Dror, I. E. (2014). *Contextual bias and cross-contamination in the forensic sciences: The corrosive implications for investigations, plea bargains, trials and appeals.* Law, Probability and Risk, 14(1), 1–25. https://doi.org/10.1093/lpr/mgu018

Evans, K. K., Georgian-Smith, D., Tambouret, R., Birdwell, R. L., & Wolfe, J. M. (2013). *The gist of the abnormal: Above-chance medical decision making in the blink of an eye.* Psychonomic Bulletin and Review, 20(6), 1170–1175. https://doi.org/10.3758/s13423-013-0459-3

Krupinski, E. A. (1996). *Visual scanning patterns of radiologists searching mammograms.* Academic Radiology, 3(2), 137–144. https://doi.org/10.1016/S1076-6332(05)80381-2

Krupinski, E. A. (2010). *Current perspectives in medical image perception.* Attention, Perception & Psychophysics, 72(5), 1205–1217. https://doi.org/10.3758/APP.72.5.1205

Thompson, M. B., & Tangen, J. M. (2014). *The nature of expertise in fingerprint matching: Experts can do a lot with a little.* PLoS ONE, 9(12), 1–23. https://doi.org/10.1371/journal.pone.0114759

Wood, G., Knapp, K. M., Rock, B., Cousens, C., Roobottom, C., & Wilson, M. R. (2013). *Visual expertise in detecting and diagnosing skeletal fractures.* Skeletal Radiology, 42(2), 165–172. https://doi.org/10.1007/s00256-012-1503-5

[1]: https://osf.io/rxe25
[2]: https://osf.io/qzf4t
[3]: http://downloads.livecode.com
[4]: https://files.osf.io/v1/resources/3g4e7/providers/osfstorage/5bbaf13f0463cb0018cae780?mode=render