When a fingerprint is located at a crime scene, a human examiner is relied upon to manually compare this print to those stored in a database. Several experiments have now shown that these professional analysts are highly accurate, but not infallible, much like practitioners in other fields that involve high-stakes decision making. One method to offset mistakes in these safety-critical domains is to distribute important decisions across groups of raters who independently assess the same information. This redundancy allows the system to continue operating effectively even in the face of rare and random errors. Here, we extend this “wisdom of crowds” approach to fingerprint analysis by comparing the performance of individuals to crowds of professional analysts. We replicate previous findings that individual experts greatly outperform individual novices, particularly in their false positive rate, but they do make mistakes. When we pool the decisions of small groups of experts by selecting the majority decision, their false positive rate decreases by up to 8% and their false negative rate decreases by up to 12%. Pooling the decisions of novices results in a similar drop in false negatives, but increases their false positive rate by up to 11%. Aggregating people’s judgements by selecting the majority decision performs better than selecting the decision of the most confident or the most experienced rater. Our results show that combining independent judgements from small groups of fingerprint analysts can greatly improve their performance and could prevent such mistakes from entering the courts. Our raw data are available in the Data module, and our data visualisation and analysis code is in the Analysis module.
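The majority-pooling rule described above can be sketched as a short function (a minimal illustration, not the paper's analysis code; the function name and the "match" / "no match" labels are assumptions for the example):

```python
from collections import Counter

def pool_decisions(decisions):
    """Return the majority decision from a list of independent ratings.

    `decisions` is a list of labels such as "match" / "no match".
    Assumes an odd-sized group, so a strict majority always exists.
    """
    counts = Counter(decisions)
    label, _ = counts.most_common(1)[0]
    return label

# A crowd of three analysts; one erroneous "match" is outvoted.
print(pool_decisions(["no match", "match", "no match"]))
```

With an odd group size, a single rare error is always outvoted, which is the redundancy property the abstract appeals to.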