Review: Accuracy and Reliability of Forensic Latent Fingerprint Decisions

Emily C. Lennert

Category: patterned evidence

Keywords: fingerprints, latent, reliability, accuracy, error, false positive, false negative

Article to be reviewed:

  1. Ulery, B. T.; Hicklin, R. A.; Buscaglia, J.; Roberts, M. A. “Accuracy and reliability of forensic latent fingerprint decisions.” Proceedings of the National Academy of Sciences 2011, 108 (19), 7733–7738.

Additional references:

  2. Strengthening Forensic Science in the United States: A Path Forward; National Academies Press: Washington, D.C., 2009.

Disclaimer: The opinions expressed in this review are an interpretation of the research presented in the article. These opinions are those of the summation author and do not necessarily represent the position of the University of Central Florida or of the authors of the original article.

Summary: In 2009, the National Research Council of the National Academies2 published a report on the state of forensic science. Among its many recommendations and comments, the report called for studies of the accuracy and reliability of latent fingerprint decisions. Latent fingerprints are a common, if not the most common, form of patterned evidence. They are the prints left behind unintentionally by the perpetrator at the scene of a crime, and they are not visible until revealed by development procedures such as dusting. Latent print examiners analyze latent prints and compare them to exemplar prints, which are fingerprints collected from a known person and are of higher quality than latent prints. The exemplars most similar to the latent fingerprint are selected for comparison, either because they were submitted to the crime lab as known fingerprints from a potential suspect or because they were identified by an automated system such as the Automated Fingerprint Identification System (AFIS).

Examiners follow the ACE-V method for fingerprint examination: analysis, comparison, evaluation, and verification by a different examiner. The latent fingerprint is first analyzed to determine whether it is suitable for comparison. It is then compared and evaluated against each exemplar print, and a decision is made as to whether the fingerprints come from the same source (individualization), from different sources (exclusion), or whether the comparison is inconclusive. Individualization decisions must be verified; verification may be performed for exclusion and inconclusive decisions but is not required. In a blind verification, a second examiner compares the unknown latent print to the known fingerprint without knowing the initial decision.
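
As a rough illustration of the decision categories and verification rule described above, the following Python sketch models the outcome of a single ACE-V comparison. The class and function names are illustrative assumptions, not part of the study or its software.

    from enum import Enum

    class Decision(Enum):
        """Possible outcomes of the comparison and evaluation phases."""
        INDIVIDUALIZATION = "same source"
        EXCLUSION = "different source"
        INCONCLUSIVE = "inconclusive"

    def verification_required(decision: Decision) -> bool:
        """Individualizations must be verified by a second examiner;
        verification of exclusion and inconclusive decisions is optional."""
        return decision is Decision.INDIVIDUALIZATION

    # Example: only an individualization decision triggers mandatory (blind) verification.
    print(verification_required(Decision.INDIVIDUALIZATION))  # True
    print(verification_required(Decision.EXCLUSION))          # False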

This study investigated the accuracy of the decisions made by latent fingerprint examiners; the authors stated that the accuracy of latent fingerprint examinations had not previously been studied on such a large scale. The frequency of errors was evaluated. False positive errors were erroneous individualizations, that is, matches of fingerprints that did not originate from the same source. False negative errors were exclusion or inconclusive decisions made when the latent fingerprint and exemplar print did, in fact, originate from the same source. Examiners knowingly participated in the study, using custom software developed by Noblis that allowed minimal image manipulation and enhancement. A latent print was presented to the examiner, who decided whether it was suitable for analysis. If it was judged suitable, an exemplar print was then presented. The examiner compared and evaluated the fingerprints, and the decision was recorded. The true source of each pair was known to the software, allowing errors to be identified.
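
To make these error definitions concrete, the short Python sketch below classifies a recorded decision against the known ground truth of a latent-exemplar pair, following the review's framing in which both erroneous exclusions and inconclusive decisions on same-source pairs count as false negatives. The function name and string labels are illustrative assumptions.

    def classify_outcome(decision: str, same_source: bool) -> str:
        """Label an examiner's decision given the known ground truth.

        decision: "individualization", "exclusion", or "inconclusive"
        same_source: True if the latent and exemplar prints come from the same finger
        """
        if decision == "individualization":
            return "correct" if same_source else "false positive"
        if decision in ("exclusion", "inconclusive"):
            # Per the framing above, a non-individualization decision on a
            # same-source pair is counted as a false negative.
            return "false negative" if same_source else "correct"
        raise ValueError(f"unknown decision: {decision}")

    print(classify_outcome("exclusion", same_source=True))           # false negative
    print(classify_outcome("individualization", same_source=False))  # false positive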

The study reported a false positive rate of 0.1%, with 6 false positive errors occurring in 4,083 comparisons that resulted in an individualization decision. The six errors were made by five examiners, one of whom made two errors, and no two examiners made the same erroneous decision. A false negative rate of 7.5% was reported; false negative errors were much more frequent than false positive errors, and 85% of participants made at least one false negative error. The study states that verification would have detected most, if not all, of the errors. However, the study did not evaluate the accuracy of verification decisions, and exclusion and inconclusive decisions do not require verification, so false negative errors would be detected only if the laboratory elected to verify exclusion and inconclusive decisions. Examiner skill was also investigated, but no relationship was found between skill level and error rate.
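
The reported false positive rate follows directly from the counts given above (6 erroneous individualizations among 4,083 individualization decisions); the brief Python calculation below simply reproduces that arithmetic. The counts underlying the 7.5% false negative rate are not given in this review, so that figure is not recomputed here.

    # False positive rate from the counts reported in the study:
    # 6 erroneous individualizations out of 4,083 individualization decisions.
    false_positives = 6
    individualization_decisions = 4083

    false_positive_rate = false_positives / individualization_decisions
    print(f"False positive rate: {false_positive_rate:.1%}")  # -> 0.1%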

Scientific Highlights:

  • Print comparison decisions are made and verified by the ACE-V method.
  • A false positive error rate of 0.1% was observed in this study.
  • A false negative error rate of 7.5% was observed in this study.
  • Examiner skill level was not found to have a significant relationship to the occurrence of error.

Relevance: Understanding the frequency of error in fingerprint examination will help the forensic science and legal communities to understand the accuracy and reliability of latent fingerprint evidence.

Potential conclusions:

  • Incorrect exclusion or inconclusive decisions in latent fingerprint examinations are more common than incorrect individualization.
  • Incorrect individualization is infrequent, but may occur.
  • Blind verification of all latent fingerprint decisions can minimize error.
  • Individualization decisions in latent fingerprint examinations are more reliable than exclusion or inconclusive decisions.