Date of Award

5-2022

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Engineering and Sciences

First Advisor

Michael C. King

Second Advisor

Kevin Bowyer

Third Advisor

Vanessa A. Edkins

Fourth Advisor

Philip J. Bernhard

Abstract

A number of recent research studies have shown that face recognition accuracy is meaningfully worse for females than for males. Gender classification algorithms likewise perform worse for females: one commercial classifier gives a 7% error rate for African-American females versus 0.5% for Caucasian males. In response to these observations, we consider one primary question: do errors in gender classification lead to errors in face recognition? We approach this question by focusing on two main areas: (1) do gender-misclassified images generate higher similarity scores with different individuals from the false-gender category than from their true-gender category? (2) What is the impact of gender-misclassified images on the recognition accuracy of the system? We find that (1) for all demographic groups except African-American males, non-mated pairs of subjects with at least one gender-misclassified image have a higher False Match Rate (FMR) with their ground-truth gender group than with their erroneously projected gender group; (2) similarly, on average and across demographic groups, gender-misclassified subjects still have higher similarity scores with subjects of their true gender than with those of the falsely classified gender; and (3) there was no significant impact on 1-to-N identification accuracy when using the open-source algorithm ArcFace, whereas the commercial matcher shows a decline in accuracy for misclassified images. To our knowledge, this is the first work to analyze match scores for gender-misclassified images against both the false-gender and true-gender categories and to extend the analysis to an identification setting.
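As a minimal illustration of the FMR comparison described in the abstract (not the thesis's actual evaluation code), the Python sketch below computes a False Match Rate from non-mated similarity scores at a fixed decision threshold. The score values and the threshold here are hypothetical and for illustration only.

    import numpy as np

    def false_match_rate(nonmated_scores, threshold):
        """Fraction of non-mated comparison scores at or above the
        match threshold, i.e. the FMR at that operating point."""
        scores = np.asarray(nonmated_scores, dtype=float)
        return float(np.mean(scores >= threshold))

    # Hypothetical non-mated scores for one gender-misclassified subject,
    # compared against the true-gender group and the falsely projected group.
    true_gender_scores = [0.21, 0.35, 0.48, 0.52, 0.19]   # illustrative values
    false_gender_scores = [0.11, 0.25, 0.31, 0.40, 0.09]  # illustrative values
    threshold = 0.40  # hypothetical match threshold

    print(false_match_rate(true_gender_scores, threshold))   # 0.4
    print(false_match_rate(false_gender_scores, threshold))  # 0.2

In this toy example the subject produces a higher FMR against the true-gender group (0.4) than against the false-gender group (0.2), which mirrors the direction of the finding reported for most demographic groups.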

Comments

Copyright held by author
