Abstract
Face biometrics has been gaining traction in the biometrics research community in recent decades due to its many advantages: an image can be captured covertly, at a distance, and in in-the-wild scenarios, and no contact with the sensor is needed, to name a few. Since the inception of convolutional
neural networks, many face recognition (FR) models and their related components have been proposed.
This is possible because of the availability of many large-scale visible band face datasets. However, models
based on visible band data are not reliable in poorly illuminated areas. To address this problem, this
dissertation proposes a synthesis-based approach where visible band face images are synthesized from their
thermal counterparts to facilitate the use of existing state-of-the-art visible band-based FR models.
The contributions of this work are three-fold. First, the MILAB-VTF(B) dataset, to the best of our knowledge the largest dataset of its kind to date, has been collected and annotated at UGA by our team to advance the field of cross-spectral FR.
Second, one of the essential pre-processing steps for many FR systems is geometric normalization, which
involves rotating the face in an image so that the line joining the geometric eye centers is horizontal. For
the majority of face matching algorithms, detecting the eye centers is an important module of the face
normalization process. The proposed method improved eye center detection accuracy by 60% over
the baseline model, and by 14% over training the StarGAN2 model without the alignment loss.
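The rotation described above follows directly from the detected eye centers. As a minimal sketch (the exact alignment pipeline is not specified in this abstract, so the coordinate convention and helper name below are illustrative assumptions), the in-plane rotation angle can be computed from the two eye-center coordinates:

```python
import math

def alignment_angle(left_eye, right_eye):
    """Angle (in degrees) by which the image must be rotated so that
    the line joining the two eye centers becomes horizontal.
    Eye centers are (x, y) pixel coordinates; y grows downward."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    # atan2 is quadrant-aware, so upside-down or tilted faces are handled
    return math.degrees(math.atan2(dy, dx))

# Example: the right eye sits 10 px lower than the left eye
angle = alignment_angle((30, 40), (70, 50))  # ~14.04 degrees
```

In practice this angle would be fed to an affine rotation about the midpoint between the eyes (e.g., OpenCV's `cv2.getRotationMatrix2D` followed by `cv2.warpAffine`).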
Third, to address the problem of cross-spectral FR, involution, an operation that inverts the inherence of
convolution, is used as the atomic operation to implement a GAN model that includes a generator, a
discriminator, and a style encoder module. Additionally, an identity loss is introduced to preserve the
distinguishing characteristics of individual subjects, which improved the face verification area under the curve
(AUC) by around 4% over the current benchmark and reduced the equal error rate (EER) by 7%. On the
more challenging MILAB-VTF(B), the AUC is increased by around 16% and 10% for the indoor and
outdoor data, respectively.
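An identity loss of this kind is commonly defined as a distance between face embeddings of the real visible image and its synthesized counterpart, both produced by a fixed face-embedding network. The abstract does not give the exact formulation, so the cosine-distance variant below is an illustrative assumption, written with plain lists standing in for embedding vectors:

```python
import math

def cosine_identity_loss(real_emb, fake_emb):
    """1 - cosine similarity between the embedding of the real visible
    face and the embedding of the GAN-synthesized face. A loss of 0
    means the embeddings (and hence the perceived identity) agree."""
    dot = sum(a * b for a, b in zip(real_emb, fake_emb))
    norm = (math.sqrt(sum(a * a for a in real_emb))
            * math.sqrt(sum(b * b for b in fake_emb)))
    return 1.0 - dot / norm

# Identical embeddings incur zero loss; orthogonal ones incur a loss of 1
loss_same = cosine_identity_loss([0.2, 0.5, 0.8], [0.2, 0.5, 0.8])
```

During training this term would be weighted and added to the usual adversarial and style-reconstruction objectives, penalizing the generator whenever synthesis drifts away from the subject's identity.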