# Tools implemented in bob.bio.face

## Summary

### Databases

#### Bob databases (used for biometric testing)

This package provides access APIs and protocol descriptions for the following databases:

• AR face database: access API and certified protocols.
• Casia-Face-Africa: composed of 1133 identities from different ethnic groups in Nigeria.
• MOBIO: a video database containing bimodal data (face/speaker).
• IARPA Janus Benchmark C (IJB-C): access API and protocol descriptions.
• Replay-Mobile: database interface that loads a CSV definition.
• GBU (Good, Bad and Ugly): consists of parts of the MBGC-V1 image set.
• Labeled Faces in the Wild (LFW): access API and protocol descriptions.
• CMU Multi-PIE: contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months.
• MEDS II: developed by NIST to support and assist their biometrics evaluation program.
• MORPH: relatively old, but recently regaining traction, mostly because of its richness with respect to sensitive attributes.
• Polarimetric Thermal Database: collected by the U.S. Army; contains VIS and thermal face images.
• CASIA NIR-VIS 2.0: access API and protocol descriptions.
• Surveillance Camera Face dataset.
• CAS-PEAL: several tens of thousands of images of Chinese people (CAS = Chinese Academy of Sciences).

#### Pytorch databases (used on pytorch)

• bob.bio.face.pytorch.datasets.WebFace42M([...]): Pytorch dataset for the WebFace42M dataset.
• bob.bio.face.pytorch.datasets.MedsTorchDataset(...): MEDS torch interface.
• bob.bio.face.pytorch.datasets.MorphTorchDataset(...): MORPH torch interface.
• bob.bio.face.pytorch.datasets.MobioTorchDataset(...)
• bob.bio.face.pytorch.datasets.MSCelebTorchDataset(...): This interface makes use of a CSV file containing gender and race annotations.
• bob.bio.face.pytorch.datasets.VGG2TorchDataset(...): VGG2 for torch.

### Deep Learning Extractors

#### PyTorch models

• bob.bio.face.embeddings.pytorch.afffe_baseline(...): Get the AFFFE pipeline, which crops the face to $$224 \times 224$$ and uses AFFFE_2021 to extract the features.
• bob.bio.face.embeddings.pytorch.iresnet34(...): Get the Resnet34 pipeline, which crops the face to $$112 \times 112$$ and uses IResnet34 to extract the features.
• bob.bio.face.embeddings.pytorch.iresnet50(...): Get the Resnet50 pipeline, which crops the face to $$112 \times 112$$ and uses IResnet50 to extract the features.
• bob.bio.face.embeddings.pytorch.iresnet100(...): Get the Resnet100 pipeline, which crops the face to $$112 \times 112$$ and uses IResnet100 to extract the features.
• bob.bio.face.embeddings.pytorch.GhostNet(...): Get the GhostNet pipeline, which crops the face to $$112 \times 112$$ and uses GhostNet to extract the features.
• bob.bio.face.embeddings.pytorch.ReXNet(...): Get the ReXNet pipeline, which crops the face to $$112 \times 112$$ and uses ReXNet to extract the features.
• bob.bio.face.embeddings.pytorch.HRNet(...[, ...]): Get the HRNet pipeline, which crops the face to $$112 \times 112$$ and uses HRNet to extract the features.
• bob.bio.face.embeddings.pytorch.TF_NAS(...): Get the TF_NAS pipeline, which crops the face to $$112 \times 112$$ and uses TF-NAS to extract the features.
• bob.bio.face.embeddings.pytorch.ResNet(...): Get the ResNet pipeline, which crops the face to $$112 \times 112$$ and uses ResNet to extract the features.
• bob.bio.face.embeddings.pytorch.EfficientNet(...): Get the EfficientNet pipeline, which crops the face to $$112 \times 112$$ and uses EfficientNet to extract the features.
• bob.bio.face.embeddings.pytorch.MobileFaceNet(...): Get the MobileFaceNet pipeline, which crops the face to $$112 \times 112$$ and uses MobileFaceNet to extract the features.
• bob.bio.face.embeddings.pytorch.ResNeSt(...): Get the ResNeSt pipeline, which crops the face to $$112 \times 112$$ and uses ResNeSt to extract the features.
• bob.bio.face.embeddings.pytorch.AttentionNet(...): Get the AttentionNet pipeline, which crops the face to $$112 \times 112$$ and uses AttentionNet to extract the features.
• bob.bio.face.embeddings.pytorch.RunnableModel(model): Runnable pytorch model.
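All of these extractors follow the same shape: crop the face to a fixed size, run the network, and compare the resulting embeddings, typically with a cosine similarity. A minimal sketch of that comparison step in plain Python, with hypothetical stand-ins for the cropper and the network (not this package's API):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def crop_face(image, size=(112, 112)):
    # A real pipeline aligns on the annotated eye positions;
    # this stand-in pretends the image is already aligned.
    return image

def extract_embedding(image):
    # A real extractor runs one of the CNNs listed above;
    # this stand-in just flattens the image into a vector.
    return [p for row in image for p in row]

reference = extract_embedding(crop_face([[1.0, 2.0], [3.0, 4.0]]))
probe = extract_embedding(crop_face([[1.0, 2.0], [3.0, 4.1]]))
score = cosine_similarity(reference, probe)  # close to 1.0 for similar inputs
```

Only the scoring logic above carries over to the real pipelines; the cropper and extractor are placeholders.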

#### Tensorflow models

• bob.bio.face.embeddings.tensorflow.facenet_sanderberg_20170512_110547(...): Get the Facenet pipeline, which crops the face to $$160 \times 160$$ and uses FaceNetSanderberg_20170512_110547 to extract the features.
• bob.bio.face.embeddings.tensorflow.resnet50_msceleb_arcface_2021(...): Get the Resnet50 pipeline, which crops the face to $$112 \times 112$$ and uses Resnet50_MsCeleb_ArcFace_2021 to extract the features.
• bob.bio.face.embeddings.tensorflow.resnet50_msceleb_arcface_20210521(...): Get the Resnet50 pipeline, which crops the face to $$112 \times 112$$ and uses Resnet50_MsCeleb_ArcFace_20210521 to extract the features.
• bob.bio.face.embeddings.tensorflow.resnet50_vgg2_arcface_2021(...): Get the Resnet50 pipeline, which crops the face to $$112 \times 112$$ and uses Resnet50_VGG2_ArcFace_2021 to extract the features.
• bob.bio.face.embeddings.tensorflow.mobilenetv2_msceleb_arcface_2021(...): Get the MobileNet pipeline, which crops the face to $$112 \times 112$$ and uses MobileNetv2_MsCeleb_ArcFace_2021 to extract the features.
• bob.bio.face.embeddings.tensorflow.inception_resnet_v1_msceleb_centerloss_2018(...): Get the Inception Resnet v1 pipeline, which crops the face to $$160 \times 160$$ and uses InceptionResnetv1_MsCeleb_CenterLoss_2018 to extract the features.
• bob.bio.face.embeddings.tensorflow.inception_resnet_v2_msceleb_centerloss_2018(...): Get the Inception Resnet v2 pipeline, which crops the face to $$160 \times 160$$ and uses InceptionResnetv2_MsCeleb_CenterLoss_2018 to extract the features.
• bob.bio.face.embeddings.tensorflow.inception_resnet_v1_casia_centerloss_2018(...): Get the Inception Resnet v1 pipeline, which crops the face to $$160 \times 160$$ and uses InceptionResnetv1_Casia_CenterLoss_2018 to extract the features.
• bob.bio.face.embeddings.tensorflow.inception_resnet_v2_casia_centerloss_2018(...): Get the Inception Resnet v2 pipeline, which crops the face to $$160 \times 160$$ and uses InceptionResnetv2_Casia_CenterLoss_2018 to extract the features.

#### MxNET models

 bob.bio.face.embeddings.mxnet.arcface_insightFace_lresnet100(...)

#### Caffe models

• bob.bio.face.embeddings.opencv.vgg16_oxford_baseline(...): Get the VGG16 pipeline, which crops the face to $$224 \times 224$$ and uses VGG16_Oxford to extract the features.

### Face Image Annotators

• Base class for all face annotators.
• bob.bio.face.annotator.MTCNN([min_size, ...]): MTCNN v1 wrapper for Tensorflow 2.
• bob.bio.face.annotator.TinyFace([prob_thresh]): TinyFace face detector.
• Face detector taken from https://github.com/JDAI-CV/FaceX-Zoo
• Landmark detector taken from https://github.com/JDAI-CV/FaceX-Zoo

### Annotation Tools

• A bounding box class storing top, left, height and width of a rectangle.
• Creates a bounding box from the given parameters, which are, in general, annotations read using bob.bio.base.utils.annotations.read_annotation_file().
• Converts a BoundingBox to dictionary annotations.
• Computes the expected eye positions based on the relative coordinates of the bounding box.
• Validates annotations based on the face's minimal size.
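The expected-eye-positions computation maps eye coordinates given relative to the bounding box into absolute image coordinates. A minimal sketch; the relative eye fractions, the (y, x) ordering, and the annotation keys are illustrative assumptions, not the library's defaults:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # top-left corner plus size, as in the annotation tools above
    top: float
    left: float
    height: float
    width: float

    def expected_eye_positions(self, right_eye=(0.25, 0.3), left_eye=(0.75, 0.3)):
        """Eye positions from coordinates relative to the box:
        (fraction of width, fraction of height)."""
        def absolute(rel):
            x_rel, y_rel = rel
            return (self.top + y_rel * self.height,
                    self.left + x_rel * self.width)
        return {"reye": absolute(right_eye), "leye": absolute(left_eye)}

bbox = BoundingBox(top=10, left=20, height=100, width=80)
annotations = bbox.expected_eye_positions()
# annotations["reye"] == (40.0, 40.0); annotations["leye"] == (40.0, 80.0)
```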

### Image Preprocessors

• bob.bio.face.preprocessor.Base([dtype, ...]): Performs color space adaptations and data type corrections for the given image.
• bob.bio.face.preprocessor.FaceCrop(...[, ...]): Crops the face according to the given annotations.
• Wraps around FaceCrop to enable a dynamic cropper that can handle several annotation types.
• This face cropper uses a two-stage strategy to crop and align faces when annotation_type has a bounding box.
• bob.bio.face.preprocessor.TanTriggs(face_cropper): Crops the face (if desired) and applies the Tan & Triggs algorithm [TT10] to photometrically enhance the image.
• Crops the face (if desired) and performs histogram equalization to photometrically enhance the image.
• bob.bio.face.preprocessor.INormLBP(face_cropper): Performs I-Norm LBP on the given image.
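Histogram equalization, used by the preprocessor above, remaps gray levels through the normalized cumulative histogram so a low-contrast face image spreads over the full intensity range. A minimal sketch on a tiny 8-bit image given as a list of rows (not the library's implementation):

```python
def equalize_histogram(image, levels=256):
    """Minimal histogram equalization for an 8-bit grayscale image."""
    pixels = [p for row in image for p in row]
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution of gray levels
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    def remap(p):
        # normalize the CDF to the full gray-level range
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A low-contrast image is stretched over the full range:
flat = [[100, 101], [102, 103]]
print(equalize_histogram(flat))  # [[0, 85], [170, 255]]
```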

## Databases

class bob.bio.face.database.ARFaceDatabase(protocol, annotation_type='eyes-center', fixed_positions=None)

This package contains the access API and descriptions for the AR face database. It only contains the Bob accessor methods to use the DB directly from python, with our certified protocols. The actual raw data for the database should be downloaded from the original URL (though we were not able to contact the corresponding Professor).

Our version of the AR face database contains 3312 images from 136 persons, 76 men and 60 women. We split the database into several protocols that we have designed ourselves. The identities are split up into three groups:

• the ‘world’ group for training your algorithm

• the ‘dev’ group to optimize your algorithm parameters on

• the ‘eval’ group that should only be used to report results

Furthermore, each protocol defines which files are used as probes:

• 'expression': only the probe files with different facial expressions are selected

• 'illumination': only the probe files with different illuminations are selected

• 'occlusion': only the probe files with normal illumination and different accessories (scarf, sunglasses) are selected

• 'occlusion_and_illumination': only the probe files with strong illumination and different accessories (scarf, sunglasses) are selected

• 'all': all files are used as probe

In any case, the images with neutral facial expression, neutral illumination and without accessories are used for enrollment.
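The protocol logic above amounts to filtering the probe files on their recorded conditions, while enrollment always keeps the neutral files. A sketch with hypothetical sample metadata (the attribute names and paths are illustrative, not the database's schema):

```python
# Hypothetical per-file metadata; the real database stores similar attributes.
samples = [
    {"path": "m-001-01", "expression": "neutral", "illumination": "neutral", "occlusion": "none"},
    {"path": "m-001-02", "expression": "smile",   "illumination": "neutral", "occlusion": "none"},
    {"path": "m-001-05", "expression": "neutral", "illumination": "left",    "occlusion": "none"},
    {"path": "m-001-08", "expression": "neutral", "illumination": "neutral", "occlusion": "scarf"},
]

def enroll_files(samples):
    # neutral expression, neutral illumination, no accessories
    return [s for s in samples
            if s["expression"] == "neutral"
            and s["illumination"] == "neutral"
            and s["occlusion"] == "none"]

def probe_files(samples, protocol):
    if protocol == "expression":
        return [s for s in samples if s["expression"] != "neutral"]
    if protocol == "illumination":
        return [s for s in samples if s["illumination"] != "neutral"]
    if protocol == "occlusion":
        return [s for s in samples
                if s["occlusion"] != "none" and s["illumination"] == "neutral"]
    return list(samples)  # 'all'

print([s["path"] for s in probe_files(samples, "occlusion")])  # ['m-001-08']
```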

Warning

To use this dataset protocol, you need to have the original files of the AR face dataset. Once you have downloaded it, please run the following command to set the path for Bob

bob config set bob.bio.face.arface.directory [ARFACE PATH]

@article{martinez1998ar,
title={The AR Face Database: CVC Technical Report, 24},
author={Martinez, Aleix and Benavente, Robert},
year={1998}
}

static protocols()[source]
static urls()[source]
class bob.bio.face.database.CBSRNirVis2Database(protocol, annotation_type='eyes-center', fixed_positions=None)

This package contains the access API and descriptions for the CASIA NIR-VIS 2.0 Database <http://www.cbsr.ia.ac.cn/english/NIR-VIS-2.0-Database.html>. The actual raw data for the database should be downloaded from the original URL. This package only contains the Bob accessor methods to use the DB directly from python, with the original protocol of the database.

CASIA NIR-VIS 2.0 database offers pairs of mugshot images and their correspondent NIR photos. The images of this database were collected in four recording sessions: 2007 spring, 2009 summer, 2009 fall and 2010 summer, in which the first session is identical to the CASIA HFB database. It consists of 725 subjects in total. There are [1-22] VIS and [5-50] NIR face images per subject. The eyes positions are also distributed with the images.

@inproceedings{li2013casia,
title={The casia nir-vis 2.0 face database},
author={Li, Stan Z and Yi, Dong and Lei, Zhen and Liao, Shengcai},
booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2013 IEEE Conference on},
pages={348--353},
year={2013},
organization={IEEE}
}


Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.cbsr-nir-vis-2.directory [PATH-TO-CBSR-DATA]

Parameters

protocol (str) – One of the database protocols.

static protocols()[source]
static urls()[source]
class bob.bio.face.database.CasiaAfricaDatabase(protocol, annotation_type='eyes-center', fixed_positions=None)

The Casia-Face-Africa dataset is composed of 1133 identities from different ethnic groups in Nigeria. The capturing locations are:

• Dabai city in Katsina state

• Hotoro in Kano state

• Birget in Kano state

• Gandun Albasa in Kano state

• Sabon Gari in Kano state

• Kano State School of Technology

These locations were strategically selected as they are known to have a diverse population of local ethnicities.

Warning

Only 17 subjects had their images captured in two sessions.

Images were captured during daytime and at night using three different cameras:

• C1: Visual Light Camera

• C2: Visual Light Camera

• C3: NIR camera

This dataset interface implements three verification protocols, “ID-V-All-Ep1”, “ID-V-All-Ep2” and “ID-V-All-Ep3”, organized as follows (Dev. Set):

| Protocol name | Cameras (gallery/probe) | Identities | Gallery | Probes |
|---|---|---|---|---|
| ID-V-All-Ep1 | C1/C2 | 1133 | 2455 | 2426 |
| ID-V-All-Ep2 | C1/C3 | 1133 | 2455 | 1171 |
| ID-V-All-Ep3 | C2/C3 | 1133 | 2466 | 1193 |

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.casia-africa.directory [PATH-TO-CASIA-AFRICA-DATA]

@article{jawad2020,
author = {Jawad, Muhammad and Yunlong, Wang and Caiyong, Wang and Kunbo, Zhang and Zhenan, Sun},
title = {CASIA-Face-Africa: A Large-scale African Face Image Database},
journal = {IEEE Transactions on Information Forensics and Security},
pages = {},
ISSN = {},
year = {},
type = {Journal Article}
}


Example

Fetching biometric references:

>>> from bob.bio.face.database import CasiaAfricaDatabase
>>> database = CasiaAfricaDatabase(protocol="ID-V-All-Ep1")
>>> database.references()


Fetching probes:

>>> from bob.bio.face.database import CasiaAfricaDatabase
>>> database = CasiaAfricaDatabase(protocol="ID-V-All-Ep1")
>>> database.probes()

Parameters

protocol (str) – One of the database protocols. Options are “ID-V-All-Ep1”, “ID-V-All-Ep2” and “ID-V-All-Ep3”

static protocols()[source]
static urls()[source]
class bob.bio.face.database.CaspealDatabase(protocol, annotation_type='eyes-center', fixed_positions=None)

The CAS-PEAL database consists of several tens of thousands of images of Chinese people (CAS = Chinese Academy of Sciences). Overall, there are 1040 identities contained in the database. For these identities, images with different Pose, Expression, Aging and Lighting (PEAL) conditions, as well as accessories, image backgrounds and camera distances are provided.

Included in the database, there are file lists defining identification experiments. All the experiments rely on a gallery that consists of the frontal and frontally illuminated images with neutral expression and no accessories. For each of the variations, probe sets including exactly that variation are available.

The training set consists of a subset of the frontal images (some images are both in the training and in the development set). This also means that there is no training set defined for the pose images. Additionally, the database defines only a development set, but no evaluation set.

This package only contains the Bob accessor methods to use the DB directly from python, with our certified protocols. We have implemented the default face identification protocols 'accessory', 'aging', 'background', 'distance', 'expression' and 'lighting'. We do not provide the 'pose' protocol (yet) since the training set of the CAS-PEAL database does not contain pose images:

@article{gao2007cas,
title={The CAS-PEAL large-scale Chinese face database and baseline evaluations},
author={Gao, Wen and Cao, Bo and Shan, Shiguang and Chen, Xilin and Zhou, Delong and Zhang, Xiaohua and Zhao, Debin},
journal={IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans},
volume={38},
number={1},
pages={149--161},
year={2007},
publisher={IEEE}
}

static protocols()[source]
static urls()[source]
class bob.bio.face.database.FRGCDatabase(protocol, annotation_type='eyes-center', fixed_positions=None)

Face Recognition Grand Challenge (FRGC) dataset

static protocols()[source]
static urls()[source]
class bob.bio.face.database.FaceBioFile(client_id, path, file_id, **kwargs)
class bob.bio.face.database.GBUDatabase(protocol, annotation_type='eyes-center', fixed_positions=None, original_directory=None, extension='.jpg')

The GBU (Good, Bad and Ugly) database consists of parts of the MBGC-V1 image set. It defines three protocols, i.e., Good, Bad and Ugly for which different model and probe images are used.

Warning

To use this dataset protocol, you need to have the original files of the GBU dataset. Once you have downloaded it, please run the following command to set the path for Bob

bob config set bob.bio.face.gbu.directory [GBU PATH]


The code below allows you to fetch the gallery and probes of the “Good” protocol.

>>> from bob.bio.face.database import GBUDatabase
>>> gbu = GBUDatabase(protocol="Good")
>>>
>>> # Fetching the gallery
>>> references = gbu.references()
>>> # Fetching the probes
>>> probes = gbu.probes()

all_samples(group='dev')[source]

Returns all the samples of the dataset

Parameters

groups (list or None) – List of groups to consider (like ‘dev’ or ‘eval’). If None, will return samples from all the groups.

Returns

samples – List of all the samples of the dataset.

Return type

list

background_model_samples()[source]

Returns bob.pipelines.Sample’s to train a background model

Returns

samples – List of samples for background model training.

Return type

list

groups()[source]
probes(group='dev')[source]

Returns probes to score biometric references

Parameters

group (str) – Limits samples to this group

Returns

probes – List of samples for the creation of biometric probes.

Return type

list

static protocols()[source]
references(group='dev')[source]

Returns references to enroll biometric references

Parameters

group (str, optional) – Limits samples to this group

Returns

references – List of samples for the creation of biometric references.

Return type

list

static urls()[source]
class bob.bio.face.database.IJBCDatabase(protocol, original_directory=None, **kwargs)

This package contains the access API and descriptions for the IARPA Janus Benchmark C – IJB-C database. The actual raw data can be downloaded from the original web page: http://www.nist.gov/programs-projects/face-challenges (note that not everyone might be eligible for downloading the data).

Included in the database, there are list files defining verification as well as closed- and open-set identification protocols. For verification, two different protocols are provided. For the 1:1 protocol, gallery and probe templates are combined using several images and video frames for each subject. Compared gallery and probe templates share the same gender and skin tone – these have been matched to make the comparisons more realistic and difficult.

For closed-set identification, the gallery of the 1:1 protocol is used, while probes stem from either only images, mixed images and video frames, or plain videos. For open-set identification, the same probes are evaluated, but the gallery is split into two parts, either of which is left out to provide unknown probe templates, i.e., probe templates with no matching subject in the gallery. In any case, scores are computed between all (active) gallery templates and all probes.
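Open-set identification, as described above, means a probe may have no matching subject in the gallery, so a decision rule must combine the best gallery score with a threshold. A minimal sketch of such a rule (illustrative, not this package's API):

```python
def open_set_identify(probe_scores, threshold):
    """probe_scores: mapping of gallery template id -> score for one probe.
    Returns the best-matching id, or None when every score falls below
    the threshold (the probe is treated as an unknown subject)."""
    best_id = max(probe_scores, key=probe_scores.get)
    if probe_scores[best_id] < threshold:
        return None
    return best_id

print(open_set_identify({"s1": 0.2, "s2": 0.9}, threshold=0.5))  # s2
print(open_set_identify({"s1": 0.2, "s2": 0.3}, threshold=0.5))  # None
```

Closed-set identification is the special case where the threshold is effectively disabled and the best-scoring gallery template is always returned.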

The IJB-C dataset provides additional evaluation protocols for face detection and clustering, but these are (not yet) part of this interface.

Warning

To use this dataset protocol, you need to have the original files of the IJB-C dataset. Once you have downloaded it, please run the following command to set the path for Bob

bob config set bob.bio.face.ijbc.directory [IJBC PATH]


The code below allows you to fetch the gallery and probes of the “1:1” protocol.

>>> from bob.bio.face.database import IJBCDatabase
>>> ijbc = IJBCDatabase(protocol="test1")
>>>
>>> # Fetching the gallery
>>> references = ijbc.references()
>>> # Fetching the probes
>>> probes = ijbc.probes()

all_samples(group='dev')[source]

Returns all the samples of the dataset

Parameters

groups (list or None) – List of groups to consider (like ‘dev’ or ‘eval’). If None, will return samples from all the groups.

Returns

samples – List of all the samples of the dataset.

Return type

list

background_model_samples()[source]

Returns bob.pipelines.Sample’s to train a background model

Returns

samples – List of samples for background model training.

Return type

list

groups()[source]
probes(group='dev')[source]

Returns probes to score biometric references

Parameters

group (str) – Limits samples to this group

Returns

probes – List of samples for the creation of biometric probes.

Return type

list

protocols()[source]
references(group='dev')[source]

Returns references to enroll biometric references

Parameters

group (str, optional) – Limits samples to this group

Returns

references – List of samples for the creation of biometric references.

Return type

list

class bob.bio.face.database.LFWDatabase(protocol, annotation_type='eyes-center', image_relative_path='all_images', fixed_positions=None, original_directory=None, extension='.jpg', annotation_directory=None, annotation_issuer='funneled')

This package contains the access API and descriptions for the Labeled Faces in the Wild (LFW) database. It only contains the Bob accessor methods to use the DB directly from python, with our certified protocols. The actual raw data for the database should be downloaded from the original URL (though we were not able to contact the corresponding Professor).

The LFW database provides two different sets (called “views”). The first one, called view1 is used for optimizing meta-parameters of your algorithm. The second one, called view2 is used for benchmarking. This interface supports only the view2 protocol. Please note that in view2 there is only a 'dev' group, but no 'eval'.

Warning

To use this dataset protocol, you need to have the original files of the LFW dataset. Once you have downloaded it, please run the following commands to set the paths for Bob

bob config set bob.bio.face.lfw.directory [LFW PATH]
bob config set bob.bio.face.lfw.annotation_directory [LFW ANNOTATION_PATH] # for the annotations

>>> from bob.bio.face.database import LFWDatabase
>>> lfw = LFWDatabase(protocol="view2")
>>>
>>> # Fetching the gallery
>>> references = lfw.references()
>>> # Fetching the probes
>>> probes = lfw.probes()

Parameters
• protocol (str) – One of the database protocols. The only available option is view2

• annotation_type (str) – Type of the annotations used for face cropping. Defaults to eyes-center

• image_relative_path (str) – LFW provides several types of image crops, some with the full image, some with a specific face crop. Use this variable to set which image crop you want. Defaults to all_images, which means no crop.

• annotation_directory (str) – LFW annotations path. Defaults to what is set in the variable bob.bio.face.lfw.annotation_directory

• original_directory (str) – LFW physical path. Defaults to what is set in the variable bob.bio.face.lfw.directory

• annotation_issuer (str) – Type of the annotations. Defaults to funneled. Possible types are funneled, idiap and named

all_samples(group='dev')[source]

Returns all the samples of the dataset

Parameters

groups (list or None) – List of groups to consider (like ‘dev’ or ‘eval’). If None, will return samples from all the groups.

Returns

samples – List of all the samples of the dataset.

Return type

list

background_model_samples()[source]

This function returns the training set for the open-set protocols o1, o2 and o3. It returns the references() and the training samples with known unknowns, which get the subject id “unknown”.

Returns

The training samples, where each sampleset contains all images of one subject. Only the samples of the “unknown” subject are collected from several subjects.

Return type
groups()[source]
probes(group='dev')[source]

Returns probes to score biometric references

Parameters

group (str) – Limits samples to this group

Returns

probes – List of samples for the creation of biometric probes.

Return type

list

static protocols()[source]
references(group='dev')[source]

Returns references to enroll biometric references

Parameters

group (str, optional) – Limits samples to this group

Returns

references – List of samples for the creation of biometric references.

Return type

list

static urls()[source]
class bob.bio.face.database.MEDSDatabase(protocol, annotation_type='eyes-center', fixed_positions=None, dataset_original_directory='', dataset_original_extension='.jpg')

The MEDS II database was developed by NIST to support and assist their biometrics evaluation program. It is composed of 518 identities of both men and women (labeled as M and F) and five different race annotations: Asian, Black, American Indian, Unknown and White (labeled as A, B, I, U and W).

Unfortunately, the distribution of gender and race is extremely unbalanced, as can be observed in its statistics. Furthermore, only 256 subjects have more than one image sample (and it is obviously not possible to do a biometric evaluation with one sample per subject). For this reason, this interface contains a subset of the data, composed of only 383 subjects (White and Black men only).

This dataset contains three verification protocols: verification_fold1, verification_fold2 and verification_fold3. The identity distribution in each set, for each protocol, is shown below:

| Protocol | Training set (T-References) | Training set (Z-Probes) | Dev. Set | Eval. Set |
|---|---|---|---|---|
| verification_fold1 | 80 | 80 | 111 | 112 |
| verification_fold2 | 80 | 80 | 111 | 112 |
| verification_fold3 | 80 | 80 | 111 | 112 |

Example

Fetching biometric references:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.references()


Fetching probes:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.probes()


Fetching references for T-Norm normalization:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.treferences()


Fetching probes for Z-Norm normalization:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.zprobes()
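The zprobes() cohort supports Z-norm score normalization: a raw score between a biometric reference and a probe is standardized against the scores of that same reference compared to the Z-probe cohort. A minimal sketch of the normalization step itself (not the bob.bio implementation):

```python
from statistics import mean, stdev

def znorm(raw_score, cohort_scores):
    """Z-normalize a raw score against the distribution of scores
    obtained by comparing the same reference to the Z-probe cohort."""
    return (raw_score - mean(cohort_scores)) / stdev(cohort_scores)

# Hypothetical cohort scores for one reference:
cohort = [0.10, 0.20, 0.15, 0.25]
print(znorm(0.60, cohort))  # a genuine score stands far above the cohort
```

T-norm works symmetrically, using the scores of a probe against the treferences() cohort.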


Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.meds.directory [PATH-TO-MEDS-DATA]

Parameters

protocol (str) – One of the database protocols. Options are verification_fold1, verification_fold2 and verification_fold3

static urls()[source]
class bob.bio.face.database.MobioDatabase(protocol, annotation_type='eyes-center', fixed_positions=None, dataset_original_directory='', dataset_original_extension='.png')

The MOBIO dataset is a video database containing bimodal data (face/speaker). It is composed of 152 people (split into the two genders male and female), mostly Europeans, recorded in 5 sessions (with a few weeks' time lapse between sessions). The database was recorded using two types of mobile devices: mobile phones (NOKIA N93i) and laptop computers (standard 2008 MacBook).

For face recognition, images are used instead of videos. One image was extracted from each video by choosing the video frame after 10 seconds. The eye positions were manually labelled and distributed with the database.

Warning

To use this dataset protocol, you need to have the original files of the Mobio dataset. Once you have downloaded it, please run the following command to set the path for Bob

bob config set bob.db.mobio.directory [MOBIO PATH]

For more information check:

@article{McCool_IET_BMT_2013,
title = {Session variability modelling for face authentication},
author = {McCool, Chris and Wallace, Roy and McLaren, Mitchell and El Shafey, Laurent and Marcel, S{\'{e}}bastien},
month = sep,
journal = {IET Biometrics},
volume = {2},
number = {3},
year = {2013},
pages = {117-129},
issn = {2047-4938},
doi = {10.1049/iet-bmt.2012.0059},
}

static protocols()[source]
static urls()[source]
class bob.bio.face.database.MorphDatabase(protocol, annotation_type='eyes-center', fixed_positions=None, dataset_original_directory='', dataset_original_extension='.JPG')

The MORPH dataset is relatively old, but has recently been getting renewed attention, mostly because of its richness with respect to sensitive attributes. It is composed of 55,000 samples from 13,000 subjects, men and women, grouped into five race clusters (called ancestry): African, European, Asian, Hispanic and Others. Figure 8 presents some samples from this database. This dataset contains faces from five ethnicities (African, European, Asian, Hispanic, “Other”) and two genders (Male and Female).

Furthermore, this interface contains three verification protocols: verification_fold1, verification_fold2 and verification_fold3. The identity distribution in each set, for each protocol, is shown below:

| Protocol | Training set (T-References) | Training set (Z-Probes) | Dev. Set | Eval. Set |
|---|---|---|---|---|
| verification_fold1 | 69 | 66 | 6738 | 6742 |
| verification_fold2 | 69 | 67 | 6734 | 6737 |
| verification_fold3 | 70 | 66 | 6736 | 6740 |

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.morph.directory [PATH-TO-MORPH-DATA]

Parameters

protocol (str) – One of the database protocols. Options are verification_fold1, verification_fold2 and verification_fold3

static urls()[source]
class bob.bio.face.database.MultipieDatabase(protocol, annotation_type='eyes-center', fixed_positions=None)

The CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months. Subjects were imaged under 15 view points and 19 illumination conditions while displaying a range of facial expressions. In addition, high resolution frontal images were acquired as well. In total, the database contains more than 305 GB of face data.

The data has been recorded over 4 sessions. For each session, the subjects were asked to display a few different expressions. For each of those expressions, a complete set of 300 pictures is captured, covering 15 different view points times 20 different illumination conditions (18 with various flashes, plus 2 pictures with no flash at all).

Warning

To use this dataset protocol, you need to have the original files of the Multipie dataset. Once you have downloaded it, please run the following command to set the path for Bob

bob config set bob.db.multipie.directory [MULTIPIE PATH]


Available expressions:

• Session 1 : neutral, smile

• Session 2 : neutral, surprise, squint

• Session 3 : neutral, smile, disgust

• Session 4 : neutral, neutral, scream.

Camera and flash positioning:

The different view points are obtained by a set of 13 cameras located at head height, spaced at 15° intervals, from the -90° to the 90° angle, plus 2 additional cameras located above the subject to simulate a typical surveillance view. A flash coincides with each camera, and 3 additional flashes are positioned above the subject, for a total of 18 different possible flashes.
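As a quick sanity check on the geometry described above, the 13 head-height camera angles can be enumerated directly:

```python
# 13 head-height cameras spaced at 15 degree intervals, from -90 to +90
angles = list(range(-90, 91, 15))
print(len(angles), angles)  # 13 cameras
```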

Protocols:

Expression protocol

Protocol E

• Only frontal view (camera 05_1); only no-flash (shot 0)

• Enrolled : 1x neutral expression (session 1; recording 1)

• Probes : 4x neutral expression + other expressions (session 2, 3, 4; all recordings)

Pose protocol

Protocol P

• Only neutral expression (recording 1 from each session, + recording 2 from session 4); only no-flash (shot 0)

• Enrolled : 1x frontal view (session 1; camera 05_1)

• Probes : all views from cameras at head height (i.e. excluding 08_1 and 19_1), including camera 05_1 from sessions 2, 3, 4.

Illumination protocols

N.B : shot 19 is never used in those protocols as it is redundant with shot 0 (both are no-flash).

Protocol M

• Only frontal view (camera 05_1); only neutral expression (recording 1 from each session, + recording 2 from session 4)

• Enrolled : no-flash (session 1; shot 0)

• Probes : no-flash (session 2, 3, 4; shot 0)

Protocol U

• Only frontal view (camera 05_1); only neutral expression (recording 1 from each session, + recording 2 from session 4)

• Enrolled : no-flash (session 1; shot 0)

• Probes : all shots from session 2, 3, 4, including shot 0.

Protocol G

• Only frontal view (camera 05_1); only neutral expression (recording 1 from each session, + recording 2 from session 4)

• Enrolled : all shots (session 1; all shots)

• Probes : all shots from session 2, 3, 4.

static protocols()[source]
static urls()[source]
class bob.bio.face.database.PolaThermalDatabase(protocol, annotation_type='eyes-center', fixed_positions=None)

Collected by the U.S. Army, the Polarimetric Thermal Database contains VIS and thermal face images.

Below is a description of the imager used to capture this data.

The polarimetric LWIR imager used to collect this database was developed by Polaris Sensor Technologies. The imager is based on the division-of-time spinning achromatic retarder (SAR) design that uses a spinning phase-retarder mounted in series with a linear wire-grid polarizer. This system, also referred to as a polarimeter, has a spectral response range of 7.5–11.1 μm, using a Stirling-cooled mercury telluride focal plane array with pixel array dimensions of 640×480. A Fourier modulation technique is applied to the pixel readout, followed by a series expansion and inversion to compute the Stokes images. Data were recorded at 60 frames per second (fps) for this database, using a wide FOV of 10.6°×7.9°. Prior to collecting data for each subject, a two-point non-uniformity correction (NUC) was performed using a Mikron blackbody at 20°C and 40°C, which covers the range of typical facial temperatures (30°C-35°C). Data was recorded on a laptop using custom vendor software.

An array of four Basler Scout series cameras was used to collect the corresponding visible spectrum imagery. Two of the cameras are monochrome (model # scA640-70gm), with pixel array dimensions of 659×494. The other two cameras are color (model # scA640-70gc), with pixel array dimensions of 658×494.

The dataset contains 60 subjects in total. For VIS images (only those with an 87-pixel interpupillary distance are considered), there are 4 samples per subject with a neutral expression (called the baseline condition, B) and 12 samples per subject with varying facial expression (called expression, E). This variability was introduced by asking the subject to count out loud. In total there are 960 images for this modality. For the thermal images there are 4 types of thermal imagery based on the Stokes parameters ($$S_0$$, $$S_1$$, $$S_2$$ and $$S_3$$) commonly used to represent the polarization state. The thermal imagery is the following:

• $$S_0$$: The conventional thermal image

• $$S_1$$

• $$S_2$$

• DoLP: The degree of linear polarization (DoLP) describes the portion of an electromagnetic wave that is linearly polarized, defined as $$\mathrm{DoLP} = \frac{\sqrt{S_{1}^{2} + S_{2}^{2}}}{S_{0}}$$.

Since $$S_3$$ is very small and usually taken to be zero, the authors of the database decided not to provide this part of the data. The same facial expression variability introduced in VIS is introduced for Thermal images. The distance between the subject and the camera is the last source of variability introduced in the thermal images. There are 3 ranges: R1 (2.5m), R2 (5m) and R3 (7.5m). In total there are 11,520 images for this modality and for each subject they are split as the following:

| Imagery/Range | R1 (B/E) | R2 (B/E) | R3 (B/E) |
|---------------|----------|----------|----------|
| $$S_0$$       | 16 (8/8) | 16 (8/8) | 16 (8/8) |
| $$S_1$$       | 16 (8/8) | 16 (8/8) | 16 (8/8) |
| $$S_2$$       | 16 (8/8) | 16 (8/8) | 16 (8/8) |
| DoLP          | 16 (8/8) | 16 (8/8) | 16 (8/8) |
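The DoLP definition above is a direct pixel-wise computation on the Stokes images. A minimal numpy sketch, assuming the Stokes images are available as arrays (the small `eps` guard is added here to avoid division by zero and is not part of the formal definition):

```python
import numpy as np

def dolp(s0, s1, s2, eps=1e-12):
    """Degree of linear polarization from Stokes images S0, S1, S2."""
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)

# Toy 2x2 "images": pixels where S0 == sqrt(S1^2 + S2^2) are fully
# linearly polarized (DoLP == 1); the last pixel is unpolarized.
s0 = np.array([[1.0, 2.0], [5.0, 1.0]])
s1 = np.array([[1.0, 0.0], [3.0, 0.0]])
s2 = np.array([[0.0, 2.0], [4.0, 0.0]])
print(dolp(s0, s1, s2))
```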

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.pola-thermal.directory [PATH-TO-POLA-THERMAL-DATA]

Parameters

protocol (str) – One of the database protocols.

static protocols()[source]
static urls()[source]
class bob.bio.face.database.RFWDatabase(protocol, original_directory=None, **kwargs)

Dataset interface for the Racial faces in the wild dataset:

The RFW is a subset of the MS-Celeb-1M dataset and is composed of 44,332 images split into 11,416 identities. There are four “race” labels in this dataset (African, Asian, Caucasian, and Indian). Furthermore, with the help of https://query.wikidata.org/ we’ve added information about gender and country of birth.

We offer two evaluation protocols. The first one, called “original”, is the original protocol from its publication and contains ~24k comparisons in total. Worth noting, this evaluation protocol has an issue: it considers only comparisons between pairs of images of the same “race”. To close this gap, we’ve created a protocol called “idiap” that extends the original protocol to one where impostor (non-mated) comparisons are possible. This is closer to a real-world scenario.

Warning

The following identities are associated with two races in the original dataset:
• m.023915

• m.0z08d8y

• m.0bk56n

• m.04f4wpb

• m.0gc2xf9

• m.08dyjb

• m.05y2fd

• m.0gbz836

• m.01pw5d

• m.0cm83zb

• m.02qmpkk

• m.05xpnv

@inproceedings{wang2019racial,
title={Racial faces in the wild: Reducing racial bias by information maximization adaptation network},
author={Wang, Mei and Deng, Weihong and Hu, Jiani and Tao, Xunqiang and Huang, Yaohai},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={692--702},
year={2019}
}

all_samples(group='dev')[source]

Returns all the samples of the dataset

Parameters

group (str or None) – The group to consider (like ‘dev’ or ‘eval’). If None, will return samples from all the groups.

Returns

samples – List of all the samples of the dataset.

Return type

list

background_model_samples()[source]

Returns bob.pipelines.Sample’s to train a background model

Returns

samples – List of samples for background model training.

Return type

list

groups()[source]
probes(group='dev')[source]

Returns probes to score biometric references

Parameters

group (str) – Limits samples to this group

Returns

probes – List of samples for the creation of biometric probes.

Return type

list

protocols()[source]
references(group='dev')[source]

Returns references to enroll biometric references

Parameters

group (str, optional) – Limits samples to this group

Returns

references – List of samples for the creation of biometric references.

Return type

list

treferences(group='dev', proportion=1.0)[source]
static urls()[source]
zprobes(group='dev', proportion=1.0)[source]
class bob.bio.face.database.ReplayMobileBioDatabase(protocol='grandtest', protocol_definition_path=None, data_path=None, data_extension='.mov', annotations_path=None, annotations_extension='.json', **kwargs)

Database interface that loads a csv definition for replay-mobile

Looks for the protocol definition files (structure of CSV files). If not present, downloads them. Then sets the data and annotation paths from __init__ parameters or from the configuration (bob config command).

Parameters
• protocol (str) – The protocol to use. Must be a sub-folder of protocol_definition_path

• protocol_definition_path (str or None) – Specifies a path where to fetch the database definition from. (See bob.extension.download.get_file()) If None: Downloads the file in the path from bob_data_folder config. If None and the config does not exist: Downloads the file in ~/bob_data.

• data_path (str or None) – Overrides the config-defined data location. If None: uses the bob.db.replaymobile.directory config. If None and the config does not exist, set as cwd.

• annotations_path (str or None) – Specifies a path where the annotation files are located. If None: Downloads the files to the path pointed to by the bob.db.replaymobile.annotation_directory config. If None and the config does not exist: Downloads the file in ~/bob_data.

class bob.bio.face.database.SCFaceDatabase(protocol, annotation_type='eyes-center', fixed_positions=None)

Surveillance Camera Face dataset

SCface is a database of static images of human faces. Images were taken in an uncontrolled indoor environment using five video surveillance cameras of various qualities. The database contains 4,160 static images (in the visible and infrared spectrum) of 130 subjects. Images from cameras of different qualities mimic real-world conditions and enable robust testing of face recognition algorithms, emphasizing different law enforcement and surveillance use-case scenarios.

static protocols()[source]
static urls()[source]
class bob.bio.face.database.VGG2Database(protocol, dataset_original_directory='', dataset_original_extension='.jpg', annotation_type='eyes-center', fixed_positions=None)

The VGG2 Dataset is composed of 9131 people split into two sets. The training set contains 8631 identities, while the test set contains 500 identities.

As metadata, this dataset contains the gender labels “m” and “f” for, respectively, male and female. It also contains the following race labels:

• A: Asian in general (Chinese, Japanese, Filipino, Korean, Polynesian, Indonesian, Samoan, or any other Pacific Islander)

• B: A person having origins in any of the black racial groups of Africa

• I: American Indian, Asian Indian, Eskimo, or Alaskan native

• U: Of indeterminable race

• W: Caucasian, Mexican, Puerto Rican, Cuban, Central or South American, or other Spanish culture or origin, regardless of race

• N: None of the above

Race labels are taken from: MasterEBTSv10.0.809302017_Final.pdf.

This dataset also contains sets for T-Norm and Z-Norm normalization.

We provide four protocols: vgg2-short, vgg2-full, vgg2-short-with-eval, and vgg2-full-with-eval. The vgg2-short and vgg2-full protocols present the same number of identities but vary with respect to the number of samples per identity. The vgg2-full protocol preserves the number of samples per identity from the original dataset, while vgg2-short presents 10 samples per identity in the probe and training sets. With that, the training set of vgg2-short contains 86,310 samples instead of the 3,141,890 samples of vgg2-full. The protocols with the suffix -with-eval split the original test set into dev and eval sets containing 250 identities each.

All the landmarks and face crops provided in the original dataset are provided with this interface.

Warning

To use this dataset protocol, you need to have the original files of the VGG2 dataset. Once you have downloaded them, please run the following commands to set the path for Bob:

bob config set bob.bio.face.vgg2.directory [VGG2 PATH]
bob config set bob.bio.face.vgg2.extension [VGG2 EXTENSION]


@inproceedings{cao2018vggface2,
title={Vggface2: A dataset for recognising faces across pose and age},
author={Cao, Qiong and Shen, Li and Xie, Weidi and Parkhi, Omkar M and Zisserman, Andrew},
booktitle={2018 13th IEEE international conference on automatic face \& gesture recognition (FG 2018)},
pages={67--74},
year={2018},
organization={IEEE}
}

background_model_samples()[source]

Returns bob.pipelines.Sample’s to train a background model

Returns

samples – List of samples for background model training.

Return type

list

static protocols()[source]
static urls()[source]

## Annotators¶

class bob.bio.face.annotator.Base

Base class for all face annotators

annotate(sample, **kwargs)[source]

Annotates an image and returns annotations in a dictionary. All annotators should return at least the topleft and bottomright coordinates. Some currently known annotation points such as reye and leye are formalized in bob.bio.face.preprocessor.FaceCrop.

Parameters
• sample (numpy.ndarray) – The image should be a Bob format (#Channels, Height, Width) RGB image.

• **kwargs – The extra arguments that may be passed.

annotations(image)[source]

Returns annotations for all faces in the image.

Parameters

image (numpy.ndarray) – An RGB image in Bob format.

Returns

A list of annotations. Annotations are dictionaries that contain the following possible keys: topleft, bottomright, reye, leye

Return type

list

transform(samples, **kwargs)[source]

Annotates an image and returns annotations in a dictionary.

All annotators should add at least the topleft and bottomright coordinates. Some currently known annotation points such as reye and leye are formalized in bob.bio.face.preprocessor.FaceCrop.

Parameters
• sample (Sample) – The image in the sample object should be a Bob-format (#Channels, Height, Width) RGB image.

• **kwargs – Extra arguments that may be passed.

class bob.bio.face.annotator.BoundingBox(topleft: tuple, size: tuple = None, **kwargs)

Bases: object

A bounding box class storing the top, left, height, and width of a rectangle.

property area

The area (height x width) of the bounding box, read access only

property bottom

The bottom position of the bounding box as integral values, read access only

property bottom_f

The bottom position of the bounding box as floating point values, read access only

property bottomright

The bottom right corner of the bounding box as integral values, read access only

property bottomright_f

The bottom right corner of the bounding box as floating point values, read access only

property center

The center of the bounding box, read access only

contains(point)[source]

Returns True if the given point is inside the bounding box

Parameters

point (tuple) – A point as (x, y) tuple

Returns

True if the point is inside the bounding box

Return type

bool

property height

The height of the bounding box as integral values, read access only

property height_f

The height of the bounding box as floating point values, read access only

is_valid_for(size: tuple) → bool[source]

Checks if the bounding box is inside the given image size

Parameters

size (tuple) – The size of the image to test, as a (height, width) tuple

Returns

True if the bounding box is inside the image boundaries

Return type

bool

property left

The left position of the bounding box as integral values, read access only

property left_f

The left position of the bounding box as floating point values, read access only

mirror_x(width: int) [source]

Returns a horizontally mirrored version of this BoundingBox

Parameters

width (int) – The width of the image at which this bounding box should be mirrored

Returns

The mirrored version of this bounding box

Return type

bounding_box

overlap(other: bob.bio.face.annotator.BoundingBox) [source]

Returns the overlapping bounding box between this and the given bounding box

Parameters

other (BoundingBox) – The other bounding box to compute the overlap with

Returns

The overlap between this and the other bounding box

Return type

bounding_box

property right

The right position of the bounding box as integral values, read access only

property right_f

The right position of the bounding box as floating point values, read access only

scale(scale: float, centered=False) [source]

Returns a scaled version of this BoundingBox. When the centered parameter is set to True, the transformation center will be at the center of this bounding box; otherwise it will be at (0, 0).

Parameters
• scale (float) – The scale with which this bounding box should be scaled

• centered (bool) – Should the scaling be done with respect to the center of the bounding box?

Returns

The scaled version of this bounding box

Return type

bounding_box

shift(offset: tuple) [source]

Returns a shifted version of this BoundingBox

Parameters

offset (tuple) – The offset with which this bounding box should be shifted

Returns

The shifted version of this bounding box

Return type

bounding_box

similarity(other: bob.bio.face.annotator.BoundingBox) [source]

Returns the Jaccard similarity index between this and the given BoundingBox. The Jaccard similarity coefficient between two bounding boxes is defined as their intersection divided by their union.

Parameters

other (BoundingBox) – The other bounding box to compute the overlap with

Returns

sim – The Jaccard similarity index between this and the given BoundingBox

Return type

float

property size

The size of the bounding box as integral values, read access only

property size_f

The size of the bounding box as floating point values, read access only

property top

The top position of the bounding box as integral values, read access only

property top_f

The top position of the bounding box as floating point values, read access only

property topleft

The top-left position of the bounding box as integral values, read access only

property topleft_f

The top-left position of the bounding box as floating point values, read access only

property width

The width of the bounding box as integral values, read access only

property width_f

The width of the bounding box as floating point values, read access only
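The geometric operations above (area, contains, overlap, and the Jaccard similarity) can be illustrated with a minimal pure-Python sketch. This is not the library's implementation; coordinates follow Bob's (y, x) / (height, width) convention, which is an assumption here:

```python
class BBox:
    """Minimal sketch of a (top, left) + (height, width) bounding box."""

    def __init__(self, topleft, size):
        self.top, self.left = topleft
        self.height, self.width = size

    @property
    def bottom(self): return self.top + self.height
    @property
    def right(self): return self.left + self.width
    @property
    def area(self): return self.height * self.width

    def contains(self, point):
        # point given as (y, x)
        y, x = point
        return self.top <= y < self.bottom and self.left <= x < self.right

    def overlap(self, other):
        # Intersection rectangle (empty boxes get zero size)
        top, left = max(self.top, other.top), max(self.left, other.left)
        bottom = min(self.bottom, other.bottom)
        right = min(self.right, other.right)
        return BBox((top, left), (max(0, bottom - top), max(0, right - left)))

    def similarity(self, other):
        # Jaccard index: intersection area / union area
        inter = self.overlap(other).area
        union = self.area + other.area - inter
        return inter / union if union else 0.0

a = BBox((0, 0), (10, 10))
b = BBox((5, 5), (10, 10))
print(a.similarity(b))  # intersection 25 / union 175
```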

class bob.bio.face.annotator.FaceX106Landmarks(device=None, use_mtcnn_detector=True, **kwargs)

Landmark detector taken from https://github.com/JDAI-CV/FaceX-Zoo

This class uses the 106-landmark detector taken from https://github.com/Hsintao/pfld_106_face_landmarks/blob/master/models/mobilev3_pfld.py

Parameters

use_mtcnn_detector (bool) – If set uses the MTCNN face detector as a base for the landmark extractor. If not, it uses the standard face detector of FaceXZoo.

annotate(image, **kwargs)[source]

Annotates an image using mtcnn

Parameters
• image (numpy.array) – An RGB image in Bob format.

• **kwargs – Ignored.

Returns

Annotations contain: (topleft, bottomright, leye, reye, nose, mouthleft, mouthright, quality).

Return type

dict

class bob.bio.face.annotator.FaceXDetector(device=None, one_face_only=True, **kwargs)

Face detector taken from https://github.com/JDAI-CV/FaceX-Zoo


annotate(image, **kwargs)[source]

Get the inference of the image and process the inference result.

Returns

A numpy array of shape N × (x, y, w, h, confidence), where N is the number of detection boxes.

decode(loc, priors, variances)[source]

Decode locations from predictions using priors to undo the encoding we did for offset regression at train time.

Parameters
• loc (tensor) – Location predictions. Shape: [num_priors, 4]

• priors (tensor) – Prior boxes. Shape: [num_priors, 4]

• variances (list[float]) – Variances of the prior boxes

Return type

decoded bounding box predictions

py_cpu_nms(dets, thresh)[source]

Python version NMS (Non maximum suppression).

Returns

The kept index after NMS.
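Greedy non-maximum suppression, as implemented by functions like py_cpu_nms, repeatedly keeps the highest-scoring box and discards all remaining boxes whose IoU with it exceeds the threshold. A minimal numpy sketch, assuming the conventional (x1, y1, x2, y2, score) row layout:

```python
import numpy as np

def nms_sketch(dets, thresh):
    """Greedy NMS over rows of (x1, y1, x2, y2, score); returns kept indices."""
    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes that do not overlap the kept one too much
        order = order[1:][iou <= thresh]
    return keep

dets = np.array([
    [0, 0, 10, 10, 0.9],
    [1, 1, 11, 11, 0.8],     # heavy overlap with the first -> suppressed
    [50, 50, 60, 60, 0.7],   # far away -> kept
])
print(nms_sketch(dets, 0.5))  # [0, 2]
```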

class bob.bio.face.annotator.MTCNN(min_size=40, factor=0.709, thresholds=(0.6, 0.7, 0.7), **kwargs)

MTCNN v1 wrapper for Tensorflow 2. See https://kpzhang93.github.io/MTCNN_face_detection_alignment/index.html for more details on MTCNN.

factor

Factor is a trade-off between performance and speed.

Type

float

min_size

Minimum face size to be detected.

Type

int

thresholds

Thresholds are a trade-off between false positives and missed detections.

Type

list

annotate(image, **kwargs)[source]

Annotates an image using mtcnn

Parameters
• image (numpy.array) – An RGB image in Bob format.

• **kwargs – Ignored.

Returns

Annotations contain: (topleft, bottomright, leye, reye, nose, mouthleft, mouthright, quality).

Return type

dict

annotations(image)[source]

Detects all faces in the image and returns annotations in bob format.

Parameters

image (numpy.ndarray) – An RGB image in Bob format.

Returns

A list of annotations. Annotations are dictionaries that contain the following keys: topleft, bottomright, reye, leye, nose, mouthright, mouthleft, and quality.

Return type

list

detect(image)[source]

Detects all faces in the image.

Parameters

image (numpy.ndarray) – An RGB image in Bob format.

Returns

A tuple of boxes, probabilities, and landmarks.

Return type

tuple

property mtcnn_fun
class bob.bio.face.annotator.TinyFace(prob_thresh=0.5, **kwargs)

TinyFace face detector. The original model is ResNet101 from https://github.com/peiyunh/tiny; please check there for details. The model used in this section is the MXNet version from https://github.com/chinakook/hr101_mxnet.

prob_thresh

Thresholds are a trade-off between false positives and missed detections.

Type

float

annotate(image, **kwargs)[source]

Annotates an image using tinyface

Parameters
• image (numpy.array) – An RGB image in Bob format.

• **kwargs – Ignored.

Returns

Annotations with (topleft, bottomright) keys (or None).

Return type

dict

annotations(img)[source]

Detects and annotates all faces in the image.

Parameters

image (numpy.ndarray) – An RGB image in Bob format.

Returns

A list of annotations. Annotations are dictionaries that contain the following keys: topleft, bottomright, reye, leye. (reye and leye are the estimated results, not captured by the model.)

Return type

list

Creates a bounding box from the given parameters, which are, in general, annotations read using bob.bio.base.utils.annotations.read_annotation_file(). Different kinds of annotations are supported, given by the source keyword:

• direct : bounding boxes are directly specified by keyword arguments topleft and bottomright

• eyes : the left and right eyes are specified by keyword arguments leye and reye

• left-profile : the left eye and the mouth are specified by keyword arguments eye and mouth

• right-profile : the right eye and the mouth are specified by keyword arguments eye and mouth

• ellipse : the face ellipse as well as face angle and axis radius is provided by keyword arguments center, angle and axis_radius

If a source is specified, the according keywords must be given as well. Otherwise, the source is estimated from the given keyword parameters if possible.

If ‘topleft’ and ‘bottomright’ are given (i.e., the ‘direct’ source), they are taken as is. Note that the ‘bottomright’ is NOT included in the bounding box. Please ensure that the aspect ratio of the bounding box is 6:5 (height : width).

For source ‘ellipse’, the bounding box is computed to capture the whole ellipse, even if it is rotated.

For other sources (i.e., ‘eyes’), the center of the two given positions is computed, and padding is applied relative to the distance between the two given points. If padding is None (the default), the default_paddings of this source are used instead. This padding is required to keep an aspect ratio of 6:5.

Parameters
• source (str or None) – The type of annotations present in the list of keyword arguments, see above.

• padding ({'top':float, 'bottom':float, 'left':float, 'right':float}) – This padding is added to the center between the given points, to define the top left and bottom right positions in the bounding box; values are relative to the distance between the two given points; ignored for some of the sources

• kwargs (key=value) – Further keyword arguments specifying the annotations.

Returns

bounding_box – The bounding box that was estimated from the given annotations.

Return type

BoundingBox
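For the ‘eyes’ source, the computation described above boils down to placing a box around the eye midpoint, scaled by the inter-eye distance. A sketch of that geometry in (y, x) coordinates; the padding values below are illustrative ones chosen to give the 6:5 (height : width) ratio, not the library's actual default_paddings:

```python
def bbox_from_eyes(reye, leye, padding=None):
    """Sketch: derive a (topleft, bottomright) box from eye centers.

    Coordinates are (y, x).  Padding values are fractions of the
    inter-eye distance; these illustrative defaults yield a box with
    height:width = 1.2:1.0, i.e. the documented 6:5 aspect ratio.
    """
    if padding is None:
        padding = {"top": -0.5, "bottom": 0.7, "left": -0.5, "right": 0.5}
    center_y = (reye[0] + leye[0]) / 2.0
    center_x = (reye[1] + leye[1]) / 2.0
    # Inter-eye distance drives the size of the box
    d = ((reye[0] - leye[0]) ** 2 + (reye[1] - leye[1]) ** 2) ** 0.5
    topleft = (center_y + padding["top"] * d, center_x + padding["left"] * d)
    bottomright = (center_y + padding["bottom"] * d, center_x + padding["right"] * d)
    return topleft, bottomright

# Eyes 20 pixels apart, level: box is 24 tall by 20 wide (6:5)
print(bbox_from_eyes(reye=(100, 90), leye=(100, 110)))
```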

bob.bio.face.annotator.bounding_box_to_annotations(bbx)[source]

Converts BoundingBox to dictionary annotations.

Parameters

bbx (BoundingBox) – The given bounding box.

Returns

A dictionary with topleft and bottomright keys.

Return type

dict

Computes the expected eye positions based on the relative coordinates of the bounding box.

This function can be used to translate between bounding-box-based image cropping and eye-location-based alignment. The returned eye locations are the average eye locations; no landmark detection is performed.

Parameters:

bounding_box : BoundingBox

The face bounding box.

padding : {‘top’:float, ‘bottom’:float, ‘left’:float, ‘right’:float}

The padding that was used for the eyes source in bounding_box_from_annotation(), has a proper default.

Returns:

eyes : {‘reye’ : (rey, rex), ‘leye’ : (ley, lex)}

A dictionary containing the average left and right eye annotation.

bob.bio.face.annotator.min_face_size_validator(annotations, min_face_size=(32, 32))[source]

Validates annotations based on face’s minimal size.

Parameters
• annotations (dict) – The annotations to validate; must contain the topleft and bottomright keys

• min_face_size ((int, int)) – The minimal face size, as a (height, width) tuple

Returns

True, if the face is large enough.

Return type

bool
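The validation amounts to a simple size check on the annotated bounding box. A sketch under the assumption that annotations carry topleft/bottomright in (y, x) order:

```python
def min_face_size_validator_sketch(annotations, min_face_size=(32, 32)):
    """Sketch: accept a face only if its bounding box is at least
    min_face_size (height, width) pixels.  Illustrative, not the
    library's exact implementation."""
    if not annotations:
        return False
    top, left = annotations["topleft"]
    bottom, right = annotations["bottomright"]
    return (bottom - top >= min_face_size[0]) and (right - left >= min_face_size[1])

print(min_face_size_validator_sketch(
    {"topleft": (0, 0), "bottomright": (64, 48)}))   # large enough
print(min_face_size_validator_sketch(
    {"topleft": (0, 0), "bottomright": (20, 48)}))   # too short
```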

## Preprocessors¶

class bob.bio.face.preprocessor.Base(dtype=None, color_channel='gray', **kwargs)

Bases: sklearn.base.TransformerMixin, sklearn.base.BaseEstimator

Performs color space adaptations and data type corrections for the given image.

Parameters:

dtype : numpy.dtype or convertible or None

The data type that the resulting image will have.

color_channel : one of ('gray', 'red', 'green', 'blue', 'rgb')

The specific color channel, which should be extracted from the image.

change_color_channel(image)[source]

color_channel(image) -> channel

Returns the channel of the given image, which was selected in the constructor. Currently, gray, red, green and blue channels are supported.

Parameters:

image : 2D or 3D numpy.ndarray

The image to get the specified channel from.

Returns:

channel : 2D or 3D numpy.ndarray

The extracted color channel.

property channel
data_type(image)[source]

Converts the given image into the data type specified in the constructor of this class. If no data type was specified, or the image is None, no conversion is performed.

Parameters

image (2D or 3D numpy.ndarray) – The image to convert.

Returns

image – The image converted to the desired data type, if any.

Return type

2D or 3D numpy.ndarray
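What the channel extraction does can be sketched in a few lines of numpy. The BT.601 gray weights used below are a common convention and an assumption here; the library's actual conversion may differ:

```python
import numpy as np

def change_color_channel_sketch(image, color_channel="gray"):
    """Sketch: extract a channel from a Bob-format (C, H, W) RGB image."""
    if image.ndim == 2:          # already a single channel
        return image
    r, g, b = image[0], image[1], image[2]
    channels = {
        "red": r,
        "green": g,
        "blue": b,
        # ITU-R BT.601 luma weights (an assumption, for illustration)
        "gray": 0.299 * r + 0.587 * g + 0.114 * b,
    }
    return channels[color_channel]

rgb = np.zeros((3, 2, 2))
rgb[1] = 1.0                      # pure green image
print(change_color_channel_sketch(rgb, "gray")[0, 0])  # 0.587
```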

fit(X, y=None)[source]
transform(images, annotations=None)[source]

Extracts the desired color channel and converts to the desired data type.

Parameters
Returns

image – The image converted to the desired color channel and type.

Return type
class bob.bio.face.preprocessor.BoundingBoxAnnotatorCrop(eyes_cropper, annotator, margin=0.5)

This face cropper uses a two-stage strategy to crop and align faces when annotation_type is a bounding box. In the first stage, it crops the face using the {topleft, bottomright} parameters and expands the box using a margin factor. In the second stage, it uses the annotator to estimate {leye, reye} and makes the crop using bob.bio.face.preprocessor.croppers.FaceEyesNorm. In case the annotator doesn’t work, it returns the face cropped with the bounding-box coordinates.

Warning

cropped_positions must be set with leye, reye, topleft and bottomright positions

Parameters

eyes_cropper (bob.bio.face.preprocessor.croppers.FaceEyesNorm) – This is the cropper that will be used to crop the face using eyes positions

annotatorbob.bio.base.annotator.Annotator

This is the annotator that will be used to detect faces in the cropped images.

transform(X, annotations=None)[source]

Crops the face using the two-stage croppers

Parameters
• X (list(numpy.ndarray)) – List of images to be cropped

• annotations (list(dict)) – Annotations for each image. Each annotation must contain the following keys:

class bob.bio.face.preprocessor.FaceCrop(cropped_image_size, cropped_positions=None, cropper=None, fixed_positions=None, annotator=None, allow_upside_down_normalized_faces=False, **kwargs)

Crops the face according to the given annotations.

This class is designed to perform a geometric normalization of the face based on the eye locations, using bob.bio.face.preprocessor.croppers.FaceEyesNorm. Usually, when executing the crop_face() function, the image and the eye locations have to be specified. There, the given image will be transformed such that the eye locations will be placed at specific locations in the resulting image. These locations, as well as the size of the cropped image, need to be specified in the constructor of this class, as cropped_positions and cropped_image_size.

Some image databases do not provide eye locations, but rather bounding boxes. This is not a problem at all. Simply define the coordinates, where you want your cropped_positions to be in the cropped image, by specifying the same keys in the dictionary that will be given as annotations to the crop_face() function.

Note

These locations can even be outside of the cropped image boundary, i.e., when the crop should be smaller than the annotated bounding boxes.

Sometimes, databases provide pre-cropped faces, where the eyes are located at (almost) the same position in all images. Usually, the cropping does not conform with the cropping that you like (i.e., image resolution is wrong, or too much background information). However, the database does not provide eye locations (since they are almost identical for all images). In that case, you can specify the fixed_positions in the constructor, which will be taken instead of the annotations inside the crop_face() function (in which case the annotations are ignored).

Parameters
• cropped_image_size ((int, int)) – The resolution of the cropped image, in order (HEIGHT,WIDTH); if not given, no face cropping will be performed

• cropped_positions (dict) – The coordinates in the cropped image, where the annotated points should be put to. This parameter is a dictionary with usually two elements, e.g., {'reye':(RIGHT_EYE_Y, RIGHT_EYE_X) , 'leye':(LEFT_EYE_Y, LEFT_EYE_X)}. However, also other parameters, such as {'topleft' : ..., 'bottomright' : ...} are supported, as long as the annotations in the __call__ function are present.

• fixed_positions (dict or None) – If specified, ignore the annotations from the database and use these fixed positions throughout.

• allow_upside_down_normalized_faces (bool, optional) – If False (default), a ValueError is raised when normalized faces are going to be upside down compared to input image. This allows you to catch wrong annotations in your database easily. If you are sure about your input, you can set this flag to True.

• annotator (bob.bio.base.annotator.Annotator) – If provided, the annotator will be used if the required annotations are missing.

• cropper – Pointer to a function that will crop the image using the annotations

• kwargs – Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.

transform(X, annotations=None)[source]

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterward, the face is cropped, according to the given annotations (or to fixed_positions, see crop_face()). Finally, the resulting face is converted to the desired data type.

Parameters
• image (2D or 3D numpy.ndarray) – The face image to be processed.

• annotations (dict or None) – The annotations that fit to the given image.

Returns

face – The cropped face.

Return type
class bob.bio.face.preprocessor.FaceCropBoundingBox(final_image_size, margin=0.5, opencv_interpolation=1)

Bases: sklearn.base.TransformerMixin, sklearn.base.BaseEstimator

Crop the face based on Bounding box positions

Parameters
• final_image_size (tuple) – The final size of the image after cropping in case resize=True

• margin (float) – The margin to be added to the bounding box

fit(X, y=None)[source]
transform(X, annotations, resize=True)[source]

Crop the face based on Bounding box positions

Parameters
• X (numpy.ndarray) – The image to be normalized

• annotations (dict) – The annotations of the image. It needs to contain the topleft and bottomright positions

• resize (bool) – If True, the image will be resized to the final size. In this case, margin is not used.
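The margin-based expansion of a bounding-box crop can be sketched as follows. This is a simplified single-channel version; how exactly the margin is distributed around the box is an assumption for illustration:

```python
import numpy as np

def crop_with_margin(image, topleft, bottomright, margin=0.5):
    """Sketch: expand a bounding box by `margin` (as a fraction of its
    size, split evenly on both sides) and crop, clipping to the image
    borders.  Image is a 2D (H, W) array; coordinates are (y, x)."""
    top, left = topleft
    bottom, right = bottomright
    h, w = bottom - top, right - left
    top = max(0, int(top - margin * h / 2))
    left = max(0, int(left - margin * w / 2))
    bottom = min(image.shape[0], int(bottom + margin * h / 2))
    right = min(image.shape[1], int(right + margin * w / 2))
    return image[top:bottom, left:right]

img = np.arange(100 * 100).reshape(100, 100)
# A 20x20 box grows to 30x30 with margin=0.5
print(crop_with_margin(img, (40, 40), (60, 60)).shape)  # (30, 30)
```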

class bob.bio.face.preprocessor.FaceEyesNorm(reference_eyes_location, final_image_size, allow_upside_down_normalized_faces=False, annotation_type='eyes-center', opencv_interpolation=1)

Bases: sklearn.base.TransformerMixin, sklearn.base.BaseEstimator

Geometrically normalizes a face using the eye positions. This function extracts the facial image based on the eye locations (or the locations of other fixed points, see note below). The geometric normalization is applied such that the eyes are placed at fixed positions in the normalized image. The image is cropped at the same time, so that no unnecessary operations are executed.

There are three types of annotations:
• eyes-center: The eyes are located at the center of the face. In this case, reference_eyes_location expects a dictionary with two keys: leye and reye.

• left-profile: The eyes are located at the corner of the face. In this case, reference_eyes_location expects a dictionary with two keys: leye and mouth.

• right-profile: The eyes are located at the corner of the face. In this case, reference_eyes_location expects a dictionary with two keys: reye and mouth.

Parameters
• reference_eyes_location (dict) – The reference eyes location. It is a dictionary with two keys.

• final_image_size (tuple) – The final size of the image

• allow_upside_down_normalized_faces (bool) – If set to True, the normalized face will be flipped if the eyes are placed upside down.

• annotation_type (str) – The type of annotation. It can be either ‘eyes-center’ or ‘left-profile’ or ‘right-profile’

• opencv_interpolation (int) – The interpolation method to be used by OpenCV for the function cv2.warpAffine

fit(X, y=None)[source]
transform(X, annotations=None)[source]

Geometrically normalizes a face using the eye positions

Parameters
• X (numpy.ndarray) – The image to be normalized

• annotations (dict) – The annotations of the image. It needs to contain the reye and leye positions

Returns

cropped_image – The normalized image

Return type

numpy.ndarray
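The core of this normalization is a similarity transform (rotation, isotropic scale, and translation) that maps the annotated eye positions onto the reference positions. The following is a minimal numpy sketch of computing the 2x3 matrix that a routine like cv2.warpAffine would consume; the helper name and point ordering are assumptions, not the package's code:

```python
import numpy as np

def similarity_transform(src_pts, dst_pts):
    """2x3 affine (rotation + isotropic scale + translation) mapping the
    two source points onto the two destination points; points are (row, col)."""
    (sy0, sx0), (sy1, sx1) = src_pts
    (dy0, dx0), (dy1, dx1) = dst_pts
    sv = complex(sx1 - sx0, sy1 - sy0)  # source eye-to-eye vector
    dv = complex(dx1 - dx0, dy1 - dy0)  # destination eye-to-eye vector
    z = dv / sv                         # scale * rotation as a complex number
    a, b = z.real, z.imag
    tx = dx0 - (a * sx0 - b * sy0)      # translation fixing the first point
    ty = dy0 - (b * sx0 + a * sy0)
    return np.array([[a, -b, tx], [b, a, ty]])

# map eyes annotated at (16, 10) / (16, 30) onto references (16, 15) / (16, 45)
M = similarity_transform([(16, 10), (16, 30)], [(16, 15), (16, 45)])
x, y = 30, 16
print(M @ np.array([x, y, 1.0]))  # -> [45. 16.]
```

Both annotated points land exactly on their reference positions; every other pixel follows the same rotation and scale, which is what keeps the eyes at fixed locations in the normalized image.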

class bob.bio.face.preprocessor.HistogramEqualization(face_cropper, **kwargs)

Crops the face (if desired) and performs histogram equalization to photometrically enhance the image.

Parameters
• face_cropper (str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None) –

The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

Note

The given class needs to contain a crop_face method.

• kwargs – Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.

equalize_histogram(image) → equalized[source]

Performs the histogram equalization on the given image.

Parameters

image (2D numpy.ndarray) – The image to perform histogram equalization on. The image will be transformed to type uint8 before computing the histogram.

Returns

equalized – The photometrically enhanced image.

Return type

2D numpy.ndarray (float)
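For intuition, histogram equalization of a uint8 image can be sketched with plain numpy by remapping intensities through the normalized cumulative distribution. This is a minimal illustration of the technique, not the package's implementation:

```python
import numpy as np

def equalize_histogram(image):
    """Histogram-equalize a 2D image via its cumulative distribution."""
    img = image.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)   # per-intensity counts
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)          # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255.0 / (
        cdf_masked.max() - cdf_masked.min()
    )
    lut = np.ma.filled(cdf_scaled, 0)                # lookup table, one entry per level
    return lut[img]

flat = np.tile(np.arange(0, 64, dtype=np.uint8), (8, 1))  # low-contrast image
out = equalize_histogram(flat)
assert out.min() == 0 and out.max() == 255  # contrast stretched to the full range
```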

transform(X, annotations=None)[source]

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterwards, the face is cropped using the face_cropper specified in the constructor, if any. Then, the image is photometrically enhanced using histogram equalization. Finally, the resulting face is converted to the desired data type.

Parameters
• X (2D or 3D numpy.ndarray) – The face image to be processed.

• annotations (dict or None) – The annotations that fit to the given image. Might be None, when the face_cropper is None or of type FaceDetect.

Returns

face – The cropped and photometrically enhanced face.

Return type

2D numpy.ndarray

class bob.bio.face.preprocessor.INormLBP(face_cropper, neighbors=8, radius=2, method='default', **kwargs)

Performs I-Norm LBP on the given image.

The supported LBP methods are the ones available at https://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.local_binary_pattern

Parameters
transform(X, annotations=None)[source]

__call__(image, annotations = None) -> face

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterwards, the face is cropped using the face_cropper specified in the constructor, if any. Then, the image is photometrically enhanced by extracting LBP features [HRM06]. Finally, the resulting face is converted to the desired data type.

Parameters
• image (2D or 3D numpy.ndarray) – The face image to be processed.

• annotations (dict or None) – The annotations that fit to the given image. Might be None, when the face_cropper is None or of type FaceDetect.

Returns

face – The cropped and photometrically enhanced face.

class bob.bio.face.preprocessor.MultiFaceCrop(croppers_list)

Wraps around FaceCrop to enable a dynamic cropper that can handle several annotation types. Initialization and usage are similar to FaceCrop, but the main difference is that one specifies a list of cropped_positions, and optionally a list of associated fixed positions.

For each set of cropped_positions in the list, a new FaceCrop will be instantiated that handles this exact set of annotations. When calling the transform method, the MultiFaceCrop matches each sample to its associated cropper based on the received annotation, then performs the cropping of each subset, and finally gathers the results.

If more than one cropper matches the annotations, the first valid cropper is taken. If none of the croppers match the received annotations, a ValueError is raised.

Parameters

croppers_list (list) – A list of FaceCrop that crops the face
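The matching step described above can be sketched in plain Python: each cropper is associated with the annotation keys it requires, the first cropper whose keys are all present wins, and a ValueError is raised when nothing matches. The helper and the key-based registry are illustrative assumptions, not the package's code:

```python
def match_cropper(annotations, croppers):
    """Return the first cropper whose required annotation keys are all
    present in `annotations`; raise ValueError when none matches."""
    for required_keys, cropper in croppers:
        if set(required_keys) <= set(annotations):
            return cropper
    raise ValueError(f"No cropper matches annotations {sorted(annotations)}")

croppers = [
    (("leye", "reye"), "eyes-center cropper"),
    (("topleft", "bottomright"), "bounding-box cropper"),
]
print(match_cropper({"topleft": (0, 0), "bottomright": (10, 10)}, croppers))
# -> bounding-box cropper
```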

transform(X, annotations=None)[source]

Extracts the desired color channel and converts to the desired data type.

Parameters
Returns

image – The image converted to the desired color channel and type.

Return type
bob.bio.face.preprocessor.Scale(target_img_size)

A transformer that scales images. It accepts a list of inputs.

Parameters

target_img_size (tuple) – Target image size, specified as a tuple of (H, W)
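As a minimal sketch of what such a transformer does, here is a nearest-neighbor resize of a single 2D image to a target (H, W) in plain numpy. The real transformer also handles lists of inputs and color images; this helper is an illustrative assumption, not the package's implementation:

```python
import numpy as np

def scale(image, target_img_size):
    """Nearest-neighbor resize of a 2D image to (H, W)."""
    H, W = target_img_size
    h, w = image.shape
    rows = np.arange(H) * h // H   # source row for each target row
    cols = np.arange(W) * w // W   # source column for each target column
    return image[rows[:, None], cols]

img = np.arange(16).reshape(4, 4)
print(scale(img, (2, 2)).shape)  # (2, 2)
print(scale(img, (8, 8)).shape)  # (8, 8)
```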

class bob.bio.face.preprocessor.TanTriggs(face_cropper, gamma=0.2, sigma0=1, sigma1=2, size=5, threshold=10.0, alpha=0.1, **kwargs)

Crops the face (if desired) and applies Tan&Triggs algorithm [TT10] to photometrically enhance the image.

Parameters
• face_cropper (str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None) –

The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

Note

The given class needs to contain a crop_face method.

• gamma – Please refer to the [TT10] original paper.

• sigma0 – Please refer to the [TT10] original paper.

• sigma1 – Please refer to the [TT10] original paper.

• size – Please refer to the [TT10] original paper.

• threshold – Please refer to the [TT10] original paper.

• alpha – Please refer to the [TT10] original paper.

• kwargs – Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
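The gamma-correction and contrast-equalization stages of the Tan & Triggs algorithm [TT10] can be sketched with plain numpy; the difference-of-Gaussians filtering stage between them (controlled by sigma0, sigma1, and size) is omitted here for brevity. This is an illustrative sketch of the algorithm, not the package's implementation:

```python
import numpy as np

def tan_triggs_contrast(image, gamma=0.2, alpha=0.1, threshold=10.0):
    """Gamma correction and the two-stage contrast equalization of
    Tan & Triggs; the DoG filtering step is omitted for brevity."""
    img = np.power(image.astype(np.float64) + 1e-8, gamma)  # gamma correction
    # stage 1: normalize by the mean of |I|^alpha
    img = img / np.mean(np.abs(img) ** alpha) ** (1.0 / alpha)
    # stage 2: the same, but large magnitudes are clipped at `threshold` first
    img = img / np.mean(np.minimum(np.abs(img), threshold) ** alpha) ** (1.0 / alpha)
    # final compressive nonlinearity bounds values to (-threshold, threshold)
    return threshold * np.tanh(img / threshold)

img = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.float64)
out = tan_triggs_contrast(img)
assert np.all(np.abs(out) < 10.0)  # output magnitude is bounded by threshold
```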

transform(X, annotations=None)[source]

__call__(image, annotations = None) -> face

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterwards, the face is cropped using the face_cropper specified in the constructor, if any. Then, the image is photometrically enhanced using the Tan&Triggs algorithm [TT10]. Finally, the resulting face is converted to the desired data type.

Parameters
• image (2D or 3D numpy.ndarray) – The face image to be processed.

• annotations (dict or None) – The annotations that fit to the given image. Might be None, when the face_cropper is None or of type FaceDetect.

Returns

face – The cropped and photometrically enhanced face.

## Utilities¶

bob.bio.face.utils.lookup_config_from_database(database)[source]

Read configuration values that might be already defined in the database configuration file.

bob.bio.face.utils.cropped_positions_arcface(annotation_type='eyes-center')[source]

Returns the 112 x 112 crop used in iResnet-based models. The crop follows this rule:

• In X –> (112/2)-1

• In Y, leye –> 16+(112/2) –> 72

• In Y, reye –> (112/2)-16 –> 40

This leaves 16 pixels between the left eye and the left border, and between the right eye and the right border

For reference, https://github.com/deepinsight/insightface/blob/master/recognition/arcface_mxnet/common/face_align.py contains the cropping code used to train the original ArcFace-InsightFace model. Since that code is not very explicit, we chose our own default cropped positions. They have been tested to provide good evaluation performance on the Mobio dataset.

For sensitive applications, you can use custom cropped positions optimized for your specific dataset, as done in https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/notebooks/50-shades-of-face.ipynb
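The rule above can be written out in plain Python, with positions as (row, column) pairs. The helper name is hypothetical; the numbers follow directly from the stated rule:

```python
def arcface_cropped_positions(size=112, offset=16):
    """Eye positions for the ArcFace-style square crop described above:
    both eyes sit on row (size/2)-1, with the left eye `offset` pixels
    right of the vertical center and the right eye `offset` pixels left."""
    row = size // 2 - 1            # (112/2)-1 = 55
    leye_col = size // 2 + offset  # 16+(112/2) = 72
    reye_col = size // 2 - offset  # (112/2)-16 = 40
    return {"leye": (row, leye_col), "reye": (row, reye_col)}

print(arcface_cropped_positions())
# {'leye': (55, 72), 'reye': (55, 40)}
```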

bob.bio.face.utils.dnn_default_cropping(cropped_image_size, annotation_type)[source]

Computes the default cropped positions for the FaceCropper used with Neural-Net based extractors, proportionally to the target image size

Parameters
• cropped_image_size (tuple) – A tuple (HEIGHT, WIDTH) describing the target size of the cropped image.

• annotation_type (str or list of str) – Type of annotations. Possible values are: bounding-box, eyes-center, left-profile, right-profile and None, or a combination of those as a list

Returns

The dictionary of cropped positions that will be fed to the FaceCropper, or a list of such dictionaries if annotation_type is a list

Return type

cropped_positions
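Proportional placement of the eyes relative to the target crop size can be sketched as follows. The row and column ratios here are illustrative assumptions for the eyes-center case, not the package's actual defaults:

```python
def dnn_eyes_center_positions(cropped_image_size, eye_row_ratio=3 / 7,
                              eye_col_ratio=1 / 3):
    """Place the eyes proportionally to the target crop size (H, W).
    The ratios are illustrative, not the package's defaults."""
    H, W = cropped_image_size
    row = int(round(eye_row_ratio * H))        # both eyes on the same row
    right_col = int(round(eye_col_ratio * W))  # reye left of center
    left_col = W - right_col                   # leye mirrored on the right
    return {"reye": (row, right_col), "leye": (row, left_col)}

print(dnn_eyes_center_positions((112, 112)))
# {'reye': (48, 37), 'leye': (48, 75)}
```

Because the positions scale with (H, W), the same ratios yield consistent croppings at any target size.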

bob.bio.face.utils.legacy_default_cropping(cropped_image_size, annotation_type)[source]

Computes the default cropped positions for the FaceCropper used with legacy extractors, proportionally to the target image size

Parameters
• cropped_image_size (tuple) – A tuple (HEIGHT, WIDTH) describing the target size of the cropped image.

• annotation_type (str) – Type of annotations. Possible values are: bounding-box, eyes-center, left-profile, right-profile and None, or a combination of those as a list

Returns

The dictionary of cropped positions that will be fed to the FaceCropper, or a list of such dictionaries if annotation_type is a list

Return type

cropped_positions

Computes the default cropped positions for the FaceCropper used in PAD applications, proportionally to the target image size

Parameters
• cropped_image_size (tuple) – A tuple (HEIGHT, WIDTH) describing the target size of the cropped image.

• annotation_type (str) – Type of annotations. Possible values are: bounding-box, eyes-center and None, or a combination of those as a list

Returns

The dictionary of cropped positions that will be fed to the FaceCropper, or a list of such dictionaries if annotation_type is a list

Return type

cropped_positions

bob.bio.face.utils.make_cropper(cropped_image_size, cropped_positions, fixed_positions=None, color_channel='rgb', annotator=None, **kwargs)[source]

Resolves the FaceCropper and additionally returns the necessary transform_extra_arguments for wrapping the cropper with a SampleWrapper.

bob.bio.face.utils.embedding_transformer(cropped_image_size, embedding, cropped_positions, fixed_positions=None, color_channel='rgb', annotator=None, **kwargs)[source]

Creates a pipeline composed of a FaceCropper and an embedding extractor. This transformer is suited for Facenet-based architectures

Warning

This will resize images to the requested image_size

bob.bio.face.utils.face_crop_solver(cropped_image_size, cropped_positions=None, color_channel='rgb', fixed_positions=None, annotator=None, dtype='uint8', **kwargs)[source]

Decides which face cropper to use.

bob.bio.face.utils.get_default_cropped_positions(mode, cropped_image_size, annotation_type)[source]

Computes the default cropped positions for the FaceCropper, proportionally to the target image size

Parameters
• mode (str) – Which default cropping to use. Available modes are: legacy (legacy baselines), facenet, arcface, and pad.

• cropped_image_size (tuple) – A tuple (HEIGHT, WIDTH) describing the target size of the cropped image.

• annotation_type (str) – Type of annotations. Possible values are: bounding-box, eyes-center and None, or a combination of those as a list

Returns

The dictionary of cropped positions that will be fed to the FaceCropper, or a list of such dictionaries if annotation_type is a list

Return type

cropped_positions
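The mode dispatch described above can be sketched in plain Python, with stand-in helpers for each mode. This is a simplified illustration of the dispatch and its error handling, not the package's implementation:

```python
def get_default_cropped_positions(mode, cropped_image_size, annotation_type):
    """Dispatch to a per-mode helper; the helpers here are stand-ins."""
    helpers = {
        "legacy": lambda size, ann: {"mode": "legacy", "size": size, "annotation": ann},
        "facenet": lambda size, ann: {"mode": "facenet", "size": size, "annotation": ann},
        "arcface": lambda size, ann: {"mode": "arcface", "size": size, "annotation": ann},
        "pad": lambda size, ann: {"mode": "pad", "size": size, "annotation": ann},
    }
    if mode not in helpers:
        raise ValueError(f"Unknown mode {mode!r}; choose from {sorted(helpers)}")
    return helpers[mode](cropped_image_size, annotation_type)

print(get_default_cropped_positions("arcface", (112, 112), "eyes-center"))
```

Centralizing the dispatch in one entry point keeps the per-mode cropping rules independent while exposing a single, validated interface.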