Prerequisites and setting up the experiments¶
Downloading the datasets¶
The experiments described in this paper are based on four makeup datasets.
The first three datasets, YMU, MIW, and MIFS, should be obtained from http://www.antitza.com/makeup-datasets.html
by contacting their owners.
These datasets are distributed in different data structures and file formats. For each dataset, we provide a script
that converts it into a compatible format: a set of individual samples stored as .hdf5 files. These scripts are located
in bob.paper.makeup_aim.misc and should be run from that folder.
Each script is invoked as follows:
$ python generate_<db-name>_db.py <original-data-directory> <output-directory>
The formatted dataset will be stored in <output-directory>.
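For example, to convert the YMU dataset, assuming the script follows the naming pattern above (generate_ymu_db.py), that the package is checked out in the standard layout, and that the paths below are placeholders to adapt to your setup:
$ cd bob/paper/makeup_aim/misc
$ python generate_ymu_db.py /path/to/YMU-original /path/to/databases/ymu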
The AIM dataset used in this study should be downloaded from Idiap’s server.
For all four datasets, you need to set the dataset path in a configuration file. Bob reads a configuration file (~/.bob_bio_databases.txt) in your home directory to find where the
databases are located. Edit this file manually and specify the path for each of the AIM, YMU, MIW, and MIFS datasets in the following format:
$ cat ~/.bob_bio_databases.txt
[<dataset-name-in-caps>_DIRECTORY] = <path-of-dataset-location>
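For example, following the pattern above with all four datasets stored under /path/to/databases (a placeholder), the file could look like:
$ cat ~/.bob_bio_databases.txt
[AIM_DIRECTORY] = /path/to/databases/aim
[YMU_DIRECTORY] = /path/to/databases/ymu
[MIW_DIRECTORY] = /path/to/databases/miw
[MIFS_DIRECTORY] = /path/to/databases/mifs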
The metadata used for AIM is part of the WMCA dataset, which should also be downloaded from Idiap.
Downloading the face recognition CNN model¶
The pre-trained face recognition (FR) model LightCNN-9 can be downloaded from here, or from its own website.
The location of this model should be stored in the .bobrc file in your $HOME directory, in JSON (key: value) format, as follows:
{
    "LIGHTCNN9_MODEL_DIRECTORY": "<path-of-the-directory>"
}
Only the directory should be specified. Do not include the model name.
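For example, assuming the model file is stored under /path/to/models/lightcnn9 (a placeholder path), the file would read:
$ cat ~/.bobrc
{
    "LIGHTCNN9_MODEL_DIRECTORY": "/path/to/models/lightcnn9"
}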
Setting up annotation directories¶
You should specify the annotation directory for each dataset in the configuration file (~/.bob_bio_databases.txt).
To generate annotations for YMU, MIW, and MIFS datasets, use the script annotate_db.py provided in this package.
The images in the YMU and MIW datasets are already cropped to the face region, so the face detector used in our work
is sometimes unable to localize the facial landmarks required for subsequent alignment. It is therefore a good idea
to pad the face image before detecting the facial landmarks. You should provide this padding width as a parameter to the annotate_db.py script.
The padding is temporary: it does not alter the images stored in the dataset, and the annotations are adjusted to eliminate the effect of the padding.
The command has the following syntax:
$ python bin/annotate_db.py <dataset-directory> <annotation-directory> <padding-width>
Here, <dataset-directory> is the same directory where the generated datasets were stored.
The annotation directory will contain the computed annotations. For each dataset, the path of this directory should be stored in
the configuration file (~/.bob_bio_databases.txt), as in the previous step. The entries should have the following format:
$ cat ~/.bob_bio_databases.txt
[<dataset-name-in-caps>_ANNOTATION_DIRECTORY] = <path-of-annotation-directory>
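For example, continuing with the placeholder paths used above:
$ cat ~/.bob_bio_databases.txt
[YMU_ANNOTATION_DIRECTORY] = /path/to/annotations/ymu
[MIW_ANNOTATION_DIRECTORY] = /path/to/annotations/miw
[MIFS_ANNOTATION_DIRECTORY] = /path/to/annotations/mifs
[AIM_ANNOTATION_DIRECTORY] = /path/to/annotations/aim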
For the experiments conducted in this work, the padding width was set to 25, 25, and 0 for YMU, MIW, and MIFS, respectively.
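With the placeholder paths used above, the three annotation runs would then be:
$ python bin/annotate_db.py /path/to/databases/ymu /path/to/annotations/ymu 25
$ python bin/annotate_db.py /path/to/databases/miw /path/to/annotations/miw 25
$ python bin/annotate_db.py /path/to/databases/mifs /path/to/annotations/mifs 0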
You do not need to compute annotations for the AIM dataset; just set up its annotation directory in the configuration file. The annotations will be computed and stored when the experiment is executed for the first time, and re-used in subsequent runs.