Running baseline experiments on the OULU-NPU and REPLAY-MOBILE databases

This section explains how to run and evaluate the baseline experiments on the OULU-NPU and REPLAY-MOBILE databases.

Note

For the experiments discussed in this section, the OULU-NPU and REPLAY-MOBILE databases need to be downloaded and installed on your system. Please refer to the Executing Baseline Algorithms section of the bob.pad.face package documentation for more details on how to set up the databases and run face PAD experiments.
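For reference, this generation of the bob framework reads the raw-data locations from a plain-text file in your home directory. The sketch below illustrates the idea; the bracketed variable names are assumptions following the usual bob.pad.face convention, so check the package documentation for the exact keys.

# Contents of ~/.bob_bio_databases.txt -- tells bob where the raw databases live.
# NOTE: the bracketed keys below are assumptions; the exact names are listed
# in the bob.pad.face documentation.
[YOUR_OULUNPU_DB_DIRECTORY] = /path/to/OULU-NPU
[YOUR_REPLAY_MOBILE_DB_DIRECTORY] = /path/to/replay-mobile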

Running experiments in the OULU-NPU database

Two baselines are reported in the paper [GM19]: one with IQM-SVM and one with LBP-SVM. Running a pipeline consists of three main steps: preprocessing the data, training the SVM, and computing scores. All three steps are carried out by a single command.

The commands to run the baselines for Protocol_1 are shown here. They can be repeated for all other protocols simply by replacing the protocol name in the launching command.

1. IQM-SVM baseline

The entire pipeline for IQM-SVM on Protocol_1 of the OULU-NPU database can be launched with the following command:

bin/spoof.py \                                              # spoof.py runs the complete PAD pipeline
oulunpu \                                                   # run on the OULU-NPU database
iqm-svm \                                                   # IQM-SVM configuration
--groups train dev eval \                                   # groups to process
--protocol Protocol_1 \                                     # protocol to use
--allow-missing-files \                                     # continue even if some files fail
--grid idiap \                                              # use the grid; only for Idiap users, REMOVE otherwise
--sub-directory <PATH_TO_STORE_IQM_BASELINE_RESULTS>        # define your output path here

Run the script similarly for all protocols in the OULU-NPU dataset: replace Protocol_1 in the command above with one of Protocol_1, Protocol_2, Protocol_3_1, Protocol_3_2, Protocol_3_3, Protocol_3_4, Protocol_3_5, Protocol_3_6, Protocol_4_1, Protocol_4_2, Protocol_4_3, Protocol_4_4, Protocol_4_5, Protocol_4_6. A shell loop covering all protocols is sketched below.
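For convenience, all protocols can be launched in one go with a small shell loop. This is only a sketch based on the command above; it omits --grid idiap (add it back if you are running at Idiap).

# Run the IQM-SVM baseline for every OULU-NPU protocol (sketch).
for protocol in Protocol_1 Protocol_2 \
                Protocol_3_1 Protocol_3_2 Protocol_3_3 Protocol_3_4 Protocol_3_5 Protocol_3_6 \
                Protocol_4_1 Protocol_4_2 Protocol_4_3 Protocol_4_4 Protocol_4_5 Protocol_4_6; do
    bin/spoof.py oulunpu iqm-svm \
        --groups train dev eval \
        --protocol "${protocol}" \
        --allow-missing-files \
        --sub-directory <PATH_TO_STORE_IQM_BASELINE_RESULTS>
done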

2. LBP-SVM baseline

The entire pipeline for LBP-SVM on Protocol_1 of the OULU-NPU database can be launched with the following command:

bin/spoof.py \                                              # spoof.py runs the complete PAD pipeline
oulunpu \                                                   # run on the OULU-NPU database
glbp-svm \                                                  # LBP-SVM configuration
--groups train dev eval \                                   # groups to process
--protocol Protocol_1 \                                     # protocol to use
--allow-missing-files \                                     # continue even if some files fail
--grid idiap \                                              # use the grid; only for Idiap users, REMOVE otherwise
--sub-directory <PATH_TO_STORE_LBP_BASELINE_RESULTS>        # define your output path here

Run the script similarly for all protocols in the OULU-NPU dataset: replace Protocol_1 in the command above with one of Protocol_1, Protocol_2, Protocol_3_1, Protocol_3_2, Protocol_3_3, Protocol_3_4, Protocol_3_5, Protocol_3_6, Protocol_4_1, Protocol_4_2, Protocol_4_3, Protocol_4_4, Protocol_4_5, Protocol_4_6.
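The loop sketched after the IQM-SVM baseline works here as well: replace iqm-svm with glbp-svm and point --sub-directory at <PATH_TO_STORE_LBP_BASELINE_RESULTS>.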

3. Evaluating results of face PAD experiments

The scores obtained can be evaluated with the following commands.

For LBP baselines

bin/scoring_acer.py -df \
<PATH_TO_STORE_LBP_BASELINE_RESULTS>/Protocol_1/scores/scores-dev \
-ef <PATH_TO_STORE_LBP_BASELINE_RESULTS>/Protocol_1/scores/scores-eval \
-l "LBP-SVM" -s results

For IQM baselines

bin/scoring_acer.py -df \
<PATH_TO_STORE_IQM_BASELINE_RESULTS>/Protocol_1/scores/scores-dev \
-ef <PATH_TO_STORE_IQM_BASELINE_RESULTS>/Protocol_1/scores/scores-eval \
-l "IQM-SVM" -s results

Running experiments in the REPLAY-MOBILE database

Only one protocol, grandtest, is used in these experiments. The steps to run the two baselines are described below.

1. IQM-SVM baseline

The entire pipeline for IQM-SVM on the grandtest protocol of the REPLAY-MOBILE database can be launched with the following command:

bin/spoof.py \                                              # spoof.py runs the complete PAD pipeline
replay-mobile \                                             # run on the REPLAY-MOBILE database
iqm-svm \                                                   # IQM-SVM configuration
--groups train dev eval \                                   # groups to process
--protocol grandtest \                                      # protocol to use
--allow-missing-files \                                     # continue even if some files fail
--grid idiap \                                              # use the grid; only for Idiap users, REMOVE otherwise
--sub-directory <PATH_TO_STORE_IQM_BASELINE_RESULTS>        # define your output path here

2. LBP-SVM baseline

The entire pipeline for LBP-SVM on the grandtest protocol of the REPLAY-MOBILE database can be launched with the following command:

bin/spoof.py \                                              # spoof.py runs the complete PAD pipeline
replay-mobile \                                             # run on the REPLAY-MOBILE database
glbp-svm \                                                  # LBP-SVM configuration
--groups train dev eval \                                   # groups to process
--protocol grandtest \                                      # protocol to use
--allow-missing-files \                                     # continue even if some files fail
--grid idiap \                                              # use the grid; only for Idiap users, REMOVE otherwise
--sub-directory <PATH_TO_STORE_LBP_BASELINE_RESULTS>        # define your output path here

3. Evaluating results of face PAD experiments

The scores obtained can be evaluated with the following commands. In this case, HTER is reported instead of ACER.

For LBP baselines

bob pad metrics -e \
<PATH_TO_STORE_LBP_BASELINE_RESULTS>/grandtest/scores/scores-dev \
<PATH_TO_STORE_LBP_BASELINE_RESULTS>/grandtest/scores/scores-eval

For IQM baselines

bob pad metrics -e \
<PATH_TO_STORE_IQM_BASELINE_RESULTS>/grandtest/scores/scores-dev \
<PATH_TO_STORE_IQM_BASELINE_RESULTS>/grandtest/scores/scores-eval