Python API

This section lists all the functionality available in this library for running face presentation attack detection (PAD) experiments.

Database Interfaces

Base classes

class bob.pad.face.database.VideoPadFile(attack_type, client_id, path, file_id=None)

Bases: bob.pad.base.database.PadFile

A simple base class that defines the basic properties of File objects for use in face PAD experiments.

property annotations

Reads the annotations. For this property to work, you need to set the annotation_directory, annotation_extension, and annotation_type attributes of the files when the database’s objects method is called.

Returns

The annotations as a dictionary.

Return type

dict

check_original_directory_and_extension()[source]
property frame_shape

Returns the size of each frame in this database. This implementation assumes all videos and frames have the same shape. It’s best to override this property in your database implementation and return a constant.

Returns

The (Channels, Height, Width) sizes.

Return type

(int, int, int)

property frames

Returns an iterator of frames in the video. If your database video files need to be loaded in a special way, you need to override this property.

Returns

An iterator returning frames of the video.

Return type

collection.Iterator

Raises

RuntimeError – In your database implementation, the original_directory and original_extension attributes of the files need to be set when the database’s objects method is called.

load(directory=None, extension='.avi', frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>)[source]

Loads the video file and returns it in a bob.bio.video.FrameContainer.

Parameters
  • directory (str, optional) – The directory to load the data from.

  • extension (str, optional) – The extension of the file to load.

  • frame_selector (bob.bio.video.FrameSelector, optional) – Which frames to select.

Returns

The loaded frames inside a frame container.

Return type

bob.bio.video.FrameContainer

property number_of_frames
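
Example (a hedged usage sketch, not part of the library documentation). It assumes a concrete PAD database, such as the Replay-Attack one documented below, configured so that original_directory and original_extension are set on its files:

    from bob.pad.face.database.replay import ReplayPadDatabase

    # Hypothetical database location; adjust to your installation.
    db = ReplayPadDatabase(original_directory="/path/to/replayattack")

    # Query one bona-fide file of the development set (see objects() below).
    pad_file = db.objects(groups="dev", purposes="real")[0]

    print(pad_file.frame_shape)        # (#Channels, Height, Width)
    print(pad_file.number_of_frames)   # total number of frames in the video
    for frame in pad_file.frames:      # lazily iterate over the video frames
        pass                           # each frame is a numpy array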

REPLAY-ATTACK Database

class bob.pad.face.database.replay.ReplayPadFile(f)[source]

Bases: bob.pad.face.database.VideoPadFile

A high level implementation of the File class for the REPLAY-ATTACK database.

property frame_shape

Returns the size of each frame in this database.

Returns

The (#Channels, Height, Width) which is (3, 240, 320).

Return type

(int, int, int)

property annotations

Return annotations as a dictionary of dictionaries.

If the file object has an attribute of annotation_directory, it will read annotations from there instead of loading annotations that are shipped with the database.

Returns

annotations – A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

Return type

dict

class bob.pad.face.database.replay.ReplayPadDatabase(protocol='grandtest', original_directory=None, original_extension='.mov', annotation_directory=None, **kwargs)

Bases: bob.pad.base.database.PadDatabase

A high level implementation of the Database class for the REPLAY-ATTACK database.

annotations(f)[source]

Return annotations for a given file object f, which is an instance of ReplayPadFile defined in the HLDI of the Replay-Attack DB. The load() method of ReplayPadFile class (see above) returns a video, therefore this method returns bounding-box annotations for each video frame. The annotations are returned as a dictionary of dictionaries.

Parameters

f (ReplayPadFile) – An instance of ReplayPadFile.

Returns

annotations – A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

Return type

dict
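
Example (a hedged sketch of the documented dictionary structure; db and pad_file are assumed to be defined as in the example above):

    annotations = db.annotations(pad_file)
    # Frame indices are string keys starting at '1'.
    first_frame = annotations['1']
    top, left = first_frame['topleft']          # face bounding box corner
    bottom, right = first_frame['bottomright']  # opposite corner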

property frame_shape

Returns the size of each frame in this database.

Returns

The (#Channels, Height, Width) which is (3, 240, 320).

Return type

(int, int, int)

frames(padfile)[source]

Yields the frames of the padfile one by one.

Parameters

padfile (ReplayPadFile) – The high-level replay pad file

Yields

numpy.array – A frame of the video. The size is (3, 240, 320).

number_of_frames(padfile)[source]

Returns the number of frames in a video file.

Parameters

padfile (ReplayPadFile) – The high-level pad file

Returns

The number of frames.

Return type

int

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns lists of ReplayPadFile objects, which fulfill the given restrictions.

Parameters
  • groups (str or [str]) – The groups of which the clients should be returned. Usually, groups are one or more elements of (‘train’, ‘dev’, ‘eval’)

  • protocol (str) – The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

  • purposes (str or [str]) – The purposes for which File objects should be retrieved. Usually it is either ‘real’ or ‘attack’.

  • model_ids – This parameter is not supported in PAD databases yet

  • **kwargs

Returns

files – A list of ReplayPadFile objects.

Return type

[ReplayPadFile]
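
Example (a hedged usage sketch using only the arguments documented above; db is assumed to be a ReplayPadDatabase instance):

    train_real    = db.objects(groups='train', protocol='grandtest', purposes='real')
    train_attacks = db.objects(groups='train', protocol='grandtest', purposes='attack')
    print(len(train_real), len(train_attacks))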

property original_directory

REPLAY-MOBILE Database

class bob.pad.face.database.replay_mobile.ReplayMobilePadFile(f)[source]

Bases: bob.pad.face.database.VideoPadFile

A high level implementation of the File class for the Replay-Mobile database.

load(directory=None, extension='.mov', frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>)[source]

Overridden version of the load method defined in the VideoPadFile.

Parameters
  • directory (str) – String containing the path to the Replay-Mobile database.

  • extension (str) – Extension of the video files in the Replay-Mobile database.

  • frame_selector (bob.bio.video.FrameSelector) – The frame selector to use.

Returns

video_data – Video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

Return type

bob.bio.video.FrameContainer

property annotations

Reads the annotations. For this property to work, you need to set the annotation_directory, annotation_extension, and annotation_type attributes of the files when the database’s objects method is called.

Returns

The annotations as a dictionary.

Return type

dict

property frames

Returns an iterator of frames in the video. If your database video files need to be loaded in a special way, you need to override this property.

Returns

An iterator returning frames of the video.

Return type

collection.Iterator

Raises

RuntimeError – In your database implementation, the original_directory and original_extension attributes of the files need to be set when the database’s objects method is called.

property number_of_frames

property frame_shape

Returns the size of each frame in this database. This implementation assumes all videos and frames have the same shape. It’s best to override this property in your database implementation and return a constant.

Returns

The (Channels, Height, Width) sizes.

Return type

(int, int, int)

class bob.pad.face.database.replay_mobile.ReplayMobilePadDatabase(protocol='grandtest', original_directory=None, original_extension='.mov', annotation_directory=None, annotation_extension='.json', annotation_type='json', **kwargs)

Bases: bob.pad.base.database.PadDatabase

A high level implementation of the Database class for the Replay-Mobile database.

annotations(f)[source]

Return annotations for a given file object f, which is an instance of ReplayMobilePadFile defined in the HLDI of the Replay-Mobile DB. The load() method of the ReplayMobilePadFile class (see above) returns a video, therefore this method returns bounding-box annotations for each video frame. The annotations are returned as a dictionary of dictionaries.

If self.annotation_directory is not None, it will read the annotations from there.

Parameters

f (ReplayMobilePadFile) – An instance of ReplayMobilePadFile defined above.

Returns

annotations – A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

Return type

dict

property frame_shape

Returns the size of each frame in this database.

Returns

The (#Channels, Height, Width) which is (3, 1280, 720).

Return type

(int, int, int)

frames(padfile)[source]

Yields the frames of the padfile one by one.

Parameters

padfile (ReplayMobilePadFile) – The high-level replay pad file

Yields

numpy.array – A frame of the video. The size is (3, 1280, 720).

number_of_frames(padfile)[source]

Returns the number of frames in a video file.

Parameters

padfile (ReplayMobilePadFile) – The high-level pad file

Returns

The number of frames.

Return type

int

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns lists of ReplayMobilePadFile objects, which fulfill the given restrictions.

Parameters
  • groups (str or [str]) – The groups of which the clients should be returned. Usually, groups are one or more elements of (‘train’, ‘dev’, ‘eval’)

  • protocol (str) – The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

  • purposes (str or [str]) – The purposes for which File objects should be retrieved. Usually it is either ‘real’ or ‘attack’.

  • model_ids – This parameter is not supported in PAD databases yet

  • **kwargs

Returns

files – A list of ReplayMobilePadFile objects.

Return type

[ReplayMobilePadFile]

property original_directory

MSU MFSD Database

class bob.pad.face.database.msu_mfsd.MsuMfsdPadFile(f)[source]

Bases: bob.pad.face.database.VideoPadFile

A high level implementation of the File class for the MSU MFSD database.

load(directory=None, extension=None, frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>)[source]

Overridden version of the load method defined in the VideoPadFile.

Parameters:

directory : str

String containing the path to the MSU MFSD database. Default: None

extension : str

Extension of the video files in the MSU MFSD database. Note: extension value is not used in the code of this method. Default: None

frame_selector : FrameSelector

The frame selector to use.

Returns:

video_data : FrameContainer

Video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

class bob.pad.face.database.msu_mfsd.MsuMfsdPadDatabase(protocol='grandtest', original_directory=None, original_extension=None, annotation_directory=None, annotation_extension='.json', annotation_type='json', **kwargs)

Bases: bob.pad.base.database.PadDatabase

A high level implementation of the Database class for the MSU MFSD database.

annotations(f)[source]

Return annotations for a given file object f, which is an instance of MsuMfsdPadFile defined in the HLDI of the MSU MFSD DB. The load() method of the MsuMfsdPadFile class (see above) returns a video, therefore this method returns bounding-box annotations for each video frame. The annotations are returned as a dictionary of dictionaries.

Parameters:

f : object

An instance of MsuMfsdPadFile defined above.

Returns:

annotations : dict

A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns lists of MsuMfsdPadFile objects, which fulfill the given restrictions.

Keyword parameters:

groups : str

OR a list of strings. The groups of which the clients should be returned. Usually, groups are one or more elements of (‘train’, ‘dev’, ‘eval’)

protocol : str

The protocol for which the clients should be retrieved. Note: this argument is not used in the code, because the objects method of the low-level DB interface of MSU MFSD does not have a protocol argument.

purposes : str

OR a list of strings. The purposes for which File objects should be retrieved. Usually it is either ‘real’ or ‘attack’.

model_ids

This parameter is not supported in PAD databases yet.

Returns:

files : [MsuMfsdPadFile]

A list of MsuMfsdPadFile objects.

property original_directory

Aggregated Database

class bob.pad.face.database.aggregated_db.AggregatedDbPadFile(f)[source]

Bases: bob.pad.face.database.VideoPadFile

A high level implementation of the File class for the Aggregated Database uniting 4 databases: REPLAY-ATTACK, REPLAY-MOBILE, MSU MFSD and Mobio.

encode_file_id(f, n=2000)[source]

Return a modified version of the f.id ensuring uniqueness of the ids across all databases.

Parameters:

f : object

An instance of the File class defined in the low level db interface of Replay-Attack, or Replay-Mobile, or MSU MFSD, or Mobio database, respectively: in the bob.db.replay.models.py file or in the bob.db.replaymobile.models.py file or in the bob.db.msu_mfsd_mod.models.py file or in the bob.db.mobio.models.py file.

n : int

An offset to be added to the file id for different databases is defined as follows: offset = k*n, where k is the database number, k = 0,1,2 in our case. Default: 2000.

Returns:

file_id : int

A modified version of the file id, which is now unique across all databases.
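
The offset scheme described above can be illustrated with plain Python; the values below are hypothetical and only demonstrate the k*n arithmetic:

    n = 2000                  # offset step, as in the default argument
    k = 1                     # database index, e.g. 0, 1 or 2
    original_file_id = 37     # hypothetical id from a low-level interface
    unique_file_id = original_file_id + k * n   # 2037, unique across databases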

encode_file_path(f)[source]

Append the name of the database to the end of the file path separated with “_”.

Parameters:

f : object

An instance of the File class defined in the low level db interface of Replay-Attack, or Replay-Mobile, or MSU MFSD, or Mobio database, respectively: in the bob.db.replay.models.py file or in the bob.db.replaymobile.models.py file or in the bob.db.msu_mfsd_mod.models.py file or in the bob.db.mobio.models.py file.

Returns:

file_path : str

Modified path to the file, with database name appended to the end separated with “_”.

load(directory=None, extension='.mov', frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>)[source]

Overridden version of the load method defined in the VideoPadFile.

Parameters:

directory : str

String containing the paths to all databases used in this aggregated database. The paths are separated with a space.

extension : str

Extension of the video files in the REPLAY-ATTACK and REPLAY-MOBILE databases. The extension of files in MSU MFSD is not taken into account in the HighLevel DB Interface of MSU MFSD. Default: ‘.mov’.

Returns:

video_data : FrameContainer

Video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

class bob.pad.face.database.aggregated_db.AggregatedDbPadDatabase(protocol='grandtest', original_directory=None, original_extension=None, **kwargs)

Bases: bob.pad.base.database.PadDatabase

A high level implementation of the Database class for the Aggregated Database uniting 3 databases: REPLAY-ATTACK, REPLAY-MOBILE and MSU MFSD. The protocols currently supported by this database are listed in the available_protocols argument of this class and summarized below.

Available protocols are:

  1. “grandtest” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases.

  2. “photo-photo-video” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only photo attacks are used for training; ‘dev’ set - only photo attacks are used for threshold tuning; ‘eval’ set - only video attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen video attacks.

  3. “video-video-photo” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only video attacks are used for training; ‘dev’ set - only video attacks are used for threshold tuning; ‘eval’ set - only photo attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen photo attacks.

  4. “grandtest-mobio” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases, plus additional data from the MOBIO dataset in the training set.

  5. “grandtest-train-eval” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘eval’, are available in this protocol. The ‘dev’ set is concatenated to the training data; when the ‘dev’ set is requested, the data of the ‘eval’ set is returned.

  6. “grandtest-train-eval-<num_train_samples>” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘eval’, are available in this protocol. The ‘dev’ set is concatenated to the training data; when the ‘dev’ set is requested, the data of the ‘eval’ set is returned. Moreover, in this protocol the number of training samples <num_train_samples> can be specified; that many samples are uniformly selected from each database (Replay-Attack, Replay-Mobile, MSU MFSD) used in the Aggregated DB. For example, with the protocol “grandtest-train-eval-5”, 5 training samples are selected from Replay-Attack, 5 from Replay-Mobile, and 5 from MSU MFSD, for a total of 15 training samples.

annotations(f)[source]

Return annotations for a given file object f, which is an instance of AggregatedDbPadFile defined in the HLDI of the Aggregated DB. The load() method of the AggregatedDbPadFile class (see above) returns a video, therefore this method returns bounding-box annotations for each video frame. The annotations are returned as a dictionary of dictionaries.

Parameters:

f : object

An instance of AggregatedDbPadFile defined above.

Returns:

annotations : dict

A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

get_files_given_groups(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns 4 lists of files, for the Replay-Attack, Replay-Mobile, MSU MFSD and MOBIO databases, which fulfill the given restrictions. The groups parameter accepts either a single string or a list of strings with multiple groups. Group names are low level; see the low_level_group_names argument of the class for available options.

Keyword parameters:

groups : str

OR a list of strings. The groups of which the clients should be returned. Usually, groups are one or more elements of (‘train’, ‘devel’, ‘test’).

protocol : str

The protocol for which the clients should be retrieved. Available options are defined in the available_protocols argument of the class. So far the following protocols are available:

  1. “grandtest” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases.

  2. “photo-photo-video” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only photo attacks are used for training; ‘dev’ set - only photo attacks are used for threshold tuning; ‘eval’ set - only video attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen video attacks.

  3. “video-video-photo” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only video attacks are used for training; ‘dev’ set - only video attacks are used for threshold tuning; ‘eval’ set - only photo attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen photo attacks.

  4. “grandtest-mobio” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases, plus additional data from the MOBIO dataset in the training set.

  5. “grandtest-train-eval” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘test’, are available in this protocol. The ‘devel’ set is concatenated to the training data; when the ‘devel’ set is requested, the data of the ‘test’ set is returned.

  6. “grandtest-train-eval-<num_train_samples>” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘eval’, are available in this protocol. The ‘dev’ set is concatenated to the training data; when the ‘dev’ set is requested, the data of the ‘eval’ set is returned. Moreover, in this protocol the number of training samples <num_train_samples> can be specified; that many samples are uniformly selected from each database (Replay-Attack, Replay-Mobile, MSU MFSD) used in the Aggregated DB. For example, with the protocol “grandtest-train-eval-5”, 5 training samples are selected from Replay-Attack, 5 from Replay-Mobile, and 5 from MSU MFSD, for a total of 15 training samples.

purposes : str

OR a list of strings. The purposes for which File objects should be retrieved. Usually it is either ‘real’ or ‘attack’.

model_ids

This parameter is not supported in PAD databases yet

Returns:

replay_files : [File]

A list of files corresponding to Replay-Attack database.

replaymobile_files : [File]

A list of files corresponding to Replay-Mobile database.

msu_mfsd_files : [File]

A list of files corresponding to MSU MFSD database.

mobio_files : [File]

A list of files corresponding to MOBIO database or an empty list.

get_files_given_single_group(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns 4 lists of files, for the Replay-Attack, Replay-Mobile, MSU MFSD and MOBIO databases, which fulfill the given restrictions. The groups parameter accepts a single string ONLY, which determines the low-level name of the group; see the low_level_group_names argument of this class for available options.

Parameters:

groups : str

The group of which the clients should be returned. One element of (‘train’, ‘devel’, ‘test’).

protocol : str

The protocol for which the clients should be retrieved. Available options are defined in the available_protocols argument of the class. So far the following protocols are available:

  1. “grandtest” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases.

  2. “photo-photo-video” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only photo attacks are used for training; ‘dev’ set - only photo attacks are used for threshold tuning; ‘eval’ set - only video attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen video attacks.

  3. “video-video-photo” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only video attacks are used for training; ‘dev’ set - only video attacks are used for threshold tuning; ‘eval’ set - only photo attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen photo attacks.

  4. “grandtest-mobio” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases, plus additional data from the MOBIO dataset in the training set.

  5. “grandtest-train-eval” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘test’, are available in this protocol. The ‘devel’ set is concatenated to the training data; when the ‘devel’ set is requested, the data of the ‘test’ set is returned.

  6. “grandtest-train-eval-<num_train_samples>” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘eval’, are available in this protocol. The ‘dev’ set is concatenated to the training data; when the ‘dev’ set is requested, the data of the ‘eval’ set is returned. Moreover, in this protocol the number of training samples <num_train_samples> can be specified; that many samples are uniformly selected from each database (Replay-Attack, Replay-Mobile, MSU MFSD) used in the Aggregated DB. For example, with the protocol “grandtest-train-eval-5”, 5 training samples are selected from Replay-Attack, 5 from Replay-Mobile, and 5 from MSU MFSD, for a total of 15 training samples.

purposes : str

OR a list of strings. The purposes for which File objects should be retrieved. Usually it is either ‘real’ or ‘attack’.

model_ids

This parameter is not supported in PAD databases yet

Returns:

replay_files : [File]

A list of files corresponding to Replay-Attack database.

replaymobile_files : [File]

A list of files corresponding to Replay-Mobile database.

msu_mfsd_files : [File]

A list of files corresponding to MSU MFSD database.

mobio_files : [File]

A list of files corresponding to MOBIO database or an empty list.

get_mobio_files_given_single_group(groups=None, purposes=None)[source]

Get a list of files for the MOBIO database. All files are bona-fide samples and are used only for training; thus, a non-empty list is returned only when groups=’train’ and purposes=’real’. Only one file per client is selected. Files collected at Idiap are excluded from the training set to make sure that identities in the ‘train’ set do not overlap with the ‘devel’ and ‘test’ sets.

Parameters:

groups : str

The group of which the clients should be returned. One element of (‘train’, ‘devel’, ‘test’).

purposes : str

OR a list of strings. The purposes for which File objects should be retrieved. Usually it is either ‘real’ or ‘attack’.

Returns:

mobio_files : [File]

A list of files, as defined in the low level interface of the MOBIO database.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of AggregatedDbPadFile objects, which fulfill the given restrictions.

Keyword parameters:

groups : str

OR a list of strings. The groups of which the clients should be returned. Usually, groups are one or more elements of (‘train’, ‘dev’, ‘eval’)

protocol : str

The protocol for which the clients should be retrieved. Available options are defined in the available_protocols argument of the class. So far the following protocols are available:

  1. “grandtest” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases.

  2. “photo-photo-video” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only photo attacks are used for training; ‘dev’ set - only photo attacks are used for threshold tuning; ‘eval’ set - only video attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen video attacks.

  3. “video-video-photo” - used to test the system on unseen types of attacks. The attacks are split as follows: ‘train’ set - only video attacks are used for training; ‘dev’ set - only video attacks are used for threshold tuning; ‘eval’ set - only photo attacks are used in the final evaluation. The final performance is therefore estimated on previously unseen photo attacks.

  4. “grandtest-mobio” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases, plus additional data from the MOBIO dataset in the training set.

  5. “grandtest-train-eval” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘eval’, are available in this protocol. The ‘dev’ set is concatenated to the training data; when the ‘dev’ set is requested, the data of the ‘eval’ set is returned.

  6. “grandtest-train-eval-<num_train_samples>” - uses all the data available in the Replay-Attack, Replay-Mobile and MSU MFSD databases. Only two groups, ‘train’ and ‘eval’, are available in this protocol. The ‘dev’ set is concatenated to the training data; when the ‘dev’ set is requested, the data of the ‘eval’ set is returned. Moreover, in this protocol the number of training samples <num_train_samples> can be specified; that many samples are uniformly selected from each database (Replay-Attack, Replay-Mobile, MSU MFSD) used in the Aggregated DB. For example, with the protocol “grandtest-train-eval-5”, 5 training samples are selected from Replay-Attack, 5 from Replay-Mobile, and 5 from MSU MFSD, for a total of 15 training samples.

purposes : str

OR a list of strings. The purposes for which File objects should be retrieved. Usually it is either ‘real’ or ‘attack’.

model_ids

This parameter is not supported in PAD databases yet

Returns:

files : [AggregatedDbPadFile]

A list of AggregatedDbPadFile objects.

uniform_select_list_elements(data, n_samples)[source]

Uniformly select N elements from the input data list.

Parameters:

data : []

Input list to select elements from.

n_samples : int

The number of samples to be selected uniformly from the input list.

Returns:

selected_data : []

Selected subset of elements.
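
A hedged sketch of what uniform selection could look like; this is one plausible numpy implementation of the described behaviour, not necessarily the code used in the library:

    import numpy as np

    def uniform_select(data, n_samples):
        # Pick n_samples indices evenly spread over the input list.
        indices = np.linspace(0, len(data) - 1, n_samples).astype(int)
        return [data[i] for i in indices]

    print(uniform_select(list(range(100)), 5))   # [0, 24, 49, 74, 99]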

MIFS Database

class bob.pad.face.database.mifs.MIFSPadFile(client_id, path, attack_type=None, file_id=None)[source]

Bases: bob.pad.face.database.VideoPadFile

A high level implementation of the File class for the MIFS database.

load(directory=None, extension=None, frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>)[source]

Overridden version of the load method defined in the VideoPadFile.

Parameters:

directory : str

String containing the path to the MIFS database. Default: None

extension : str

Extension of the video files in the MIFS database. Default: None

frame_selector : FrameSelector

The frame selector to use.

Returns:

video_data : FrameContainer

Video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

class bob.pad.face.database.mifs.MIFSPadDatabase(protocol='grandtest', original_directory='[YOUR_MIFS_DATABASE_DIRECTORY]', original_extension='.jpg', **kwargs)

Bases: bob.pad.base.database.FileListPadDatabase

A high level implementation of the Database class for the MIFS database.

annotations(f)[source]

Return annotations for a given file object f, which is an instance of MIFSPadFile.

Parameters:

f : object

An instance of MIFSPadFile defined above.

Returns:

annotations : dict

A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

Pre-processors

class bob.pad.face.preprocessor.FaceCropAlign(face_size, rgb_output_flag, use_face_alignment, alignment_type='default', max_image_size=None, face_detection_method=None, min_face_size=None, normalization_function=None, normalization_function_kwargs=None)

Bases: bob.bio.base.preprocessor.Preprocessor

This function is designed to crop / size-normalize / align face in the input image.

The size of the output face is 3 x face_size x face_size pixels, if rgb_output_flag = True, or face_size x face_size if rgb_output_flag = False.

The face can also be aligned using positions of the eyes, only when use_face_alignment = True and face_detection_method is not None.

Both input annotations and automatically determined annotations are supported.

If face_detection_method is not None, the annotations returned by the face detector will be used in the cropping. Currently supported face detectors are listed in the supported_face_detection_method argument of this class.

If face_detection_method is None (Default), the input annotations are used for cropping.

A few quality checks are supported in this function. The quality checks are controlled by these arguments: max_image_size, min_face_size. More details below. Note: max_image_size is only supported when face_detection_method is not None.

Parameters:

face_size : int

The size of the face after normalization.

rgb_output_flag : bool

Return an RGB cropped face if True, otherwise a gray-scale image is returned.

use_face_alignment : bool

If set to True, the face will be aligned using the facial landmarks detected locally. Works only when face_detection_method is not None.

alignment_type : str

Specifies the alignment type to use if use_face_alignment is set to True. Two options are currently implemented: ‘default’, which aligns the face by making the eyes horizontal, and ‘lightcnn’, which aligns the face such that the eye centers and mouth center are mapped to predefined positions. The ‘lightcnn’ option overrides the face_size option, as the required output is always 128x128. This is suitable for use with the LightCNN model.

max_image_size : int

The maximum size of the image to be processed. max_image_size is only supported when face_detection_method is not None. Default: None.

face_detection_method : str

A package to be used for face detection and landmark detection. Options supported by this class: “dlib” and “mtcnn”, which are listed in the self.supported_face_detection_method argument. Default: None.

min_face_size : int

The minimal size of the face in pixels to be processed. Default: None.

normalization_function : function

Function to be applied to the input image before cropping and normalization. For example, type-casting to uint8 format and data normalization, using the facial region only (annotations). The expected signature of the function: normalization_function(image, annotations, **kwargs).

normalization_function_kwargs : dict

Keyword arguments for the normalization_function.
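
Example (a hedged construction sketch using only the parameters documented above; the chosen values are illustrative, not recommended defaults):

    from bob.pad.face.preprocessor import FaceCropAlign

    preprocessor = FaceCropAlign(face_size=64,
                                 rgb_output_flag=False,          # gray-scale output
                                 use_face_alignment=False,
                                 face_detection_method="mtcnn",  # automatic face detection
                                 min_face_size=50)               # quality check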

class bob.pad.face.preprocessor.FrameDifference(number_of_frames=None, min_face_size=50, **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor

This class is designed to compute frame differences for both facial and background regions. A constraint on the minimal face size can be applied to the input video, selecting only the frames in which the face exceeds the threshold. This behavior is controlled by the check_face_size_flag and min_face_size arguments of the class. It is also possible to compute the frame differences for a limited number of frames by specifying the number_of_frames parameter.

Parameters:

number_of_frames : int

The number of frames to extract the frame differences from. If None, all frames of the input video are used. Default: None.

min_face_size : int

The minimal size of the face in pixels. Only valid when check_face_size_flag is set to True. Default: 50.

check_face_size(frame_container, annotations, min_face_size)[source]

Return a FrameContainer containing the frames in which the face size exceeds the specified threshold. The annotations for the selected frames are also returned.

Parameters:

frame_container : FrameContainer

Video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

annotations : dict

A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

min_face_size : int

The minimal size of the face in pixels.

Returns:

selected_frames : FrameContainer

Selected frames stored in the FrameContainer.

selected_annotations : dict

A dictionary containing the annotations for the selected frames. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

comp_face_bg_diff(frames, annotations, number_of_frames=None)[source]

This function computes the frame differences for both facial and background regions. These parameters are computed for number_of_frames frames in the input FrameContainer.

Parameters:

frames : FrameContainer

RGB video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

annotations : dict

A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

number_of_frames : int

The number of frames to use in processing. If None, all frames of the input video are used. Default: None.

Returns:

diff : 2D numpy.ndarray

An array of the size (number_of_frames - 1) x 2. The first column contains frame differences of facial regions. The second column contains frame differences of non-facial/background regions.

eval_background_differences(previous, current, annotations, border=None)[source]

Evaluates the normalized frame difference on the background.

If bounding_box is None or invalid, returns 0.

Parameters:

previous : 2D numpy.ndarray

Previous frame as a gray-scaled image

current : 2D numpy.ndarray

The current frame as a gray-scaled image

annotations : dict

A dictionary containing annotations of the face bounding box. Dictionary must be as follows {'topleft': (row, col), 'bottomright': (row, col)}.

border : int

The border size to consider. If set to None, consider all image from the face location up to the end. Default: None.

Returns:

bg : float

A size normalized integral difference of non-facial regions in two input images.

eval_face_differences(previous, current, annotations)[source]

Evaluates the normalized frame difference on the face region.

If bounding_box is None or invalid, returns 0.

Parameters:

previous : 2D numpy.ndarray

Previous frame as a gray-scaled image

current : 2D numpy.ndarray

The current frame as a gray-scaled image

annotations : dict

A dictionary containing annotations of the face bounding box. Dictionary must be as follows {'topleft': (row, col), 'bottomright': (row, col)}.

Returns:

face : float

A size normalized integral difference of facial regions in two input images.
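
A hedged numpy sketch of the quantity described above (a size-normalized integral difference of the facial region); it illustrates the idea, not the library's exact implementation:

    import numpy as np

    def face_difference(previous, current, annotations):
        row0, col0 = annotations['topleft']
        row1, col1 = annotations['bottomright']
        prev_face = previous[row0:row1, col0:col1].astype(float)
        curr_face = current[row0:row1, col0:col1].astype(float)
        # Integral absolute difference, normalized by the face area.
        return np.abs(curr_face - prev_face).sum() / prev_face.size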

select_annotated_frames(frames, annotations)[source]

Select only annotated frames in the input FrameContainer frames.

Parameters:

frames : FrameContainer

Video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

annotations : dict

A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

Returns:

cleaned_frame_container : FrameContainer

FrameContainer containing the annotated frames only.

cleaned_annotations : dict

A dictionary containing the annotations for each frame in the output video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

class bob.pad.face.preprocessor.VideoSparseCoding(block_size=5, block_length=10, min_face_size=50, norm_face_size=64, dictionary_file_names=[], frame_step=1, extract_histograms_flag=False, method='hist', comp_reconstruct_err_flag=False, **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor, object

This class is designed to compute sparse codes for spatial frontal, spatio-temporal horizontal, and spatio-temporal vertical patches. The codes are computed for all possible stacks of facial images. The maximum possible number of stacks is: (num_of_frames_in_video - block_length). However, this number can be smaller, and is controlled by two arguments of this class: min_face_size and frame_step.

Parameters:

block_size : int

The spatial size of facial patches. Default: 5 .

block_length : int

The temporal length of the stack of facial images / number of frames per stack. Default: 10 .

min_face_size : int

Discard frames with face of the size less than min_face_size. Default: 50 .

norm_face_size : int

The size of the face after normalization. Default: 64 .

dictionary_file_names : [str]

A list of filenames containing the dictionaries. The filenames must be listed in the following order: [file_name_pointing_to_frontal_dictionary, file_name_pointing_to_horizontal_dictionary, file_name_pointing_to_vertical_dictionary]

frame_step : int

Frames are selected for processing with this step. If set to 1, all frames will be processed. Used to speed up the experiments. Default: 1.

extract_histograms_flag : bool

If this flag is set to True the histograms of sparse codes will be computed for all stacks of facial images / samples. In this case an empty feature extractor must be used, because feature vectors (histograms) are already extracted in the preprocessing step.

NOTE: set this flag to True if you want to reduce the amount of memory required to store temporary files. Default: False.

method : str

A method to use in the histogram computation. Two options are available: “mean” and “hist”. This argument is valid only if extract_histograms_flag is set to True. Default: “hist”.

comp_reconstruct_err_flag : bool

If this flag is set to True, the resulting feature vector will be a reconstruction error, not a histogram. Default: False.

comp_hist_of_sparse_codes(sparse_codes, method)[source]

Compute the histograms of sparse codes.

Parameters:

sparse_codes : [[2D numpy.ndarray]]

A list of lists of 2D arrays. Each 2D array contains sparse codes of a particular stack of facial images. The length of internal lists is equal to the number of processed frames. The outer list contains the codes for frontal, horizontal and vertical patches, thus the length of an outer list in the context of this class is 3.

method : str

Name of the method to be used for combining the sparse codes into a single feature vector. Two options are possible: “mean” and “hist”. If “mean” is selected the mean for n_samples dimension is first computed. The resulting vectors for various types of patches are then concatenated into a single feature vector. If “hist” is selected, the values in the input array are first binarized setting all non-zero elements to one. The rest of the process is similar to the “mean” combination method.

Returns:

frame_container : FrameContainer

FrameContainer containing the frames with sparse codes for the frontal, horizontal and vertical patches. Each frame is a 3D array. The dimensionality of array is: (3 x n_samples x n_words_in_the_dictionary).
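
A hedged numpy sketch of the two combination strategies described above (“mean” and “hist”), applied to the codes of one stack of facial images; it only illustrates the described aggregation, not the library's exact code:

    import numpy as np

    def combine_codes(codes_per_patch_type, method="hist"):
        # codes_per_patch_type: 3 arrays (frontal, horizontal, vertical),
        # each of shape (n_samples, n_words_in_dictionary).
        combined = []
        for codes in codes_per_patch_type:
            if method == "hist":
                codes = (codes != 0).astype(float)   # binarize non-zero entries
            combined.append(codes.mean(axis=0))      # average over samples
        return np.concatenate(combined)              # single feature vector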

compute_mse_for_all_patches_types(sparse_codes_list, original_data_list, dictionary_list)[source]

This function computes mean squared errors (MSE) for all types of patches: frontal, horizontal, and vertical. In this case the function compute_patches_mean_squared_errors is called in a loop for all values in the input lists.

Parameters:

sparse_codes_list : [2D numpy.ndarray]

A list with arrays of sparse codes. Each row in the arrays contains a sparse code encoding a vectorized patch of particular type. The dimensionality of the each array: (n_samples x n_words_in_dictionary).

original_data_list : [2D numpy.ndarray]

A list of arrays with original vectorized patches of various types. The dimensionality of the arrays might be different for various types of the patches: (n_samples x n_features_in_patch_of_particular_type).

dictionary_list : [2D numpy.ndarray]

A list of dictionaries with vectorized visual words of various types. The dimensionality of the arrays might be different for various types of the patches: (n_words_in_dictionary x n_features_in_patch_of_particular_type).

Returns:

squared_errors : 2D numpy.ndarray

First row: MSE of features for various types of patches concatenated into a single vector. Second row: The same as above but MSE are sorted for each type of patches. The dimensionality of the array: (2 x n_features_in_patch_of_all_types).

compute_mse_for_all_stacks(video_codes_list, patches_list, dictionary_list)[source]

Call compute_mse_for_all_patches_types for data coming from all stacks of facial images.

Parameters:

video_codes_list : [ [2D numpy.ndarray] ]

A list with frontal_video_codes, horizontal_video_codes, and vertical_video_codes as returned by get_sparse_codes_for_list_of_patches method of this class.

patches_list : [ [2D numpy.ndarray] ]

A list with frontal_patches, horizontal_patches, and vertical_patches as returned by extract_patches_from_blocks method of this class.

dictionary_list : [2D numpy.ndarray]

A list of dictionaries with vectorized visual words of various types. The dimensionality of the arrays might be different for various types of the patches: (n_words_in_dictionary x n_features_in_patch_of_particular_type).

Returns:

squared_errors_list : [2D numpy.ndarray]

A list of squared_errors as returned by compute_mse_for_all_patches_types method of this class.

compute_patches_mean_squared_errors(sparse_codes, original_data, dictionary)[source]

This function computes normalized mean squared errors (MSE) for each feature (column) in the reconstructed array of vectorized patches. The patches are reconstructed given array of sparse codes and a dictionary.

Parameters:

sparse_codes : 2D numpy.ndarray

An array of sparse codes. Each row contains a sparse code encoding a vectorized patch. The dimensionality of the array: (n_samples x n_words_in_dictionary).

original_data : 2D numpy.ndarray

An array with original vectorized patches. The dimensionality of the array: (n_samples x n_features_in_patch).

dictionary : 2D numpy.ndarray

A dictionary with vectorized visual words. The dimensionality of the array: (n_words_in_dictionary x n_features_in_patch).

Returns:

squared_errors : 1D numpy.ndarray

Normalized MSE for each feature across all patches/samples. The dimensionality of the array: (n_features_in_patch, ).
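
A hedged numpy sketch of a per-feature MSE between original patches and their reconstruction from sparse codes; the exact normalization used by the library is not reproduced here:

    import numpy as np

    def patches_mse(sparse_codes, original_data, dictionary):
        # Reconstruct the vectorized patches: (n_samples x n_features_in_patch).
        reconstructed = np.dot(sparse_codes, dictionary)
        # Squared error per feature, averaged across all patches/samples.
        return ((reconstructed - original_data) ** 2).mean(axis=0)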

convert_arrays_to_frame_container(list_of_arrays)[source]

Convert an input list of arrays into Frame Container.

Parameters:

list_of_arrays : [numpy.ndarray]

A list of arrays.

Returns:

frame_container : FrameContainer

FrameContainer containing the feature vectors.

convert_frame_cont_to_grayscale_array(frame_cont)[source]

Convert a color video stored in the frame container into a 3D array of gray-scale frames. The dimensions of the output array are: (n_frames x n_rows x n_cols).

Parameters:

frames : FrameContainer

Video data stored in the FrameContainer, see bob.bio.video.utils.FrameContainer for further details.

Returns:

result_array : 3D numpy.ndarray

A stack of gray-scale frames. The size of the array is (n_frames x n_rows x n_cols).
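
A hedged sketch of the described conversion using plain numpy and ITU-R BT.601 luma weights; the library may use a different gray-scale conversion:

    import numpy as np

    def video_to_grayscale(frames_rgb):
        # frames_rgb: iterable of (3, n_rows, n_cols) arrays.
        gray_frames = []
        for frame in frames_rgb:
            r, g, b = frame.astype(float)
            gray_frames.append(0.299 * r + 0.587 * g + 0.114 * b)
        return np.stack(gray_frames)   # (n_frames, n_rows, n_cols)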

convert_sparse_codes_to_frame_container(sparse_codes)[source]

Convert an input list of lists of 2D arrays / sparse codes into Frame Container. Each frame in the output Frame Container is a 3D array which stacks 3 2D arrays representing particular frame / stack of facial images.

Parameters:

sparse_codes : [[2D numpy.ndarray]]

A list of lists of 2D arrays. Each 2D array contains sparse codes of a particular stack of facial images. The length of internal lists is equal to the number of processed frames. The outer list contains the codes for frontal, horizontal and vertical patches, thus the length of an outer list in the context of this class is 3.

Returns:

frame_container : FrameContainer

FrameContainer containing the frames with sparse codes for the frontal, horizontal and vertical patches. Each frame is a 3D array. The dimensionality of array is: (3 x n_samples x n_words_in_the_dictionary).

crop_norm_face_grayscale(image, annotations, face_size)[source]

This function crops the face in the input Gray-scale image given annotations defining the face bounding box. The size of the face is also normalized to the pre-defined dimensions.

The algorithm is identical to the following paper: “On the Effectiveness of Local Binary Patterns in Face Anti-spoofing”

Parameters:

image : 2D numpy.ndarray

Gray-scale input image.

annotations : dict

A dictionary containing annotations of the face bounding box. Dictionary must be as follows: {'topleft': (row, col), 'bottomright': (row, col)}

face_size : int

The size of the face after normalization.

Returns:

normbbx : 2D numpy.ndarray

Cropped facial image of the size (self.face_size, self.face_size).

crop_norm_faces_grayscale(images, annotations, face_size)[source]

This function crops and normalizes faces in a stack of images given annotations of the face bounding box for the first image in the stack.

Parameters:

images : 3D numpy.ndarray

A stack of gray-scale input images. The size of the array is (n_images x n_rows x n_cols).

annotations : dict

A dictionary containing annotations of the face bounding box. Dictionary must be as follows: {'topleft': (row, col), 'bottomright': (row, col)}

face_size : int

The size of the face after normalization.

Returns:

normbbx : 3D numpy.ndarray

A stack of normalized faces.

extract_patches_from_blocks(all_blocks)[source]

Extract frontal, central-horizontal and central-vertical patches from all blocks returned by get_all_blocks_from_color_channel method of this class. The patches are returned in a vectorized form.

Parameters:

all_blocks : [[3D numpy.ndarray]]

Internal list contains all possible 3D blocks/volumes extracted from a particular stack of facial images. The dimensions of each 3D block: (block_length x block_size x block_size). The number of possible blocks is: (norm_face_size - block_size)^2.

The length of the outer list is equal to the number of possible facial stacks in the input video: (num_of_frames_in_video - block_length). However, the final number of facial volumes might be less than above, because frames with small faces ( < min_face_size ) are discarded.

Returns:

frontal_patches : [2D numpy.ndarray]

Each element in the list contains an array of vectorized frontal patches for the particular stack of facial images. The size of each array is: ( (norm_face_size - block_size)^2 x block_size^2 ). The maximum length of the list is: (num_of_frames_in_video - block_length).

horizontal_patches : [2D numpy.ndarray]

Each element in the list contains an array of vectorized horizontal patches for the particular stack of facial images. The size of each array is: ( (norm_face_size - block_size)^2 x block_length*block_size ). The maximum length of the list is: (num_of_frames_in_video - block_length).

vertical_patches : [2D numpy.ndarray]

Each element in the list contains an array of vectorized vertical patches for the particular stack of facial images. The size of each array is: ( (norm_face_size - block_size)^2 x block_length*block_size ). The maximum length of the list is: (num_of_frames_in_video - block_length).

get_all_blocks_from_color_channel(video, annotations, block_size, block_length, min_face_size, norm_face_size)[source]

Extract all 3D blocks from facial region of the input 3D array. Input 3D array represents one color channel of the video or a gray-scale video. Blocks are extracted from all 3D facial volumes. Facial volumes overlap with a shift of one frame.

The size of the facial volume is: (block_length x norm_face_size x norm_face_size).

The maximum number of available facial volumes in the video: (num_of_frames_in_video - block_length). However the final number of facial volumes might be less than above, because frames with small faces ( < min_face_size ) are discarded.

Parameters:

video : 3D numpy.ndarray

A stack of gray-scale input images. The size of the array is (n_images x n_rows x n_cols).

annotations : dict

A dictionary containing the annotations for each frame in the video. Dictionary structure: annotations = {'1': frame1_dict, '2': frame2_dict, ...}, where frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)} is the dictionary defining the coordinates of the face bounding box in frame N.

block_size : int

The spatial size of facial patches.

block_length : int

The temporal length of the stack of facial images / number of frames per stack.

min_face_size : int

Discard frames with face of the size less than min_face_size.

norm_face_size : int

The size of the face after normalization.

Returns:

all_blocks : [[3D numpy.ndarray]]

Internal list contains all possible 3D blocks/volumes extracted from a particular stack of facial images. The dimensions of each 3D block: (block_length x block_size x block_size). The number of possible blocks is: (norm_face_size - block_size)^2.

The length of the outer list is equal to the number of possible facial stacks in the input video: (num_of_frames_in_video - block_length). However, the final number of facial volumes might be less than above, because frames with small faces ( < min_face_size ) are discarded.

get_sparse_codes_for_list_of_patches(list_of_patches, dictionary)[source]

Compute sparse codes for each array of vectorized patches in the list. This function just calls get_sparse_codes_for_patches method for each element of the input list.

Parameters:

patches : [2D numpy.ndarray]

A list of vectorized patches to be reconstructed. The dimensionality of each array in the list: (n_samples x n_features).

dictionary : 2D numpy.ndarray

A dictionary to use for patch reconstruction. The dimensions are: (n_words_in_dictionary x n_features)

Returns:

video_codes : [2D numpy.ndarray]

A list of arrays with reconstruction sparse codes for each patch. The dimensionality of each array in the list is: (n_samples x n_words_in_the_dictionary).

get_sparse_codes_for_patches(patches, dictionary)[source]

This function computes reconstruction sparse codes for a set of patches, given a dictionary to reconstruct the patches from. The OMP sparse coding algorithm is used. The maximum number of non-zero entries in the sparse code is: num_of_features/5.

Parameters:

patches : 2D numpy.ndarray

Vectorized patches to be reconstructed. The dimensionality is: (n_samples x n_features).

dictionary : 2D numpy.ndarray

A dictionary to use for patch reconstruction. The dimensions are: (n_words_in_dictionary x n_features)

Returns:

codes : 2D numpy.ndarray

An array of reconstruction sparse codes for each patch. The dimensionality is: (n_samples x n_words_in_the_dictionary).
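
A hedged sketch of OMP-based sparse coding using scikit-learn; the library's own implementation may differ, but the relationship between patches, dictionary and codes (including the num_of_features/5 cap on non-zero entries) follows the description above:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def sparse_codes_for_patches(patches, dictionary):
        # patches: (n_samples, n_features); dictionary: (n_words, n_features).
        n_nonzero = max(1, patches.shape[1] // 5)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
        # Solve patches ~= codes @ dictionary, one patch at a time.
        codes = np.array([omp.fit(dictionary.T, p).coef_ for p in patches])
        return codes   # (n_samples, n_words)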

load_array_from_hdf5(file_name)[source]

Load an array from the hdf5 file given name of the file.

Parameters:

file_name : str

Name of the file.

Returns:

data : numpy.ndarray

The loaded array.

load_the_dictionaries(dictionary_file_names)[source]

Load the dictionaries, given the names of the files containing them. The dictionaries are precomputed.

Parameters:

dictionary_file_names : [str]

A list of filenames containing the dictionary. The filenames must be listed in the following order: [file_name_pointing_to_frontal_dictionary, file_name_pointing_to_horizontal_dictionary, file_name_pointing_to_vertical_dictionary]

Returns:

dictionary_frontal : 2D numpy.ndarray

A dictionary to use for reconstruction of frontal patches. The dimensions are: (n_words_in_dictionary x n_features_front)

dictionary_horizontal : 2D numpy.ndarray

A dictionary to use for reconstruction of horizontal patches. The dimensions are: (n_words_in_dictionary x n_features_horizont)

dictionary_vertical : 2D numpy.ndarray

A dictionary to use for reconstruction of vertical patches. The dimensions are: (n_words_in_dictionary x n_features_vert)

mean_std_normalize(features, features_mean=None, features_std=None)[source]

The features in the input 2D array are mean-std normalized. The rows are samples, the columns are features. If features_mean and features_std are provided, then these vectors will be used for normalization. Otherwise, the mean and std of the features are computed on the fly.

Parameters:

features : 2D numpy.ndarray

Array of features to be normalized.

features_mean : 1D numpy.ndarray

Mean of the features. Default: None.

features_std : 2D numpy.ndarray

Standard deviation of the features. Default: None.

Returns:

features_norm : 2D numpy.ndarray

Normalized array of features.

features_mean : 1D numpy.ndarray

Mean of the features.

features_std : 1D numpy.ndarray

Standard deviation of the features.
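
A hedged numpy sketch of the mean-std normalization described above (samples in rows, features in columns):

    import numpy as np

    def mean_std_normalize(features, features_mean=None, features_std=None):
        if features_mean is None:
            features_mean = features.mean(axis=0)
        if features_std is None:
            features_std = features.std(axis=0)
        features_norm = (features - features_mean) / features_std
        return features_norm, features_mean, features_std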

read_data(file_name)[source]

Reads the preprocessed data from file. This method overwrites the read_data() method of the Preprocessor class.

Parameters:

file_name : str

name of the file.

Returns:

frames : bob.bio.video.FrameContainer

Frames stored in the frame container.

select_all_blocks(images, block_size)[source]

Extract all possible 3D blocks from a stack of images.

Parameters:

images : 3D numpy.ndarray

A stack of gray-scale input images. The size of the array is (n_images x n_rows x n_cols).

block_size : int

The spatial size of patches. The size of extracted 3D blocks is: (n_images x block_size x block_size).
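A sketch of the block selection under the shapes stated above (an illustrative re-implementation, not the method itself):

    import numpy

    def select_all_blocks_sketch(images, block_size):
        # images: (n_images x n_rows x n_cols); each block: (n_images x block_size x block_size)
        n_images, n_rows, n_cols = images.shape
        blocks = []
        for row in range(n_rows - block_size + 1):
            for col in range(n_cols - block_size + 1):
                blocks.append(images[:, row:row + block_size, col:col + block_size])
        return blocks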

select_random_patches(frontal_patches, horizontal_patches, vertical_patches, n_patches)[source]

Select random patches given lists of frontal, central-horizontal and central-vertical patches, as returned by extract_patches_from_blocks method of this class.

Parameters:

frontal_patches : [2D numpy.ndarray]

Each element in the list contains an array of vectorized frontal patches for the particular stack of facial images. The size of each array is: ((norm_face_size - block_size)^2 x block_size^2). The maximum length of the list is: (num_of_frames_in_video - block_length).

horizontal_patches : [2D numpy.ndarray]

Each element in the list contains an array of vectorized horizontal patches for the particular stack of facial images. The size of each array is: ((norm_face_size - block_size)^2 x block_length*block_size). The maximum length of the list is: (num_of_frames_in_video - block_length).

vertical_patches : [2D numpy.ndarray]

Each element in the list contains an array of vectorized vertical patches for the particular stack of facial images. The size of each array is: ((norm_face_size - block_size)^2 x block_length*block_size). The maximum length of the list is: (num_of_frames_in_video - block_length).

n_patches : int

Number of randomly selected patches.

Returns:

selected_frontal_patches : [2D numpy.ndarray]

An array of selected frontal patches. The dimensionality of the array: (n_patches x number_of_features).

selected_horizontal_patches : [2D numpy.ndarray]

An array of selected horizontal patches. The dimensionality of the array: (n_patches x number_of_features).

selected_vertical_patches : [2D numpy.ndarray]

An array of selected vertical patches. The dimensionality of the array: (n_patches x number_of_features).
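An illustrative sketch of such a selection; it assumes (which the real method may or may not do) that the same random row indices are used for the frontal, horizontal and vertical patch arrays:

    import numpy

    def select_random_patches_sketch(frontal_patches, horizontal_patches, vertical_patches, n_patches):
        # stack the per-frame arrays and draw n_patches rows at random
        frontal = numpy.vstack(frontal_patches)
        horizontal = numpy.vstack(horizontal_patches)
        vertical = numpy.vstack(vertical_patches)
        idx = numpy.random.choice(frontal.shape[0], n_patches, replace=False)
        return frontal[idx], horizontal[idx], vertical[idx]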

write_data(frames, file_name)[source]

Writes the given data (that has been generated using the __call__ function of this class) to file. This method overwrites the write_data() method of the Preprocessor class.

Parameters:

frames :

data returned by the __call__ method of the class.

file_name : str

name of the file.

class bob.pad.face.preprocessor.VideoFaceCropAlignBlockPatch(preprocessors, channel_names, return_multi_channel_flag=False, block_patch_preprocessor=None, get_face_contour_mask_dict=None, append_mask_flag=False, feature_extractor=None)

Bases: bob.bio.base.preprocessor.Preprocessor, object

This class is designed to first detect, crop and align face in all input channels, and then to extract patches from the ROI in the cropped faces.

The computation flow is the following:

  1. Detect, crop and align facial region in all input channels.

  2. Concatenate all channels forming a single multi-channel video data.

  3. Extract multi-channel patches from the ROI of the multi-channel video data.

  4. Vectorize extracted patches.

Parameters:

preprocessors : dict

A dictionary containing preprocessors for all channels. Dictionary structure is the following: {channel_name_1: bob.bio.video.preprocessor.Wrapper, channel_name_2: bob.bio.video.preprocessor.Wrapper, ...} Note: video, not image, preprocessors are expected.

channel_names : [str]

A list of channel names. Channels will be processed in this order.

return_multi_channel_flag : bool

If this flag is set to True, a multi-channel video data will be returned. Otherwise, patches extracted from ROI of the video are returned. Default: False.

block_patch_preprocessor : object

An instance of the bob.pad.face.preprocessor.BlockPatch class, which is used to extract multi-spectral patches from ROI of the facial region.

get_face_contour_mask_dict : dict or None

Kwargs for the get_face_contour_mask() function. See description of this function for more details. If not None, a binary mask of the face will be computed. Patches outside of the mask are set to zero. Default: None

append_mask_flag : bool

If set to True, the mask will be flattened and concatenated to the output array of patches. NOTE: make sure the extractor is capable of handling this case if you set this flag to True. Default: False

feature_extractor : object

An instance of the feature extractor to be applied to the patches. Default is None, meaning that patches are returned by the preprocessor, and no feature extraction is applied. Defining a feature_extractor instance can be useful, for example, when saving the patches takes too much memory. Note that the feature_extractor must be able to process FrameContainers. Default: None

read_data(file_name)[source]

Reads the preprocessed data from file. This method overwrites the read_data() method of the Preprocessor class.

Parameters:

file_name : str

name of the file.

Returns:

frames : bob.bio.video.FrameContainer

Frames stored in the frame container.

write_data(frames, file_name)[source]

Writes the given data (that has been generated using the __call__ function of this class) to file. This method overwrites the write_data() method of the Preprocessor class.

Parameters:

frames :

data returned by the __call__ method of the class.

file_name : str

name of the file.

class bob.pad.face.preprocessor.BlockPatch(patch_size, step, use_annotations_flag=True)

Bases: bob.bio.base.preprocessor.Preprocessor, object

This class is designed to extract patches from the ROI in the input image. The ROI/block to extract patches from is defined by the top-left and bottom-right coordinates of the bounding box. Patches are extracted at the locations of the nodes of a uniform grid. The size of the grid cell is defined by the step parameter. Patches are square, and the number of extracted patches is equal to the number of nodes. All possible patches will be extracted from the ROI. If the ROI is not defined, the entire image is considered as the ROI.

Parameters:

patch_size : int

The size of the square patch to extract from image. The dimensionality of extracted patches: num_channels x patch_size x patch_size, where num_channels is the number of channels in the input image.

step : int

Defines the size of the cell of the uniform grid to extract patches from. Patches will be extracted from the locations of the grid nodes.

use_annotations_flag : bool

A flag defining if annotations should be used in the call method. If False, patches from the whole image will be extracted. If True, patches from the ROI defined by the annotations will be extracted. Default: True.
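A rough single-channel sketch of the grid-based patch extraction described above (no ROI handling; for illustration only):

    import numpy

    def extract_patches_sketch(image, patch_size, step):
        # image: (n_rows x n_cols); patches are taken at the nodes of a uniform grid
        patches = []
        for row in range(0, image.shape[0] - patch_size + 1, step):
            for col in range(0, image.shape[1] - patch_size + 1, step):
                patches.append(image[row:row + patch_size, col:col + patch_size].flatten())
        return numpy.vstack(patches)  # (n_patches x patch_size**2), vectorized patches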

class bob.pad.face.preprocessor.LiPulseExtraction(indent=10, lambda_=300, window=3, framerate=25, bp_order=32, debug=False, **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor

Extract pulse signal from a video sequence.

The pulse is extracted according to a simplified version of Li’s CVPR 14 algorithm.

It is described in: X. Li, J. Komulainen, G. Zhao, P-C Yuen and M. Pietikäinen, “Generalized face anti-spoofing by detecting pulse from face videos”, Intl Conf on Pattern Recognition (ICPR), 2016

See the documentation of bob.rppg.base

Note that this is a simplified version of the original pulse extraction algorithms (mask detection in each frame instead of tracking, no illumination correction, no motion pruning)

indent

Indent (in percent of the face width) to apply to keypoints to get the mask.

Type

int

lambda_

The lambda value of the detrend filter

Type

int

window

The size of the window of the average filter

Type

int

framerate

The framerate of the video sequence.

Type

int

bp_order

The order of the bandpass filter

Type

int

debug

Plot some stuff

Type

bool

class bob.pad.face.preprocessor.Chrom(skin_threshold=0.5, skin_init=False, framerate=25, bp_order=32, window_size=0, motion=0.0, debug=False, **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor, object

Extract pulse signal from a video sequence.

The pulse is extracted according to the CHROM algorithm.

See the documentation of bob.rppg.base

skin_threshold

The threshold for skin color probability

Type

float

skin_init

If you want to re-initialize the skin color distribution at each frame

Type

bool

framerate

The framerate of the video sequence.

Type

int

bp_order

The order of the bandpass filter

Type

int

window_size

The size of the window in the overlap-add procedure.

Type

int

motion

The percentage of frames you want to select where the signal is “stable”. 0 means the whole sequence.

Type

float

debug

Plot some stuff

Type

bool

skin_filter

The skin color filter

Type

bob.ip.skincolorfilter.SkinColorFilter

class bob.pad.face.preprocessor.SSR(skin_threshold=0.5, skin_init=False, stride=25, debug=False, **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor, object

Extract pulse signal from a video sequence.

The pulse is extracted according to the SSR algorithm.

See the documentation of bob.rppg.base

skin_threshold

The threshold for skin color probability

Type

float

skin_init

If you want to re-initialize the skin color distribution at each frame

Type

bool

stride

The temporal stride.

Type

int

debug

Plot some stuff

Type

boolean

skin_filter

The skin color filter

Type

bob.ip.skincolorfilter.SkinColorFilter

class bob.pad.face.preprocessor.PPGSecure(framerate=25, bp_order=32, debug=False, **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor

This class extracts the pulse signal from a video sequence.

The pulse is extracted according to what is described in the following article:

E.M Nowara, A. Sabharwal and A. Veeraraghavan, “PPGSecure: Biometric Presentation Attack Detection using Photoplethysmograms”, IEEE Intl Conf. on Automatic Face and Gesture Recognition, 2017.

framerate

The framerate of the video sequence.

Type

int

bp_order

The order of the bandpass filter

Type

int

debug

Plot some stuff

Type

bool

class bob.pad.face.preprocessor.ImagePatches(block_size, block_overlap=(0, 0), n_random_patches=None, **kwargs)

Bases: bob.bio.base.preprocessor.Preprocessor

Extracts patches of images and returns them in a FrameContainer. You need to wrap the further blocks (extractor and algorithm) that come after this in bob.bio.video wrappers.

class bob.pad.face.preprocessor.VideoPatches(block_size, block_overlap=(0, 0), n_random_patches=None, normalizer=None, **kwargs)

Bases: bob.bio.video.preprocessor.Wrapper

Extracts patches of images from video containers and returns them in a FrameContainer.

Feature Extractors

class bob.pad.face.extractor.LBPHistogram(lbptype='uniform', elbptype='regular', rad=1, neighbors=8, circ=False, dtype=None, n_hor=1, n_vert=1)

Bases: bob.bio.base.extractor.Extractor

Calculates a normalized LBP histogram over an image. These features are implemented based on [CAM12].

Parameters
  • lbptype (str) – The type of the LBP operator (regular, uniform or riu2)

  • elbptype (str) – The type of extended version of LBP (regular if not extended version is used, otherwise transitional, direction_coded or modified)

  • rad (float) – The radius of the circle on which the points are taken (for circular LBP)

  • neighbors (int) – The number of points around the central point on which LBP is computed (4, 8, 16)

  • circ (bool) – True if circular LBP is needed, False otherwise

  • n_hor (int) – Number of blocks horizontally for spatially-enhanced LBP/MCT histograms. Default: 1

  • n_vert – Number of blocks vertically for spatially-enhanced LBP/MCT histograms. Default: 1

dtype

If a dtype is specified in the constructor, it is assured that the resulting features have that dtype.

Type

numpy.dtype

lbp

The LBP extractor object.

Type

bob.ip.base.LBP

comp_block_histogram(data)[source]

Extracts LBP/MCT histograms from a gray-scale image/block.

Takes data of arbitrary dimensions and linearizes it into a 1D vector; then calculates the histogram, enforcing the data type if desired.

Parameters

data (numpy.ndarray) – The preprocessed data to be transformed into one vector.

Returns

The extracted feature vector, of the desired dtype (if specified)

Return type

1D numpy.ndarray
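A hedged usage sketch: bob.bio.base extractors are callable on preprocessed data, so extracting a spatially-enhanced histogram could look roughly as follows (the random array below is only a stand-in for a real gray-scale face crop):

    import numpy
    from bob.pad.face.extractor import LBPHistogram

    extractor = LBPHistogram(lbptype='uniform', neighbors=8, n_hor=2, n_vert=2)
    face = numpy.random.randint(0, 256, (64, 64)).astype('float64')  # stand-in for a cropped face
    feature = extractor(face)  # concatenation of the per-block normalized histograms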

class bob.pad.face.extractor.ImageQualityMeasure(galbally=True, msu=True, dtype=None, **kwargs)

Bases: bob.bio.base.extractor.Extractor

This class is designed to extract Image Quality Measures given an input RGB image. For further documentation and description of features, see “bob.ip.qualitymeasure”.

Parameters:

galbally : bool

If True, galbally features will be added to the features. Default: True.

msu : bool

If True, MSU features will be added to the features. Default: True.

dtype : numpy.dtype

The data type of the resulting feature vector. Default: None.

class bob.pad.face.extractor.FrameDiffFeatures(window_size, overlap=0)

Bases: bob.bio.base.extractor.Extractor

This class is designed to extract features describing frame differences.

The class computes the following features within a window of the length defined by the window_size argument:

  1. The minimum value observed on the cluster

  2. The maximum value observed on the cluster

  3. The mean value observed

  4. The standard deviation on the cluster (unbiased estimator)

  5. The DC ratio (D) as defined by:

\[D(N) = (\sum_{i=1}^N{|FFT_i|}) / (|FFT_0|)\]

Parameters:

window_size : int

The size of the window to use for feature computation.

overlap : int

Determines the window overlapping; this number has to be between 0 (no overlapping) and ‘window-size’-1. Default: 0.

cluster_5quantities(arr, window_size, overlap)[source]

Calculates the clustered values as described in the paper: Counter-Measures to Photo Attacks in Face Recognition: a public database and a baseline, Anjos & Marcel, IJCB’11.

This script will output a number of clustered observations containing the 5 described quantities for windows of a configurable size (N):

  1. The minimum value observed on the cluster

  2. The maximum value observed on the cluster

  3. The mean value observed

  4. The standard deviation on the cluster (unbiased estimator)

  5. The DC ratio (D) as defined by:

\[D(N) = (\sum_{i=1}^N{|FFT_i|}) / (|FFT_0|)\]

Note

We always ignore the first entry from the input array as, by definition, it is always zero.

Parameters:

arr : 1D numpy.ndarray

A 1D array containing frame differences.

window_size : int

The size of the window to use for feature computation.

overlap : int

Determines the window overlapping; this number has to be between 0 (no overlapping) and ‘window-size’-1.

Returns:

retval : 2D numpy.ndarray

Array of features without nan samples. Rows - samples, columns - features. Here sample corresponds to features computed from the particular window of the length window_size.
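An illustrative re-implementation of the windowed statistics described above (NaN handling and other details of the real method are omitted):

    import numpy

    def cluster_5quantities_sketch(arr, window_size, overlap):
        step = window_size - overlap
        rows = []
        for start in range(1, len(arr) - window_size + 1, step):  # the first entry is ignored
            w = arr[start:start + window_size]
            spectrum = numpy.abs(numpy.fft.fft(w))
            rows.append([w.min(), w.max(), w.mean(), w.std(ddof=1),
                         spectrum[1:].sum() / spectrum[0]])       # D(N) as defined above
        return numpy.array(rows)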

comp_features(data, window_size, overlap)[source]

This function computes features for frame differences in the facial and non-facial regions.

Parameters:

data : 2D numpy.ndarray

An input array of frame differences in facial and non-facial regions. The first column contains frame differences of facial regions. The second column contains frame differences of non-facial/background regions.

window_size : int

The size of the window to use for feature computation.

overlap : int

Determines the window overlapping; this number has to be between 0 (no overlapping) and ‘window-size’-1. Default: 0.

Returns:

frames : FrameContainer

Features describing frame differences, stored in the FrameContainer.

convert_arr_to_frame_cont(data)[source]

This function converts an array of samples into a FrameContainer, where each frame stores features of a particular sample.

Parameters:

data : 2D numpy.ndarray

An input array of features of the size (Nr. of samples X Nr. of features).

Returns:

frames : FrameContainer

Resulting FrameContainer, where each frame stores features of a particular sample.

dcratio(arr)[source]

Calculates the DC ratio as defined by the following formula:

\[D(N) = (\sum_{i=1}^N{|FFT_i|}) / (|FFT_0|)\]

Parameters:

arr : 1D numpy.ndarray

A 1D array containing frame differences.

Returns:

dcratio : float

Calculated DC ratio.

read_feature(file_name)[source]

Reads the preprocessed data from file. This method overwrites the read_data() method of the Extractor class.

Parameters:

file_name : str

Name of the file.

Returns:

frames : bob.bio.video.FrameContainer

Frames stored in the frame container.

remove_nan_rows(data)[source]

This function removes rows of nan’s from the input array. If the input array contains nan’s only, then an array of ones of the size (1 x n_features) is returned.

Parameters:

data : 2D numpy.ndarray

An input array of features. Rows - samples, columns - features.

Returns:

ret_arr : 2D numpy.ndarray

Array of features without nan samples. Rows - samples, columns - features.

write_feature(frames, file_name)[source]

Writes the given data (that has been generated using the __call__ function of this class) to file. This method overwrites the write_data() method of the Extractor class.

Parameters:

frames :

Data returned by the __call__ method of the class.

file_name : str

Name of the file.

class bob.pad.face.extractor.LiSpectralFeatures(framerate=25, nfft=512, debug=False, **kwargs)

Bases: bob.bio.base.extractor.Extractor, object

Compute features from pulse signals in the three color channels.

The features are described in the following article:

X. Li, J. Komulainen, G. Zhao, P-C Yuen and M. Pietikainen, Generalized Face Anti-spoofing by Detecting Pulse From Face Videos Intl Conf. on Pattern Recognition (ICPR), 2016.

framerate

The sampling frequency of the signal (i.e. the framerate)

Type

int

nfft

Number of points to compute the FFT

Type

int

debug

Plot stuff

Type

bool

class bob.pad.face.extractor.LTSS(window_size=25, framerate=25, nfft=64, concat=False, debug=False, time=0, **kwargs)

Bases: bob.bio.base.extractor.Extractor, object

Compute Long-term spectral statistics of a pulse signal.

The features are described in the following article:

H. Muckenhirn, P. Korshunov, M. Magimai-Doss, and S. Marcel Long-Term Spectral Statistics for Voice Presentation Attack Detection, IEEE/ACM Trans. Audio, Speech and Language Processing. vol 25, n. 11, 2017

window_size

The size of the window where FFT is computed

Type

int

framerate

The sampling frequency of the signal (i.e. the framerate)

Type

int

nfft

Number of points to compute the FFT

Type

int

debug

Plot stuff

Type

bool

concat

Flag if you would like to concatenate features from the 3 color channels

Type

bool

time

The length of the signal to consider (in seconds)

Type

int

class bob.pad.face.extractor.PPGSecure(framerate=25, nfft=32, debug=False, **kwargs)

Bases: bob.bio.base.extractor.Extractor, object

Extract frequency spectra from pulse signals.

The features are extracted according to what is described in the following article:

E.M Nowara, A. Sabharwal and A. Veeraraghavan, “PPGSecure: Biometric Presentation Attack Detection using Photoplethysmograms”, IEEE Intl Conf. on Automatic Face and Gesture Recognition, 2017.

framerate

The sampling frequency of the signal (i.e. the framerate)

Type

int

nfft

Number of points to compute the FFT

Type

int

debug

Plot stuff

Type

bool

Matching Algorithms

class bob.pad.base.algorithm.Algorithm(performs_projection=False, requires_projector_training=True, **kwargs)

Bases: object

This is the base class for all anti-spoofing algorithms. It defines the minimum requirements for all derived algorithm classes.

Call the constructor in derived class implementations. If your derived algorithm performs feature projection, please register this here. If it needs training for the projector, please set this here, too.

Parameters:

performs_projection : bool

Set to True if your derived algorithm performs a projection. Also implement the project() function, and the load_projector() if necessary.

requires_projector_training : bool

Only valid, when performs_projection = True. Set this flag to False, when the projection is applied, but the projector does not need to be trained.

kwargs : key=value pairs

A list of keyword arguments to be written in the __str__ function.
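A minimal sketch of a derived algorithm and how these flags are registered (a hypothetical class, shown only to illustrate the constructor contract):

    from bob.pad.base.algorithm import Algorithm

    class MeanScore(Algorithm):
        """Hypothetical PAD algorithm: no projection, the score is the mean feature value."""

        def __init__(self):
            super(MeanScore, self).__init__(performs_projection=False,
                                            requires_projector_training=False)

        def score(self, toscore):
            # by convention, higher scores correspond to bona-fide (real) samples
            return float(toscore.mean())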

load_projector(projector_file)[source]

Loads the parameters required for feature projection from file. This function usually is useful in combination with the train_projector() function. In this base class implementation, it does nothing.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from.

project(feature) → projected[source]

This function will project the given feature. It must be overwritten by derived classes, as soon as performs_projection = True was set in the constructor. It is assured that the load_projector() was called once before the project function is executed.

Parameters:

feature : object

The feature to be projected.

Returns:

projected : object

The projected features. Must be writable with the write_feature() function and readable with the read_feature() function.

read_feature(feature_file) → feature[source]

Reads the projected feature from file. In this base class implementation, it uses bob.io.base.load() to do that. If you have different format, please overwrite this function.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

feature_file : str or bob.io.base.HDF5File

The file open for reading, or the file name to read from.

Returns:

feature : object

The feature that was read from file.

score(toscore) → score[source]

This function will compute the score for the given object toscore. It must be overwritten by derived classes.

Parameters:

toscore : object

The object to compute the score for. This will be the output of extractor if performs_projection is False, otherwise this will be the output of project method of the algorithm.

Returns:

score : float

A score value for the object toscore.

score_for_multiple_projections(toscore)[source]

score_for_multiple_projections(toscore) → score

This function will compute the score for a list of objects in toscore. It must be overwritten by derived classes.

Parameters:

toscore : [object]

A list of objects to compute the score for.

Returns:

score : float

A score value for the object toscore.

train_projector(training_features, projector_file)[source]

This function can be overwritten to train the feature projector. If you do this, please also register the function by calling this base class constructor and enabling the training by requires_projector_training = True.

Parameters:

training_features : [object] or [[object]]

A list of extracted features that can be used for training the projector. Features will be provided in a single list

projector_file : str

The file to write. This file should be readable with the load_projector() function.

write_feature(feature, feature_file)[source]

Saves the given projected feature to a file with the given name. In this base class implementation:

  • If the given feature has a save attribute, it calls feature.save(bob.io.base.HDF5File(feature_file), 'w'). In this case, the given feature_file might be either a file name or a bob.io.base.HDF5File.

  • Otherwise, it uses bob.io.base.save() to do that.

If you have a different format, please overwrite this function.

Please register ‘performs_projection = True’ in the constructor to enable this function.

Parameters:

feature : object

A feature as returned by the project() function, which should be written.

feature_file : str or bob.io.base.HDF5File

The file open for writing, or the file name to write to.

class bob.pad.base.algorithm.SVM(machine_type='C_SVC', kernel_type='RBF', n_samples=10000, trainer_grid_search_params={'cost': [0.03125, 0.125, 0.5, 2, 8, 32, 128, 512, 2048, 8192, 32768], 'gamma': [3.0517578125e-05, 0.0001220703125, 0.00048828125, 0.001953125, 0.0078125, 0.03125, 0.125, 0.5, 2, 8]}, mean_std_norm_flag=False, frame_level_scores_flag=False, save_debug_data_flag=True, reduced_train_data_flag=False, n_train_samples=50000)

Bases: bob.pad.base.algorithm.Algorithm

This class is designed to train an SVM given features (either numpy arrays or Frame Containers) from real and attack classes. The trained SVM is then used to classify the testing data as either real or attack. The SVM is trained in two stages. First, the best parameters for the SVM are estimated using train and cross-validation subsets. The size of the subsets used in hyper-parameter tuning is defined by the n_samples parameter of this class. Once the best parameters are determined, the SVM machine is trained using the complete training set.

Parameters:

machine_type : str

A type of the SVM machine. Please check bob.learn.libsvm for more details. Default: ‘C_SVC’.

kernel_type : str

A type of kernel for the SVM machine. Please check bob.learn.libsvm for more details. Default: ‘RBF’.

n_samples : int

Number of uniformly selected feature vectors per class defining the sizes of sub-sets used in the hyper-parameter grid search.

trainer_grid_search_params : dict

Dictionary containing the hyper-parameters of the SVM to be tested in the grid-search. Default: {‘cost’: [2**p for p in range(-5, 16, 2)], ‘gamma’: [2**p for p in range(-15, 4, 2)]}.

mean_std_norm_flag : bool

Perform mean-std normalization of data if set to True. Default: False.

frame_level_scores_flag : bool

Return scores for each frame individually if True. Otherwise, return a single score per video. Should be used only when features are in Frame Containers. Default: False.

save_debug_data_flag : bool

Save the data, which might be useful for debugging, if True. Default: True.

reduced_train_data_flag : bool

Reduce the amount of final training samples if set to True. Default: False.

n_train_samples : int

Number of uniformly selected feature vectors per class defining the sizes of sub-sets used in the final training of the SVM. Default: 50000.
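The default grid listed in the signature is simply powers of two; an equivalent way of writing it is:

    # equivalent to the defaults shown in the signature above
    trainer_grid_search_params = {
        'cost':  [2 ** p for p in range(-5, 16, 2)],
        'gamma': [2 ** p for p in range(-15, 4, 2)],
    }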

comp_prediction_precision(machine, real, attack)[source]

This function computes the precision of the predictions as a ratio of correctly classified samples to the total number of samples.

Parameters:

machine : object

A pre-trained SVM machine.

real : 2D numpy.ndarray

Array of features representing the real class.

attack : 2D numpy.ndarray

Array of features representing the attack class.

Returns:

precision : float

The precision of the predictions.

load_projector(projector_file)[source]

Load the pretrained projector/SVM from file to perform a feature projection. This function usually is useful in combination with the train_projector() function.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from.

project(feature)[source]

This function computes class probabilities for the input feature using pretrained SVM. The feature in this case is a Frame Container with features for each frame. The probabilities will be computed and returned for each frame.

Set performs_projection = True in the constructor to enable this function. It is assured that the load_projector() was called before the project function is executed.

Parameters:

feature : object

A Frame Container containing the features of an individual, see bob.bio.video.utils.FrameContainer.

Returns:

probabilities : 1D or 2D numpy.ndarray

2D in the case of two-class SVM. An array containing class probabilities for each frame. First column contains probabilities for each frame being a real class. Second column contains probabilities for each frame being an attack class. 1D in the case of one-class SVM. Vector with scores for each frame defining belonging to the real class. Must be writable with the write_feature function and readable with the read_feature function.

score(toscore)[source]

Returns a probability of a sample being a real class.

Parameters:

toscore : 1D or 2D numpy.ndarray

2D in the case of two-class SVM. An array containing class probabilities for each frame. First column contains probabilities for each frame being a real class. Second column contains probabilities for each frame being an attack class. 1D in the case of one-class SVM. Vector with scores for each frame defining belonging to the real class.

Returns:

score : float or a 1D numpy.ndarray

If frame_level_scores_flag = False a single score is returned. One score per video. Score is a probability of a sample being a real class. If frame_level_scores_flag = True a 1D array of scores is returned. One score per frame. Score is a probability of a sample being a real class.

score_for_multiple_projections(toscore)[source]

Returns a list of scores computed by the score method of this class.

Parameters:

toscore : 1D or 2D numpy.ndarray

2D in the case of two-class SVM. An array containing class probabilities for each frame. First column contains probabilities for each frame being a real class. Second column contains probabilities for each frame being an attack class. 1D in the case of one-class SVM. Vector with scores for each frame defining belonging to the real class.

Returns:

list_of_scores : [float]

A list containing the scores.

train_projector(training_features, projector_file)[source]

Train SVM feature projector and save the trained SVM to a given file. The requires_projector_training = True flag must be set to True to enable this function.

Parameters:

training_features : [[FrameContainer], [FrameContainer]]

A list containing two elements: [0] - a list of Frame Containers with feature vectors for the real class; [1] - a list of Frame Containers with feature vectors for the attack class.

projector_file : str

The file to save the trained projector to. This file should be readable with the load_projector() function.

train_svm(training_features, n_samples=10000, machine_type='C_SVC', kernel_type='RBF', trainer_grid_search_params={'cost': [0.03125, 0.125, 0.5, 2, 8, 32, 128, 512, 2048, 8192, 32768], 'gamma': [3.0517578125e-05, 0.0001220703125, 0.00048828125, 0.001953125, 0.0078125, 0.03125, 0.125, 0.5, 2, 8]}, mean_std_norm_flag=False, projector_file='', save_debug_data_flag=True, reduced_train_data_flag=False, n_train_samples=50000)[source]

First, this function tunes the hyper-parameters of the SVM classifier using grid search on the sub-sets of training data. Train and cross-validation subsets for both classes are formed from the available input training_features.

Once the best parameters are determined, the SVM is trained on the whole training data set. The resulting machine is returned by the function.

Parameters:

training_features : [[FrameContainer], [FrameContainer]]

A list containing two elements: [0] - a list of Frame Containers with feature vectors for the real class; [1] - a list of Frame Containers with feature vectors for the attack class.

n_samples : int

Number of uniformly selected feature vectors per class defining the sizes of sub-sets used in the hyper-parameter grid search.

machine_type : str

A type of the SVM machine. Please check bob.learn.libsvm for more details.

kernel_type : str

A type of kernel for the SVM machine. Please check bob.learn.libsvm for more details.

trainer_grid_search_params : dict

Dictionary containing the hyper-parameters of the SVM to be tested in the grid-search.

mean_std_norm_flag : bool

Perform mean-std normalization of data if set to True. Default: False.

projector_file : str

The name of the file to save the trained projector to. Only the path of this file is used in this function. The file debug_data.hdf5 will be saved in this path. This file contains information which might be useful for debugging.

save_debug_data_flag : bool

Save the data, which might be useful for debugging, if True. Default: True.

reduced_train_data_flag : bool

Reduce the amount of final training samples if set to True. Default: False.

n_train_samples : int

Number of uniformly selected feature vectors per class defining the sizes of sub-sets used in the final training of the SVM. Default: 50000.

Returns:

machine : object

A trained SVM machine.

class bob.pad.base.algorithm.OneClassGMM(n_components=1, random_state=3, frame_level_scores_flag=False, covariance_type='full', reg_covar=1e-06, normalize_features=False)

Bases: bob.pad.base.algorithm.Algorithm

This class is designed to train a OneClassGMM based PAD system. The OneClassGMM is trained using data of one class (real class) only. The procedure is the following:

  1. First, the training data is mean-std normalized using mean and std of the real class only.

  2. Second, the OneClassGMM with n_components Gaussians is trained using samples of the real class.

  3. The input features are next classified using pre-trained OneClassGMM machine.

Parameters:

n_components : int

Number of Gaussians in the OneClassGMM. Default: 1 .

random_state : int

A seed for the random number generator used in the initialization of the OneClassGMM. Default: 3 .

frame_level_scores_flag : bool

Return scores for each frame individually if True. Otherwise, return a single score per video. Default: False.
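A sketch of the three steps above using sklearn.mixture (the module this class relies on, per the methods below); the random arrays are placeholders for real features and the exact arguments may differ from the actual implementation:

    import numpy
    from sklearn.mixture import GaussianMixture

    real = numpy.random.rand(1000, 10)                   # features of the real class only
    mean, std = real.mean(axis=0), real.std(axis=0)
    real_norm = (real - mean) / std                      # 1. mean-std normalization
    gmm = GaussianMixture(n_components=1, covariance_type='full',
                          reg_covar=1e-6, random_state=3).fit(real_norm)   # 2. train the GMM
    test = (numpy.random.rand(5, 10) - mean) / std
    scores = gmm.score_samples(test)                     # 3. weighted log probabilities as scores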

load_gmm_machine_and_mean_std(projector_file)[source]

Loads the machine, features mean and std from the hdf5 file. The absolute name of the file is specified in projector_file string.

Parameters:

projector_file : str

Absolute name of the file to load the trained projector from, as returned by bob.pad.base framework.

Returns:

machine : object

The loaded OneClassGMM machine. As returned by sklearn.mixture module.

features_mean : 1D numpy.ndarray

Mean of the features.

features_std : 1D numpy.ndarray

Standard deviation of the features.

load_projector(projector_file)[source]

Loads the machine, features mean and std from the hdf5 file. The absolute name of the file is specified in projector_file string.

This function sets the arguments self.machine, self.features_mean and self.features_std of this class with loaded machines.

The function must be capable of reading the data saved with the train_projector() method of this class.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from, as returned by the bob.pad.base framework. In this class the names of the files to read the projectors from are modified, see load_machine and load_cascade_of_machines methods of this class for more details.

project(feature)[source]

This function computes a vector of scores for each sample in the input array of features. The following steps are applied:

  1. First, the input data is mean-std normalized using mean and std of the real class only.

  2. The input features are next classified using pre-trained OneClassGMM machine.

Set performs_projection = True in the constructor to enable this function. It is assured that the load_projector() was called before the project function is executed.

Parameters:

feature : FrameContainer or 2D numpy.ndarray

Two types of inputs are accepted. A Frame Container containing the features of an individual, see bob.bio.video.utils.FrameContainer. Or a 2D feature array of the size (N_samples x N_features).

Returns:

scores : 1D numpy.ndarray

Vector of scores. Scores for the real class are expected to be higher, than the scores of the negative / attack class. In this case scores are the weighted log probabilities.

save_gmm_machine_and_mean_std(projector_file, machine, features_mean, features_std)[source]

Saves the OneClassGMM machine, features mean and std to the hdf5 file. The absolute name of the file is specified in projector_file string.

Parameters:

projector_file : str

Absolute name of the file to save the data to, as returned by bob.pad.base framework.

machine : object

The OneClassGMM machine to be saved. As returned by the sklearn.mixture module.

features_mean : 1D numpy.ndarray

Mean of the features.

features_std : 1D numpy.ndarray

Standard deviation of the features.

score(toscore)[source]

Returns a probability of a sample being a real class.

Parameters:

toscore : 1D numpy.ndarray

Vector with scores for each frame/sample defining the probability of the frame being a sample of the real class.

Returns:

score : [float]

If frame_level_scores_flag = False a single score is returned. One score per video. This score is placed into a list, because the score must be an iterable. Score is a probability of a sample being a real class. If frame_level_scores_flag = True a list of scores is returned. One score per frame/sample.

train_gmm(real)[source]

Train OneClassGMM classifier given real class. Prior to the training the data is mean-std normalized.

Parameters:

real : 2D numpy.ndarray

Training features for the real class.

Returns:

machine : object

A trained OneClassGMM machine.

features_mean : 1D numpy.ndarray

Mean of the features.

features_std : 1D numpy.ndarray

Standard deviation of the features.

train_projector(training_features, projector_file)[source]

Train OneClassGMM for feature projection and save it to file. The requires_projector_training = True flag must be set to True to enable this function.

Parameters:

training_features : [[FrameContainer], [FrameContainer]]

A list containing two elements: [0] - a list of Frame Containers with feature vectors for the real class; [1] - a list of Frame Containers with feature vectors for the attack class.

projector_file : str

The file to save the trained projector to, as returned by the bob.pad.base framework.

class bob.pad.base.algorithm.OneClassGMM2(number_of_gaussians, kmeans_training_iterations=25, gmm_training_iterations=25, training_threshold=0.0005, variance_threshold=0.0005, update_weights=True, update_means=True, update_variances=True, n_threads=40, preprocessor=None, **kwargs)

Bases: bob.pad.base.algorithm.Algorithm

A one class GMM implementation based on Bob’s GMM implementation which is more stable than scikit-learn’s one.

load_projector(projector_file)[source]

Loads the parameters required for feature projection from file. This function usually is useful in combination with the train_projector() function. In this base class implementation, it does nothing.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from.

project(feature) → projected[source]

This function will project the given feature. It must be overwritten by derived classes, as soon as performs_projection = True was set in the constructor. It is assured that the load_projector() was called once before the project function is executed.

Parameters:

feature : object

The feature to be projected.

Returns:

projected : object

The projected features. Must be writable with the write_feature() function and readable with the read_feature() function.

score(toscore) → score[source]

This function will compute the score for the given object toscore. It must be overwritten by derived classes.

Parameters:

toscore : object

The object to compute the score for. This will be the output of extractor if performs_projection is False, otherwise this will be the output of project method of the algorithm.

Returns:

score : float

A score value for the object toscore.

train_projector(training_features, projector_file)[source]

This function can be overwritten to train the feature projector. If you do this, please also register the function by calling this base class constructor and enabling the training by requires_projector_training = True.

Parameters:

training_features : [object] or [[object]]

A list of extracted features that can be used for training the projector. Features will be provided in a single list

projector_file : str

The file to write. This file should be readable with the load_projector() function.

class bob.pad.base.algorithm.GMM(number_of_gaussians, kmeans_training_iterations=25, gmm_training_iterations=10, training_threshold=0.0005, variance_threshold=0.0005, update_weights=True, update_means=True, update_variances=True, responsibility_threshold=0, INIT_SEED=5489, performs_projection=True, requires_projector_training=True, **kwargs)[source]

Bases: bob.pad.base.algorithm.Algorithm

Trains two GMMs for two classes of PAD and calculates log likelihood ratio during evaluation.

train_gmm(array)[source]
save_gmms(projector_file)[source]

Save projector to file

train_projector(training_features, projector_file)[source]

This function can be overwritten to train the feature projector. If you do this, please also register the function by calling this base class constructor and enabling the training by requires_projector_training = True.

Parameters:

training_features : [object] or [[object]]

A list of extracted features that can be used for training the projector. Features will be provided in a single list

projector_file : str

The file to write. This file should be readable with the load_projector() function.

load_projector(projector_file)[source]

Loads the parameters required for feature projection from file. This function usually is useful in combination with the train_projector() function. In this base class implementation, it does nothing.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from.

project(feature) → projected[source]

Projects the given feature into GMM space.

Parameters:

feature : 1D numpy.ndarray

The 1D feature to be projected.

Returns:

projected : 1D numpy.ndarray

The feature projected into GMM space.

score(toscore)[source]

Returns the difference between log likelihoods of being real or attack

score_for_multiple_projections(toscore)[source]

Returns the difference between log likelihoods of being real or attack

class bob.pad.base.algorithm.LogRegr(C=1, frame_level_scores_flag=False, subsample_train_data_flag=False, subsampling_step=10, subsample_videos_flag=False, video_subsampling_step=3)

Bases: bob.pad.base.algorithm.Algorithm

This class is designed to train Logistic Regression classifier given Frame Containers with features of real and attack classes. The procedure is the following:

  1. First, the input data is mean-std normalized using mean and std of the real class only.

  2. Second, the Logistic Regression classifier is trained on normalized input features.

  3. The input features are next classified using pre-trained LR machine.

Parameters:

C : float

Inverse of regularization strength in the LR classifier; must be positive. Like in support vector machines, smaller values specify stronger regularization. Default: 1.0 .

frame_level_scores_flag : bool

Return scores for each frame individually if True. Otherwise, return a single score per video. Default: False.

subsample_train_data_flag : bool

Uniformly subsample the training data if True. Default: False.

subsampling_step : int

Training data subsampling step, only valid if subsample_train_data_flag = True. Default: 10 .

subsample_videos_flag : bool

Uniformly subsample the training videos if True. Default: False.

video_subsampling_step : int

Training videos subsampling step, only valid if subsample_videos_flag = True. Default: 3 .
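A sketch of the three steps above with sklearn.linear_model (the module this class relies on, per the methods below); random arrays stand in for real features, and the label convention is an assumption of this sketch:

    import numpy
    from sklearn.linear_model import LogisticRegression

    real = numpy.random.rand(500, 10)
    attack = numpy.random.rand(500, 10)
    mean, std = real.mean(axis=0), real.std(axis=0)          # statistics of the real class only
    X = (numpy.vstack([real, attack]) - mean) / std          # 1. mean-std normalization
    y = numpy.hstack([numpy.ones(len(real)), numpy.zeros(len(attack))])  # real = 1, attack = 0
    lr = LogisticRegression(C=1.0).fit(X, y)                 # 2. train the LR classifier
    scores = lr.predict_proba((attack - mean) / std)[:, 1]   # 3. probability of being real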

load_lr_machine_and_mean_std(projector_file)[source]

Loads the machine, features mean and std from the hdf5 file. The absolute name of the file is specified in projector_file string.

Parameters:

projector_file : str

Absolute name of the file to load the trained projector from, as returned by bob.pad.base framework.

Returns:

machine : object

The loaded LR machine. As returned by sklearn.linear_model module.

features_mean : 1D numpy.ndarray

Mean of the features.

features_std : 1D numpy.ndarray

Standard deviation of the features.

load_projector(projector_file)[source]

Loads the machine, features mean and std from the hdf5 file. The absolute name of the file is specified in projector_file string.

This function sets the arguments self.lr_machine, self.features_mean and self.features_std of this class with loaded machines.

The function must be capable of reading the data saved with the train_projector() method of this class.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from, as returned by the bob.pad.base framework. In this class the names of the files to read the projectors from are modified, see load_machine and load_cascade_of_machines methods of this class for more details.

project(feature)[source]

This function computes a vector of scores for each sample in the input array of features. The following steps are applied:

  1. First, the input data is mean-std normalized using mean and std of the real class only.

  2. The input features are next classified using pre-trained LR machine.

Set performs_projection = True in the constructor to enable this function. It is assured that the load_projector() was called before the project function is executed.

Parameters:

feature : FrameContainer or 2D numpy.ndarray

Two types of inputs are accepted. A Frame Container containing the features of an individual, see bob.bio.video.utils.FrameContainer. Or a 2D feature array of the size (N_samples x N_features).

Returns:

scores : 1D numpy.ndarray

Vector of scores. Scores for the real class are expected to be higher, than the scores of the negative / attack class. In this case scores are probabilities.

save_lr_machine_and_mean_std(projector_file, machine, features_mean, features_std)[source]

Saves the LR machine, features mean and std to the hdf5 file. The absolute name of the file is specified in projector_file string.

Parameters:

projector_file : str

Absolute name of the file to save the data to, as returned by bob.pad.base framework.

machine : object

The LR machine to be saved. As returned by sklearn.linear_model module.

features_mean : 1D numpy.ndarray

Mean of the features.

features_std : 1D numpy.ndarray

Standard deviation of the features.

score(toscore)[source]

Returns a probability of a sample being a real class.

Parameters:

toscore : 1D numpy.ndarray

Vector with scores for each frame/sample defining the probability of the frame being a sample of the real class.

Returns:

score : [float]

If frame_level_scores_flag = False a single score is returned. One score per video. This score is placed into a list, because the score must be an iterable. Score is a probability of a sample being a real class. If frame_level_scores_flag = True a list of scores is returned. One score per frame/sample.

subsample_train_videos(training_features, step)[source]

Uniformly select a subset of frame containers from the input list.

Parameters:

training_features : [FrameContainer]

A list of FrameContainers

step : int

Data selection step.

Returns:

training_features_subset : [FrameContainer]

A list with selected FrameContainers

train_lr(real, attack, C)[source]

Train LR classifier given real and attack classes. Prior to training the data is mean-std normalized.

Parameters:

real : 2D numpy.ndarray

Training features for the real class.

attack : 2D numpy.ndarray

Training features for the attack class.

C : float

Inverse of regularization strength in the LR classifier; must be positive. Like in support vector machines, smaller values specify stronger regularization. Default: 1.0 .

Returns:

machine : object

A trained LR machine.

features_mean : 1D numpy.ndarray

Mean of the features.

features_std : 1D numpy.ndarray

Standard deviation of the features.

train_projector(training_features, projector_file)[source]

Train LR for feature projection and save them to files. The requires_projector_training = True flag must be set to True to enable this function.

Parameters:

training_features : [[FrameContainer], [FrameContainer]]

A list containing two elements: [0] - a list of Frame Containers with feature vectors for the real class; [1] - a list of Frame Containers with feature vectors for the attack class.

projector_file : str

The file to save the trained projector to, as returned by the bob.pad.base framework.

class bob.pad.base.algorithm.SVMCascadePCA(machine_type='C_SVC', kernel_type='RBF', svm_kwargs={'cost': 1, 'gamma': 0}, N=2, pos_scores_slope=0.01, frame_level_scores_flag=False)

Bases: bob.pad.base.algorithm.Algorithm

This class is designed to train a cascade of SVMs given Frame Containers with features of real and attack classes. The procedure is the following:

  1. First, the input data is mean-std normalized.

  2. Second, the PCA is trained on normalized input features. Only the features of the real class are used in PCA training, both for one-class and two-class SVMs.

  3. The features are next projected given trained PCA machine.

  4. Prior to SVM training the features are again mean-std normalized.

  5. Next, an SVM machine is trained for each N projected features. First, projected features corresponding to the highest eigenvalues are selected. N is usually small, N = (2, 3). So, if N = 2, the first SVM is trained for projected features 1 and 2, the second SVM is trained for projected features 3 and 4, and so on.

  6. These SVMs then form a cascade of classifiers. The input feature vector is then projected using PCA machine and passed through all classifiers in the cascade. The decision is then made by majority voting.

Both one-class SVM and two-class SVM cascades can be trained. In this implementation the grid search of SVM parameters is not supported.

Parameters:

machine_type : str

A type of the SVM machine. Please check bob.learn.libsvm for more details. Default: ‘C_SVC’.

kernel_type : str

A type of kernel for the SVM machine. Please check bob.learn.libsvm for more details. Default: ‘RBF’.

svm_kwargs : dict

Dictionary containing the hyper-parameters of the SVM. Default: {‘cost’: 1, ‘gamma’: 0}.

N : int

The number of features to be used for training a single SVM machine in the cascade. Default: 2.

pos_scores_slope : float

The positive scores returned by SVM cascade will be multiplied by this constant prior to majority voting. Default: 0.01 .

frame_level_scores_flag : bool

Return scores for each frame individually if True. Otherwise, return a single score per video. Default: False.

combine_scores_of_svm_cascade(scores_array, pos_scores_slope)[source]

First, multiply positive scores by the constant pos_scores_slope in the input 2D array. The constant is usually small, making the impact of negative scores more significant. Second, a single score per sample is obtained by averaging the modified scores of the cascade.

Parameters:

scores_array : 2D numpy.ndarray

2D score array of the size (N_samples x N_scores).

pos_scores_slope : float

The positive scores returned by SVM cascade will be multiplied by this constant prior to majority voting. Default: 0.01 .

Returns:

scores : 1D numpy.ndarray

Vector of scores. Scores for the real class are expected to be higher, than the scores of the negative / attack class.
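An illustrative re-implementation of this combination rule (not the method itself):

    import numpy

    def combine_scores_sketch(scores_array, pos_scores_slope=0.01):
        # damp the positive scores, then average across the cascade for each sample
        scores = numpy.copy(scores_array)
        scores[scores > 0] *= pos_scores_slope
        return scores.mean(axis=1)  # one combined score per sample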

comp_prediction_precision(machine, real, attack)[source]

This function computes the precision of the predictions as a ratio of correctly classified samples to the total number of samples.

Parameters:

machine : object

A pre-trained SVM machine.

real : 2D numpy.ndarray

Array of features representing the real class.

attack : 2D numpy.ndarray

Array of features representing the attack class.

Returns:

precision : float

The precision of the predictions.

get_cascade_file_names(projector_file, projector_file_name)[source]

Get the list of file-names storing the cascade of machines. The location of the files is specified in the path component of the projector_file argument.

Parameters:

projector_file : str

Absolute name of the file to load the trained projector from, as returned by bob.pad.base framework. In this function only the path component is used.

projector_file_name : str

The common string in the names of files storing the cascade of pretrained machines. Name without extension.

Returns:

cascade_file_names : [str]

A list of relative file-names storing the cascade of machines.

get_data_start_end_idx(data, N)[source]

Get indexes to select the subsets of data related to the cascades. The first (n_machines - 1) SVMs will be trained using N features each. The last SVM will be trained using the remaining features, whose number is less than or equal to N.

Parameters:

data : 2D numpy.ndarray

Data array containing the training features. The dimensionality is (N_samples x N_features).

N : int

Number of features per single SVM.

Returns:

idx_start : [int]

Starting indexes for data subsets.

idx_end : [int]

End indexes for data subsets.

n_machines : int

Number of SVMs to be trained.
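A sketch of the index computation described above; for simplicity it takes the number of features rather than the data array itself:

    def get_data_start_end_idx_sketch(n_features, N):
        # first (n_machines - 1) SVMs get N features each; the last one gets the remainder
        n_machines = (n_features + N - 1) // N
        idx_start = [i * N for i in range(n_machines)]
        idx_end = [min((i + 1) * N, n_features) for i in range(n_machines)]
        return idx_start, idx_end, n_machines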

load_cascade_of_machines(projector_file, projector_file_name)[source]

Loads a cascade of machines from the hdf5 files. The name of the file is specified in the projector_file_name string and will be augmented with a number of the machine. The location is specified in the path component of the projector_file string.

Parameters:

projector_file : str

Absolute name of the file to load the trained projector from, as returned by bob.pad.base framework. In this function only the path component is used.

projector_file_name : str

The relative name of the file to load the machine from. This name will be augmented with a number of the machine. Name without extension.

Returns:

machines : dict

A cascade of machines. The key in the dictionary is the number of the machine, value is the machine itself.

load_machine(projector_file, projector_file_name)[source]

Loads the machine from the hdf5 file. The name of the file is specified in projector_file_name string. The location is specified in the path component of the projector_file string.

Parameters:

projector_file : str

Absolute name of the file to load the trained projector from, as returned by bob.pad.base framework. In this function only the path component is used.

projector_file_name : str

The relative name of the file to load the machine from. Name without extension.

Returns:

machine : object

A machine loaded from file.

load_projector(projector_file)[source]

Load the pretrained PCA machine and a cascade of SVM classifiers from files to perform feature projection. This function sets the arguments self.pca_machine and self.svm_machines of this class with loaded machines.

The function must be capable of reading the data saved with the train_projector() method of this class.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from, as returned by the bob.pad.base framework. In this class the names of the files to read the projectors from are modified, see load_machine and load_cascade_of_machines methods of this class for more details.

project(feature)[source]

This function computes a vector of scores for each sample in the input array of features. The following steps are applied:

  1. Convert input array to numpy array if necessary.

  2. Project features using pretrained PCA machine.

  3. Apply the cascade of SVMs to the projected features.

  4. Compute a single score per sample by combining the scores produced by the cascade of SVMs. The combination is done using combine_scores_of_svm_cascade method of this class.

Set performs_projection = True in the constructor to enable this function. It is assured that the load_projector() was called before the project function is executed.

Parameters:

feature : FrameContainer or 2D numpy.ndarray

Two types of inputs are accepted. A Frame Container containing the features of an individual, see bob.bio.video.utils.FrameContainer. Or a 2D feature array of the size (N_samples x N_features).

Returns:

scores : 1D numpy.ndarray

Vector of scores. Scores for the real class are expected to be higher, than the scores of the negative / attack class.

save_cascade_of_machines(projector_file, projector_file_name, machines)[source]

Saves a cascade of machines to the hdf5 files. The name of the file is specified in the projector_file_name string and will be augmented with a number of the machine. The location is specified in the path component of the projector_file string.

Parameters:

projector_file : str

Absolute name of the file to save the trained projector to, as returned by bob.pad.base framework. In this function only the path component is used.

projector_file_name : str

The relative name of the file to save the machine to. This name will be augmented with a number of the machine. Name without extension.

machines : dict

A cascade of machines. The key in the dictionary is the number of the machine, value is the machine itself.

save_machine(projector_file, projector_file_name, machine)[source]

Saves the machine to the hdf5 file. The name of the file is specified in projector_file_name string. The location is specified in the path component of the projector_file string.

Parameters:

projector_file : str

Absolute name of the file to save the trained projector to, as returned by bob.pad.base framework. In this function only the path component is used.

projector_file_name : str

The relative name of the file to save the machine to. Name without extension.

machineobject

The machine to be saved.

score(toscore)[source]

Returns the probability of a sample belonging to the real class.

Parameters:

toscore : 1D or 2D numpy.ndarray

2D in the case of a two-class SVM: an array containing class probabilities for each frame, where the first column contains the probability of each frame belonging to the real class and the second column the probability of each frame belonging to the attack class. 1D in the case of a one-class SVM: a vector of per-frame scores indicating membership in the real class.

Returns:

score : [float]

If frame_level_scores_flag = False, a single score per video is returned. This score is placed into a list, because the score must be an iterable; it is the probability of the sample belonging to the real class. If frame_level_scores_flag = True, a list of scores is returned, one score per frame/sample.
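
As a rough illustration of the two-class case, the reduction from per-frame probabilities to the returned score(s) could look like the sketch below; the mean over frames is an assumption of this sketch, not a statement about the exact implementation.

    import numpy as np

    def score_sketch(toscore, frame_level_scores_flag=False):
        """toscore: (N_frames, 2) array; column 0 = P(real), column 1 = P(attack)."""
        real_probs = np.asarray(toscore)[:, 0]
        if frame_level_scores_flag:
            return list(real_probs)            # one score per frame
        return [float(np.mean(real_probs))]    # single video-level score, kept iterable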

train_pca(data)[source]

Train PCA given input array of feature vectors. The data is mean-std normalized prior to PCA training.

Parameters:

data : 2D numpy.ndarray

Array of feature vectors of size (N_samples x N_features). The features must already be mean-std normalized.

Returns:

machine : bob.learn.linear.Machine

The PCA machine that has been trained. The mean-std normalizers are also set in the machine.

eig_vals : 1D numpy.ndarray

The eigenvalues of the PCA projection.
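
The combination of mean-std normalization and PCA can be sketched with numpy as follows. This is a conceptual stand-in for the bob.learn.linear machinery, not the actual implementation; the function name is hypothetical.

    import numpy as np

    def train_pca_sketch(data):
        """Mean-std normalization followed by PCA via SVD (conceptual sketch)."""
        data = np.asarray(data, dtype=float)
        mean, std = data.mean(axis=0), data.std(axis=0) + 1e-10
        normalized = (data - mean) / std                 # mean-std normalization
        # PCA: SVD of the (already zero-mean) normalized data
        _, s, vt = np.linalg.svd(normalized, full_matrices=False)
        eig_vals = (s ** 2) / (len(normalized) - 1)      # eigenvalues of the covariance
        projection_matrix = vt.T                         # columns are principal axes
        return (mean, std, projection_matrix), eig_vals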

train_pca_svm_cascade(real, attack, machine_type, kernel_type, svm_kwargs, N)[source]

This function is designed to train the cascade of SVMs given features of the real and attack classes. The procedure is the following:

  1. First, the PCA machine is trained, incorporating mean-std feature normalization. Only the features of the real class are used in PCA training, both for one-class and two-class SVMs.

  2. The features are then projected using the trained PCA machine.

  3. Next, an SVM machine is trained for each group of N projected features. Prior to SVM training the features are again mean-std normalized. The projected features corresponding to the highest eigenvalues are selected first. N is usually small (N = 2 or 3); so, if N = 2, the first SVM is trained on projected features 1 and 2, the second SVM on projected features 3 and 4, and so on (see the sketch after this description).

Both one-class and two-class SVM cascades can be trained. Grid search over SVM parameters is not supported in this implementation.
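
The grouping of projected features into per-SVM chunks (step 3) can be illustrated with a small numpy helper. This is a sketch of the splitting logic only, with a hypothetical function name.

    import numpy as np

    def split_features_for_cascade(projected, N=2):
        """Split projected features into consecutive chunks of N columns,
        one chunk per SVM in the cascade (features 1-2, 3-4, ... for N = 2)."""
        projected = np.asarray(projected)
        n_machines = projected.shape[1] // N
        return {i: projected[:, i * N:(i + 1) * N] for i in range(n_machines)}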

Parameters:

real : 2D numpy.ndarray

Training features for the real class.

attack : 2D numpy.ndarray

Training features for the attack class. If machine_type == 'ONE_CLASS' this argument can be anything; it will be ignored.

machine_type : str

The type of the SVM machine. See bob.learn.libsvm for more details.

kernel_type : str

The type of kernel for the SVM machine. See bob.learn.libsvm for more details.

svm_kwargs : dict

Dictionary containing the hyper-parameters of the SVM.

N : int

The number of features to be used for training a single SVM machine in the cascade.

Returns:

pca_machine : object

A trained PCA machine.

svm_machines : dict

A cascade of SVM machines.

train_projector(training_features, projector_file)[source]

Train PCA and a cascade of SVMs for feature projection and save them to files. The requires_projector_training flag must be set to True to enable this function.

Parameters:

training_features : [[FrameContainer], [FrameContainer]]

A list containing two elements: [0] - a list of Frame Containers with feature vectors for the real class; [1] - a list of Frame Containers with feature vectors for the attack class.

projector_file : str

The file to save the trained projector to, as returned by the bob.pad.base framework. In this class the names of the files to save the projectors to are modified; see the save_machine and save_cascade_of_machines methods of this class for more details.

train_svm(real, attack, machine_type, kernel_type, svm_kwargs)[source]

A one-class or two-class SVM is trained in this method given the input features. The value of the attack argument is not important in the case of a one-class SVM. Prior to training, the data is mean-std normalized.

Parameters:

real : 2D numpy.ndarray

Training features for the real class.

attack : 2D numpy.ndarray

Training features for the attack class. If machine_type == 'ONE_CLASS' this argument can be anything; it will be ignored.

machine_type : str

The type of the SVM machine. See bob.learn.libsvm for more details.

kernel_type : str

The type of kernel for the SVM machine. See bob.learn.libsvm for more details.

svm_kwargs : dict

Dictionary containing the hyper-parameters of the SVM.

Returns:

machine : object

A trained SVM machine. The mean-std normalizers are also set in the machine.
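
For illustration only, the normalize-then-train step can be approximated with scikit-learn classifiers as stand-ins for bob.learn.libsvm; the sklearn classes below are not the library's API, and the hyper-parameters are examples.

    import numpy as np
    from sklearn.svm import SVC, OneClassSVM

    def train_svm_sketch(real, attack=None, one_class=False, **svm_kwargs):
        """Mean-std normalization followed by one-class or two-class SVM training."""
        real = np.asarray(real, dtype=float)
        train = real if one_class else np.vstack([real, np.asarray(attack, dtype=float)])
        mean, std = train.mean(axis=0), train.std(axis=0) + 1e-10
        train = (train - mean) / std                    # mean-std normalization
        if one_class:
            machine = OneClassSVM(**svm_kwargs).fit(train)
        else:
            labels = np.r_[np.ones(len(real)), np.zeros(len(train) - len(real))]
            machine = SVC(probability=True, **svm_kwargs).fit(train, labels)
        return machine, (mean, std)                     # keep the normalizers with the machine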

train_svm_cascade(real, attack, machine_type, kernel_type, svm_kwargs, N)[source]

Train a cascade of SVMs, one SVM machine per N features. N is usually small (N = 2 or 3); so, if N = 2, the first SVM is trained on features 1 and 2, the second SVM on features 3 and 4, and so on.

Both one-class and two-class SVM cascades can be trained. The value of the attack argument is not important in the case of a one-class SVM.

The data is mean-std normalized prior to SVM cascade training.

Parameters:

real : 2D numpy.ndarray

Training features for the real class.

attack : 2D numpy.ndarray

Training features for the attack class. If machine_type == 'ONE_CLASS' this argument can be anything; it will be ignored.

machine_type : str

The type of the SVM machine. See bob.learn.libsvm for more details.

kernel_type : str

The type of kernel for the SVM machine. See bob.learn.libsvm for more details.

svm_kwargs : dict

Dictionary containing the hyper-parameters of the SVM.

N : int

The number of features to be used for training a single SVM machine in the cascade.

Returns:

machines : dict

A dictionary containing a cascade of trained SVM machines.

class bob.pad.base.algorithm.Predictions(**kwargs)

Bases: bob.pad.base.algorithm.Algorithm

An algorithm that takes the precomputed predictions and uses them for scoring.

score(toscore) → score[source]

This function computes the score for the given object toscore. It must be overridden by derived classes.

Parameters:

toscore : object

The object to compute the score for. This will be the output of the extractor if performs_projection is False; otherwise, it will be the output of the project method of the algorithm.

Returns:

score : float

A score value for the object toscore.

class bob.pad.base.algorithm.VideoPredictions(axis=1, frame_level_scoring=False, **kwargs)

Bases: bob.pad.base.algorithm.Algorithm

An algorithm that takes the precomputed predictions and uses them for scoring.

score(toscore) → score[source]

This function computes the score for the given object toscore. It must be overridden by derived classes.

Parameters:

toscore : object

The object to compute the score for. This will be the output of the extractor if performs_projection is False; otherwise, it will be the output of the project method of the algorithm.

Returns:

score : float

A score value for the object toscore.

class bob.pad.base.algorithm.MLP(hidden_units=(10, 10), max_iter=1000, precision=0.001, **kwargs)

Bases: bob.pad.base.algorithm.Algorithm

Interfaces an MLP classifier used for PAD

hidden_units

The number of hidden units in each hidden layer

Type

tuple of int

max_iter

The maximum number of training iterations

Type

int

precision

Criterion to stop the training: if the difference between the current and the last loss is smaller than this number, training stops.

Type

float

project(feature)[source]

Project the given feature

Parameters

feature (numpy.ndarray) – The feature to classify

Returns

The values of the two units in the last layer of the MLP.

Return type

numpy.ndarray

score(toscore)[source]

Returns the probability of the real class.

Parameters

toscore (numpy.ndarray) –

Returns

The probability of the authentication attempt being real.

Return type

float

train_projector(training_features, projector_file)[source]

Trains the MLP

Parameters
  • training_features (list of numpy.ndarray) – Data used to train the MLP. The real attempts are in training_features[0] and the attacks are in training_features[1]

  • projector_file (str) – Filename where to save the trained model.
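
A minimal usage sketch, outside of the bob.pad.base pipeline that normally drives these calls; the random arrays and the file name are placeholders, and the explicit load_projector call mirrors what the framework would do after training.

    import numpy as np
    from bob.pad.base.algorithm import MLP

    # Toy data: 100 bona fide and 100 attack feature vectors of dimension 64.
    real = np.random.rand(100, 64)
    attacks = np.random.rand(100, 64)

    algorithm = MLP(hidden_units=(10, 10), max_iter=1000, precision=0.001)
    algorithm.train_projector([real, attacks], "mlp_projector.hdf5")
    algorithm.load_projector("mlp_projector.hdf5")

    # Project a single feature vector and turn the MLP output into a score.
    output = algorithm.project(real[0])
    print(algorithm.score(output))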

class bob.pad.base.algorithm.PadLDA(lda_subspace_dimension=None, pca_subspace_dimension=None, use_pinv=False, **kwargs)

Bases: bob.bio.base.algorithm.LDA

Wrapper for bob.bio.base.algorithm.LDA.

Here, LDA is used in a PAD context. This means that the feature will be projected onto a one-dimensional subspace, which acts as a score.

For more details, you may want to have a look at the bob.learn.linear documentation.

lda_subspace_dimension

The dimension of the LDA subspace. In the PAD case, the default value is always used, and corresponds to the number of classes in the training set (i.e. 2).

Type

int

pca_subspace_dimension

The dimension of the PCA subspace to be applied to the data before applying LDA.

Type

int

use_pinv

Use the pseudo-inverse in LDA computation.

Type

bool
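
A typical configuration-style instantiation might look as follows; the PCA dimension chosen here is only an example.

    from bob.pad.base.algorithm import PadLDA

    # Reduce dimensionality with PCA first, then use the one-dimensional
    # LDA projection as the PAD score.
    algorithm = PadLDA(pca_subspace_dimension=50, use_pinv=False)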

score(model, probe) → float[source]

Computes the distance of the model to the probe using the distance function specified in the constructor.

Parameters:

model : 2D numpy.ndarray

The model storing all enrollment features.

probe : 1D numpy.ndarray

The probe feature vector in Fisher space.

Returns:

score : float

A similarity value between model and probe.

Utilities

bob.pad.face.utils.bbx_cropper(frame, …)

bob.pad.face.utils.blocks(data, block_size)

Extracts patches of an image

bob.pad.face.utils.blocks_generator(data, …)

Yields patches of an image

bob.pad.face.utils.color_augmentation(image)

Converts an RGB image to different color channels.

bob.pad.face.utils.frames(path)

Yields the frames of a video file.

bob.pad.face.utils.min_face_size_normalizer(…)

bob.pad.face.utils.number_of_frames(path)

Returns the number of frames of a video file.

bob.pad.face.utils.scale_face(face, face_height)

Scales a face image to the given size.

bob.pad.face.utils.the_giant_video_loader(…)

Loads a video pad file frame by frame and optionally applies transformations.

bob.pad.face.utils.yield_faces(padfile, cropper)

Yields face images of a padfile.

bob.pad.face.utils.frames(path)[source]

Yields the frames of a video file.

Parameters

path (str) – Path to the video file.

Yields

numpy.array – A frame of the video. The size is (3, 240, 320).

bob.pad.face.utils.number_of_frames(path)[source]

Returns the number of frames of a video file.

Parameters

path (str) – Path to the video file.

Returns

The number of frames in the video file.

Return type

int
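
A small usage sketch of the two helpers above; the video path is a placeholder.

    from bob.pad.face.utils import frames, number_of_frames

    path = "/path/to/video.mov"          # placeholder path
    print(number_of_frames(path))        # total number of frames
    for frame in frames(path):
        print(frame.shape)               # a frame in bob format, e.g. (3, 240, 320)
        break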

bob.pad.face.utils.yield_faces(padfile, cropper, normalizer=None)[source]

Yields face images of a padfile. It uses the annotations from the database. The annotations are further normalized.

Parameters
  • padfile (bob.pad.base.database.PadFile) – The padfile to return the faces.

  • cropper (callable) – A face image cropper that works with database’s annotations.

  • normalizer (callable) – If not None, it should be a function that takes all the annotations of the whole video and yields normalized annotations frame by frame. It should yield the same structure as annotations.items().

Yields

numpy.array – Face images

Raises

ValueError – If the database returns None for annotations.

bob.pad.face.utils.scale_face(face, face_height, face_width=None)[source]

Scales a face image to the given size.

Parameters
  • face (numpy.array) – The face image. It can be 2D or 3D in bob image format.

  • face_height (int) – The height of the scaled face.

  • face_width (None, optional) – The width of the scaled face. If None, face_height is used.

Returns

The scaled face.

Return type

numpy.array

bob.pad.face.utils.blocks(data, block_size, block_overlap=(0, 0))[source]

Extracts patches of an image

Parameters
  • data (numpy.array) – The image in gray-scale, color, or color video format.

  • block_size ((int, int)) – The size of patches

  • block_overlap ((int, int), optional) – The size of overlap of patches

Returns

The patches.

Return type

numpy.array

Raises

ValueError – If data dimension is not between 2 and 4 (inclusive).
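
For example, splitting a toy gray-scale image into non-overlapping patches (the array content is random and only for illustration):

    import numpy as np
    from bob.pad.face.utils import blocks

    image = np.random.rand(128, 128)                       # toy gray-scale image
    patches = blocks(image, block_size=(64, 64), block_overlap=(0, 0))
    print(patches.shape)                                   # all 64x64 patches of the image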

bob.pad.face.utils.bbx_cropper(frame, annotations)[source]
bob.pad.face.utils.min_face_size_normalizer(annotations, max_age=15, **kwargs)[source]
bob.pad.face.utils.color_augmentation(image, channels=('rgb', ))[source]

Converts an RGB image to different color channels.

Parameters
  • image (numpy.array) – The image in RGB Bob format.

  • channels (tuple, optional) – List of channels to convert the image to. It can be any of rgb, yuv, hsv.

Returns

The image that contains several channels: (3*len(channels), height, width).

Return type

numpy.array

bob.pad.face.utils.blocks_generator(data, block_size, block_overlap=(0, 0))[source]

Yields patches of an image

Parameters
  • data (numpy.array) – The image in gray-scale, color, or color video format.

  • block_size ((int, int)) – The size of patches

  • block_overlap ((int, int), optional) – The size of overlap of patches

Yields

numpy.array – The patches.

Raises

ValueError – If data dimension is not between 2 and 4 (inclusive).

bob.pad.face.utils.the_giant_video_loader(paddb, padfile, region='whole', scaling_factor=None, cropper=None, normalizer=None, patches=False, block_size=(96, 96), block_overlap=(0, 0), random_patches_per_frame=None, augment=None, multiple_bonafide_patches=1, keep_pa_samples=None, keep_bf_samples=None)[source]

Loads a video pad file frame by frame and optionally applies transformations.

Parameters
  • paddb – Ignored.

  • padfile – The pad file

  • region (str) – Either whole or crop. If whole, it will return the whole frame. Otherwise, you need to provide a cropper and a normalizer.

  • scaling_factor (float) – If given, images will be scaled by this factor.

  • cropper – The cropper to use

  • normalizer – The normalizer to use

  • patches (bool) – If true, will extract patches from images.

  • block_size (tuple) – Size of the patches

  • block_overlap (tuple) – Size of overlap of the patches

  • random_patches_per_frame (int) – If not None, only this many patches per frame will be taken

  • augment – If given, frames will be transformed using this function.

  • multiple_bonafide_patches (int) – Multiplier for the number of random patches used for bonafide samples

  • keep_pa_samples (float) – If given, will drop some PA samples.

  • keep_bf_samples (float) – If given, will drop some BF samples.

Returns

A generator that yields the samples.

Return type

object

Raises

ValueError – If region is not whole or crop.
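
A sketch of how this loader might be called to obtain face patches; padfile would come from a database interface such as ReplayPadDatabase, and all parameter values below are examples only.

    from bob.pad.face.utils import (
        the_giant_video_loader, bbx_cropper, min_face_size_normalizer)

    def iterate_patches(padfile):
        """Yield 96x96 face patches from a pad file (sketch)."""
        generator = the_giant_video_loader(
            None,                          # paddb is documented as ignored
            padfile,
            region='crop',                 # crop faces instead of whole frames
            cropper=bbx_cropper,
            normalizer=min_face_size_normalizer,
            patches=True,
            block_size=(96, 96),
            random_patches_per_frame=4,
        )
        for sample in generator:
            yield sample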

bob.pad.face.utils.random_sample(A, size)[source]

Randomly selects size samples from the array A

bob.pad.face.utils.random_patches(image, block_size, n_random_patches=1)[source]

Extracts N random patches of block_size from an image

bob.pad.face.utils.extract_patches(image, block_size, block_overlap=(0, 0), n_random_patches=None)[source]

Yields either all patches from an image or N random patches.