Python API

This section lists all the functionality available in this library for running face PAD (presentation attack detection) experiments.

Database Interfaces

REPLAY-ATTACK Database

bob.pad.face.database.replay_attack.ReplayAttackPadDatabase(protocol='grandtest', selection_style=None, max_number_of_frames=None, step_size=None, annotation_directory=None, annotation_type=None, fixed_positions=None, **kwargs)

REPLAY-MOBILE Database

bob.pad.face.database.replay_mobile.ReplayMobilePadDatabase(protocol='grandtest', selection_style=None, max_number_of_frames=None, step_size=None, annotation_directory=None, annotation_type=None, fixed_positions=None, **kwargs)

Transformers

Pre-processors

class bob.pad.face.preprocessor.ImagePatches(block_size, block_overlap=(0, 0), n_random_patches=None, **kwargs)

Bases: TransformerMixin, BaseEstimator

Extracts patches of images and returns them in a VideoLikeContainer. You need to wrap the subsequent blocks (extractor and algorithm) in bob.bio.video wrappers.

set_transform_request(*, images: bool | None | str = '$UNCHANGED$') → ImagePatches

Request metadata passed to the transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

images (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for images parameter in transform.

Returns:

self – The updated object.

Return type:

object

transform(images)[source]
transform_one_image(image)[source]
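
The patch extraction performed by ImagePatches can be sketched in plain NumPy. The function below is an illustrative reimplementation under assumed semantics (stride = block_size − block_overlap, optional random subset of patches), not the library code; image_patches_sketch and its rng argument are hypothetical names:

```python
import numpy as np

def image_patches_sketch(image, block_size, block_overlap=(0, 0), n_random_patches=None, rng=None):
    """Illustrative sketch of grid patch extraction from a 2D gray-scale image."""
    bh, bw = block_size
    sh, sw = bh - block_overlap[0], bw - block_overlap[1]  # stride between patch origins
    h, w = image.shape
    patches = [
        image[y : y + bh, x : x + bw]
        for y in range(0, h - bh + 1, sh)
        for x in range(0, w - bw + 1, sw)
    ]
    if n_random_patches is not None:
        # keep only a random subset of the extracted patches
        rng = np.random.default_rng(rng)
        idx = rng.choice(len(patches), size=n_random_patches, replace=False)
        patches = [patches[i] for i in idx]
    return np.stack(patches)

image = np.arange(64, dtype=float).reshape(8, 8)
all_patches = image_patches_sketch(image, block_size=(4, 4))
print(all_patches.shape)  # (4, 4, 4): a 2x2 grid of 4x4 patches
```

With a (2, 2) overlap the stride drops to 2, so the same 8x8 image yields a 3x3 grid of nine patches.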
class bob.pad.face.preprocessor.VideoPatches(face_cropper, block_size, block_overlap=(0, 0), n_random_patches=None, normalizer=None, **kwargs)

Bases: TransformerMixin, BaseEstimator

Extracts patches of images from video containers and returns them in a VideoLikeContainer.

set_transform_request(*, annotations: bool | None | str = '$UNCHANGED$', videos: bool | None | str = '$UNCHANGED$') → VideoPatches

Request metadata passed to the transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • annotations (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for annotations parameter in transform.

  • videos (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for videos parameter in transform.

Returns:

self – The updated object.

Return type:

object

transform(videos, annotations=None)[source]
transform_one_video(frames, annotations=None)[source]

Feature Extractors

Utilities

bob.pad.face.utils.bbx_cropper(frame, ...)

bob.pad.face.utils.blocks(data, block_size)

Extracts patches of an image.

bob.pad.face.utils.blocks_generator(data, ...)

Yields patches of an image.

bob.pad.face.utils.color_augmentation(image)

Converts an RGB image to different color channels.

bob.pad.face.utils.frames(path)

Yields the frames of a video file.

bob.pad.face.utils.min_face_size_normalizer(...)

bob.pad.face.utils.number_of_frames(path)

Returns the number of frames of a video file.

bob.pad.face.utils.scale_face(face, face_height)

Scales a face image to the given size.

bob.pad.face.utils.the_giant_video_loader(...)

Loads a video pad file frame by frame and optionally applies transformations.

bob.pad.face.utils.yield_faces(pad_sample, ...)

Yields face images of a padfile.

bob.pad.face.utils.bbx_cropper(frame, annotations)[source]
bob.pad.face.utils.blocks(data, block_size, block_overlap=(0, 0))[source]

Extracts patches of an image.

Parameters:
  • data (numpy.ndarray) – The image in gray-scale, color, or color video format.

  • block_size ((int, int)) – The size of the patches.

  • block_overlap ((int, int), optional) – The overlap between adjacent patches.

Returns:

The patches.

Return type:

numpy.ndarray

Raises:

ValueError – If data dimension is not between 2 and 4 (inclusive).

bob.pad.face.utils.blocks_generator(data, block_size, block_overlap=(0, 0))[source]

Yields patches of an image.

Parameters:
  • data (numpy.ndarray) – The image in gray-scale, color, or color video format.

  • block_size ((int, int)) – The size of the patches.

  • block_overlap ((int, int), optional) – The overlap between adjacent patches.

Yields:

numpy.ndarray – The patches.

Raises:

ValueError – If data dimension is not between 2 and 4 (inclusive).
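To illustrate the documented contract (2D to 4D input over the last two spatial dimensions, ValueError otherwise, patches yielded lazily rather than stacked), here is a minimal NumPy sketch. It is not the library implementation; blocks_generator_sketch is a hypothetical name, and treating the last two axes as spatial is an assumption:

```python
import numpy as np

def blocks_generator_sketch(data, block_size, block_overlap=(0, 0)):
    """Lazily yield patches over the last two (spatial) dimensions of a
    2D gray-scale, 3D color, or 4D color-video array."""
    if not 2 <= data.ndim <= 4:
        raise ValueError("data must have between 2 and 4 dimensions")
    bh, bw = block_size
    sh, sw = bh - block_overlap[0], bw - block_overlap[1]
    h, w = data.shape[-2:]
    for y in range(0, h - bh + 1, sh):
        for x in range(0, w - bw + 1, sw):
            # leading axes (color channels, frames) are kept intact
            yield data[..., y : y + bh, x : x + bw]

color_image = np.zeros((3, 8, 8))  # Bob color format: (channels, height, width)
patches = list(blocks_generator_sketch(color_image, (4, 4)))
print(len(patches), patches[0].shape)  # 4 (3, 4, 4)
```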

bob.pad.face.utils.color_augmentation(image, channels=('rgb',))[source]

Converts an RGB image to different color channels.

Parameters:
  • image (numpy.ndarray) – The image in RGB Bob format.

  • channels (tuple, optional) – List of channels to convert the image to. It can be any of rgb, yuv, hsv.

Returns:

An image containing the stacked channels, with shape (3*len(channels), height, width).

Return type:

numpy.ndarray
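
The output contract (requested color spaces stacked along the channel axis) can be sketched as follows. This is an illustrative reimplementation, not the library code: color_augmentation_sketch is a hypothetical name, and the BT.601 RGB→YUV matrix is an assumption about which conversion is used:

```python
import numpy as np

# ITU-R BT.601 RGB -> YUV conversion matrix (an assumption; the library
# may use a different color conversion)
_RGB_TO_YUV = np.array([
    [0.299, 0.587, 0.114],
    [-0.14713, -0.28886, 0.436],
    [0.615, -0.51499, -0.10001],
])

def color_augmentation_sketch(image, channels=("rgb",)):
    """Stack the requested color-space versions of an RGB image in
    Bob format (3, height, width) along the channel axis."""
    out = []
    for name in channels:
        if name == "rgb":
            out.append(image)
        elif name == "yuv":
            # apply the 3x3 matrix to every pixel: (3,3) x (3,h,w) -> (3,h,w)
            out.append(np.einsum("ij,jhw->ihw", _RGB_TO_YUV, image))
        else:
            raise ValueError(f"unsupported channel set: {name}")
    return np.concatenate(out, axis=0)

img = np.random.rand(3, 4, 5)
stacked = color_augmentation_sketch(img, channels=("rgb", "yuv"))
print(stacked.shape)  # (6, 4, 5) == (3 * len(channels), height, width)
```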

bob.pad.face.utils.extract_patches(image, block_size, block_overlap=(0, 0), n_random_patches=None)[source]

Yields either all patches from an image or N random patches.

bob.pad.face.utils.frames(path)[source]

Yields the frames of a video file.

Parameters:

path (str) – Path to the video file.

Yields:

numpy.ndarray – A frame of the video. The size is (3, 240, 320).

bob.pad.face.utils.min_face_size_normalizer(annotations, max_age=15, **kwargs)[source]
bob.pad.face.utils.number_of_frames(path)[source]

Returns the number of frames of a video file.

Parameters:

path (str) – Path to the video file.

Returns:

The number of frames.

Return type:

int

bob.pad.face.utils.random_patches(image, block_size, n_random_patches=1)[source]

Extracts N random patches of size block_size from an image.

bob.pad.face.utils.random_sample(A, size)[source]

Randomly selects size samples from the array A.
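
Sampling without replacement along the first axis can be sketched with NumPy. This is an illustrative sketch of the documented behavior, not the library code; random_sample_sketch and its rng seed argument are hypothetical:

```python
import numpy as np

def random_sample_sketch(A, size, rng=None):
    """Pick `size` entries of A uniformly at random without replacement."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(A), size=size, replace=False)
    return np.asarray(A)[idx]

A = np.arange(10)
sample = random_sample_sketch(A, 3, rng=0)
print(sample.shape)  # (3,)
```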

bob.pad.face.utils.scale_face(face, face_height, face_width=None)[source]

Scales a face image to the given size.

Parameters:
  • face (numpy.ndarray) – The face image. It can be 2D or 3D in bob image format.

  • face_height (int) – The height of the scaled face.

  • face_width (None, optional) – The width of the scaled face. If None, face_height is used.

Returns:

The scaled face.

Return type:

numpy.ndarray
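
The documented behavior (2D or 3D Bob images, face_width defaulting to face_height) can be sketched with a nearest-neighbour resize. The library likely uses a higher-quality interpolation; scale_face_sketch is a hypothetical name:

```python
import numpy as np

def scale_face_sketch(face, face_height, face_width=None):
    """Nearest-neighbour resize of a 2D (H, W) or 3D (C, H, W) face image."""
    if face_width is None:
        face_width = face_height
    h, w = face.shape[-2], face.shape[-1]
    # map each output pixel back to its nearest source pixel
    rows = np.arange(face_height) * h // face_height
    cols = np.arange(face_width) * w // face_width
    return face[..., rows[:, None], cols[None, :]]

face = np.random.rand(3, 10, 8)  # Bob color format: (channels, height, width)
scaled = scale_face_sketch(face, 4)
print(scaled.shape)  # (3, 4, 4)
```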

bob.pad.face.utils.the_giant_video_loader(pad_sample, region='whole', scaling_factor=None, cropper=None, normalizer=None, patches=False, block_size=(96, 96), block_overlap=(0, 0), random_patches_per_frame=None, augment=None, multiple_bonafide_patches=1, keep_pa_samples=None, keep_bf_samples=None)[source]

Loads a video pad file frame by frame and optionally applies transformations.

Parameters:
  • pad_sample – The pad sample

  • region (str) – Either whole or crop. If whole, it will return the whole frame. Otherwise, you need to provide a cropper and a normalizer.

  • scaling_factor (float) – If given, will scale images to this factor.

  • cropper – The cropper to use

  • normalizer – The normalizer to use

  • patches (bool) – If true, will extract patches from images.

  • block_size (tuple) – Size of the patches

  • block_overlap (tuple) – Size of overlap of the patches

  • random_patches_per_frame (int) – If not None, will only take this many patches per frame.

  • augment – If given, frames will be transformed using this function.

  • multiple_bonafide_patches (int) – Will extract this many times more random patches for bonafide samples.

  • keep_pa_samples (float) – If given, only this fraction of PA (presentation attack) samples is kept.

  • keep_bf_samples (float) – If given, only this fraction of bonafide samples is kept.

Returns:

A generator that yields the samples.

Return type:

object

Raises:

ValueError – If region is not whole or crop.

bob.pad.face.utils.yield_faces(pad_sample, cropper, normalizer=None)[source]

Yields face images of a padfile. It uses the annotations from the database. The annotations are further normalized.

Parameters:
  • pad_sample – The pad sample to return the faces.

  • cropper (collections.abc.Callable) – A face image cropper that works with database’s annotations.

  • normalizer (collections.abc.Callable) – If not None, it should be a function that takes all the annotations of the whole video and yields normalized annotations frame by frame. It should yield the same structure as annotations.items().

Yields:

numpy.ndarray – Face images

Raises:

ValueError – If the database returns None for annotations.
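
The cropping loop described above (per-frame annotations driving a cropper callable, with a ValueError when the database returns no annotations) can be sketched as follows. This is an illustrative sketch, not the library code: yield_faces_sketch and bbx_cropper_sketch are hypothetical names, and the "topleft"/"bottomright" annotation keys are an assumption about the annotation format:

```python
import numpy as np

def yield_faces_sketch(frames, annotations, cropper, normalizer=None):
    """Yield one cropped face per frame, using per-frame annotations."""
    if annotations is None:
        raise ValueError("the database returned no annotations")
    # the normalizer, if given, rewrites annotations frame by frame
    items = annotations.items() if normalizer is None else normalizer(annotations)
    for (_, annot), frame in zip(items, frames):
        yield cropper(frame, annot)

def bbx_cropper_sketch(frame, annot):
    # cut the annotated bounding box out of the frame
    (ty, lx), (by, rx) = annot["topleft"], annot["bottomright"]
    return frame[ty:by, lx:rx]

frames = [np.zeros((240, 320)) for _ in range(2)]
annotations = {
    "0": {"topleft": (10, 20), "bottomright": (110, 100)},
    "1": {"topleft": (12, 22), "bottomright": (112, 102)},
}
faces = list(yield_faces_sketch(frames, annotations, bbx_cropper_sketch))
print(faces[0].shape)  # (100, 80)
```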