Tools implemented in bob.bio.video

Summary

bob.bio.video.select_frames(count[, ...])

Returns indices of the frames to be selected given the parameters.

bob.bio.video.VideoAsArray(path[, ...])

A memory-efficient class that loads only selected video frames.

bob.bio.video.VideoLikeContainer(data, ...)

bob.bio.video.transformer.VideoWrapper(...)

Wrapper class to run image preprocessing algorithms on video data.

bob.bio.video.annotator.Base()

The base class for video annotators.

bob.bio.video.annotator.Wrapper(annotator[, ...])

Annotates video files using the provided image annotator.

bob.bio.video.annotator.FailSafeVideo(annotators)

A fail-safe video annotator.

bob.bio.video.video_wrap_skpipeline(sk_pipeline)

This function takes a sklearn.pipeline.Pipeline and wraps each of its estimators with bob.bio.video.transformer.VideoWrapper.

Databases

bob.bio.video.database.YoutubeDatabase(protocol)

This package contains the access API and descriptions for the YouTube Faces database.

Details

class bob.bio.video.VideoAsArray(path, selection_style=None, max_number_of_frames=None, step_size=None, transform=None, **kwargs)

Bases: object

A memory-efficient class that loads only selected video frames. It also supports efficient conversion to dask arrays.
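
For illustration, a minimal construction sketch; the video path is a placeholder, the keyword arguments follow the signature above, and the array-style frame access shown is an assumption about the interface:

>>> from bob.bio.video import VideoAsArray
>>> # Load at most 20 frames, spread evenly over the video (placeholder path).
>>> video = VideoAsArray(
...     "/path/to/video.avi",
...     selection_style="spread",
...     max_number_of_frames=20,
... )
>>> # The object is expected to behave like an array of frames, reading only
>>> # the selected frames from disk when they are accessed.
>>> first_frame = video[0]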

class bob.bio.video.VideoLikeContainer(data, indices, **kwargs)

Bases: object

property dtype
classmethod load(file)[source]
property ndim
save(file)[source]
static save_function(other, file)[source]
property shape
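
For illustration, a small sketch of the container; the frame data is synthetic, and the assumption that save and load accept an HDF5 file path is the editor's, not a documented guarantee:

>>> import numpy as np
>>> from bob.bio.video import VideoLikeContainer
>>> # Five synthetic RGB frames of size 64x64, with their original frame indices.
>>> frames = np.zeros((5, 3, 64, 64), dtype="uint8")
>>> container = VideoLikeContainer(data=frames, indices=[0, 10, 20, 30, 40])
>>> print(container.shape, container.ndim, container.dtype)
>>> # Persist and restore (the HDF5 path target is an assumption).
>>> container.save("frames.h5")
>>> restored = VideoLikeContainer.load("frames.h5")
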
bob.bio.video.get_config()[source]

Returns a string containing the configuration information.

bob.bio.video.select_frames(count, max_number_of_frames=None, selection_style=None, step_size=None)[source]

Returns indices of the frames to be selected given the parameters.

Different selection styles are supported:

  • first : The first frames are selected

  • spread : Frames are selected to be taken from the whole video with equal spaces in between.

  • step : Frames are selected every step_size indices, starting at step_size/2. Think twice before using this when passing FrameContainer data!

  • all : All frames are selected unconditionally.

Parameters
  • count (int) – Total number of frames that are available

  • max_number_of_frames (int) – The maximum number of frames to be selected. Ignored when selection_style is “all”.

  • selection_style (str) – One of (first, spread, step, all). See above.

  • step_size (int) – Only useful when selection_style is step.

Returns

A range of frames to be selected.

Return type

range

Raises

ValueError – If selection_style is not one of the supported ones.
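
For illustration, the different selection styles could be invoked as follows; the comments restate the documented behaviour rather than guaranteed index values:

>>> from bob.bio.video import select_frames
>>> # At most 5 frames, spread evenly across a 100-frame video.
>>> spread = select_frames(count=100, max_number_of_frames=5, selection_style="spread")
>>> # Every 10th frame, starting around step_size / 2.
>>> stepped = select_frames(count=100, selection_style="step", step_size=10)
>>> # All frames, unconditionally.
>>> everything = select_frames(count=100, selection_style="all")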

bob.bio.video.video_wrap_skpipeline(sk_pipeline)[source]

This function takes a sklearn.pipeline.Pipeline and wraps each of its estimators with bob.bio.video.transformer.VideoWrapper.
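
A sketch of the intended use, assuming an ordinary scikit-learn pipeline; the individual steps are placeholders:

>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.decomposition import PCA
>>> from bob.bio.video import video_wrap_skpipeline
>>> # A regular image-level pipeline ...
>>> image_pipeline = make_pipeline(StandardScaler(), PCA(n_components=10))
>>> # ... whose estimators are each wrapped in a VideoWrapper so that the
>>> # pipeline can operate on video data.
>>> video_pipeline = video_wrap_skpipeline(image_pipeline)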

bob.bio.video.annotator.normalize_annotations(annotations, validator, max_age=-1)[source]

Normalizes the annotations of one video sequence. If the annotations of the current frame are not valid, they are filled in from previous frames.

Parameters
  • annotations (OrderedDict) – A dict of dicts, where the keys of the outer dict are frame indices as strings (starting from 0) and the inner dicts contain the annotations for that frame. The dictionary must be ordered for this to work.

  • validator (callable) – Takes a dict (annotations) and returns True if the annotations are valid. This can be a check based on minimal face size for example: see bob.bio.face.annotator.min_face_size_validator.

  • max_age (int, optional) – The number of frames for which a detected face remains valid if no detection occurs in the following frames. A value of -1 means forever.

Yields
  • str – The index of the frame.

  • dict – The corrected annotations of the frame.
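
A small illustrative call; the annotation keys are hypothetical and the validator simply checks that the frame's annotation dictionary is non-empty:

>>> from collections import OrderedDict
>>> from bob.bio.video.annotator import normalize_annotations
>>> # Frame "1" has no valid detection, so it should inherit the annotations
>>> # of frame "0" as long as they are at most max_age frames old.
>>> annotations = OrderedDict([
...     ("0", {"topleft": (10, 10), "bottomright": (50, 50)}),
...     ("1", {}),
...     ("2", {"topleft": (12, 11), "bottomright": (52, 51)}),
... ])
>>> def validator(annot):
...     return bool(annot)
>>> for frame_id, annot in normalize_annotations(annotations, validator, max_age=5):
...     print(frame_id, annot)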

class bob.bio.video.annotator.Base[source]

Bases: Annotator

The base class for video annotators.

static frame_ids_and_frames(frames)[source]

Takes the frames and yields frame_ids and frames.

Parameters

frames (bob.bio.video.VideoLikeContainer or bob.bio.video.VideoAsArray or numpy.array) – The frames of the video file.

Yields
  • frame_id (str) – A string that represents the frame id.

  • frame (numpy.array) – The frame of the video file as an array.

annotate(frames, **kwargs)[source]

Annotates videos.

Parameters

frames (bob.bio.video.VideoLikeContainer or bob.bio.video.VideoAsArray or numpy.array) – The frames of the video file.

Returns

A dictionary whose keys are frame ids (as strings) and whose values are dictionaries of annotations for that frame.

Return type

OrderedDict

Note

You can use the Base.frame_ids_and_frames function to normalize the input in your implementation.

transform(samples)[source]

Takes a batch of samples and annotates them.

Each value in kwargs is a list of parameters, one per element of samples (for example, with samples = [s1, s2, ...], kwargs['annotations'] should contain [{<s1_annotations>}, {<s2_annotations>}, ...]).
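
As a sketch of how a concrete video annotator might be built on this base class, the following uses frame_ids_and_frames as described above; the image_annotator callable is a hypothetical stand-in for a real image-level annotator:

>>> from collections import OrderedDict
>>> from bob.bio.video.annotator import Base
>>> class MyVideoAnnotator(Base):
...     """Runs a hypothetical image annotator on every selected frame."""
...     def __init__(self, image_annotator, **kwargs):
...         super().__init__(**kwargs)
...         self.image_annotator = image_annotator  # any callable taking one frame
...     def annotate(self, frames, **kwargs):
...         annotations = OrderedDict()
...         for frame_id, frame in self.frame_ids_and_frames(frames):
...             annotations[frame_id] = self.image_annotator(frame)
...         return annotations
>>> # MyVideoAnnotator(some_image_annotator).annotate(frames) would then return
>>> # an OrderedDict keyed by frame id, as documented above.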

class bob.bio.video.annotator.FailSafeVideo(annotators, max_age=15, validator=None, **kwargs)[source]

Bases: Base

A fail-safe video annotator. It runs several annotators in order, falling back to the next one when the previous one fails. The difference from bob.bio.base.annotator.FailSafe is that this annotator first tries to reuse (still valid) annotations from older frames before moving on to the next annotator.

Warning

Be careful when using this annotator, since different annotators can produce different results. For example, the bounding box returned by one annotator may be entirely different from that of another.

Parameters
  • annotators (list) – A list of annotators to try.

  • max_age (int) – The maximum number of subsequent frames for which an annotation remains valid. This value should be positive. If you want max_age to be infinite, use bob.bio.video.annotator.Wrapper instead.

  • validator (callable) – A function that takes the annotations of a frame and validates them.

Please see Base for more accepted parameters.

annotate(frames)[source]

See Base.annotate
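
An illustrative construction; the two annotators below are placeholder callables that return an annotation dictionary per frame (an empty dictionary standing for a failed detection), and whether plain callables are accepted in place of Annotator instances is an assumption:

>>> from bob.bio.video.annotator import FailSafeVideo
>>> # Placeholder annotators; real ones would come from e.g. bob.bio.face.
>>> def primary_annotator(frame, **kwargs):
...     return {}
>>> def backup_annotator(frame, **kwargs):
...     return {}
>>> fail_safe = FailSafeVideo(
...     annotators=[primary_annotator, backup_annotator],
...     max_age=15,
...     validator=lambda annot: bool(annot),
... )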

class bob.bio.video.annotator.Wrapper(annotator, normalize=False, validator=None, max_age=-1, **kwargs)[source]

Bases: Base

Annotates video files using the provided image annotator. See the documentation of Base too.

Parameters
  • annotator (callable) – The image annotator to run on each frame.

  • normalize (bool) – If True, missing annotations are filled in from previous frames (see normalize_annotations).

  • validator (callable) – A function that takes the annotations of a frame and validates them.

  • max_age (int) – See normalize_annotations. A value of -1 means forever.

Please see Base for more accepted parameters.

Warning

You should set normalize to True only if you are annotating all frames of the video file.

annotate(frames)[source]

See Base.annotate
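
A sketch of wrapping an image annotator so that it runs on every selected frame; the image annotator and the keys it returns are placeholders:

>>> from bob.bio.video.annotator import Wrapper
>>> # Placeholder image annotator returning a (hypothetical) annotation dictionary.
>>> def my_image_annotator(frame, **kwargs):
...     return {"topleft": (0, 0), "bottomright": (10, 10)}
>>> video_annotator = Wrapper(
...     annotator=my_image_annotator,
...     normalize=False,  # set True only when annotating all frames (see warning)
...     max_age=-1,
... )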

class bob.bio.video.transformer.VideoWrapper(estimator, **kwargs)[source]

Bases: TransformerMixin, BaseEstimator

Wrapper class to run image preprocessing algorithms on video data.

Parameters

estimator (str or sklearn.base.BaseEstimator) – The transformer to be used to preprocess the frames.

transform(videos, **kwargs)[source]
fit(X, y=None, **fit_params)[source]

Does nothing
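
A minimal construction sketch; StandardScaler stands in for any image-level transformer, and the claim that the wrapped estimator is then applied to the frames of each video is taken from the class description above:

>>> from sklearn.preprocessing import StandardScaler
>>> from bob.bio.video.transformer import VideoWrapper
>>> # Wrap a plain scikit-learn transformer so that it can run on video data.
>>> frame_wise = VideoWrapper(estimator=StandardScaler())
>>> # frame_wise.transform(videos) is then expected to accept a list of
>>> # video containers, e.g. VideoLikeContainer objects (this is an assumption).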

class bob.bio.video.database.VideoBioFile(client_id, path, file_id, original_directory=None, original_extension='.avi', annotation_directory=None, annotation_extension=None, annotation_type=None, selection_style=None, max_number_of_frames=None, step_size=None, **kwargs)

Bases: BioFile

load()[source]

Loads the data from the specified location using the given extension. Override it if you need to load differently.

Parameters
  • original_directory (str (optional)) – The path to the root of the dataset structure. If None, will try to use self.original_directory.

  • original_extension (str (optional)) – The filename extension of every file in the dataset. If None, will try to use self.original_extension.

Returns

The loaded data (normally numpy.ndarray).

Return type

object
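
An illustrative construction with placeholder identifiers and paths; the frame-selection keywords are passed through as in the signature above, and the load() call assumes the referenced video file actually exists:

>>> from bob.bio.video.database import VideoBioFile
>>> # client_id, path, and file_id are placeholders for real database entries.
>>> bio_file = VideoBioFile(
...     client_id="subject_01",
...     path="subject_01/video_001",
...     file_id=1,
...     original_directory="/path/to/dataset",
...     original_extension=".avi",
...     selection_style="spread",
...     max_number_of_frames=20,
... )
>>> data = bio_file.load()  # loads only the selected frames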

class bob.bio.video.database.YoutubeDatabase(protocol, annotation_type='bounding-box', fixed_positions=None, original_directory='', extension='.jpg', annotation_extension='.labeled_faces.txt', frame_selector=None)

Bases: Database

This package contains the access API and descriptions for the YouTube Faces database. It only contains the Bob accessor methods to use the DB directly from python, with our certified protocols. The actual raw data for the YouTube Faces database should be downloaded from the original URL (though we were not able to contact the corresponding Professor).

Warning

To use this dataset protocol, you need to have the original files of the YOUTUBE datasets. Once you have it downloaded, please run the following command to set the path for Bob

bob config set bob.bio.face.youtube.directory [YOUTUBE PATH]

In this interface we implement the 10 original protocols of the YouTube Faces database (‘fold1’, ‘fold2’, ‘fold3’, ‘fold4’, ‘fold5’, ‘fold6’, ‘fold7’, ‘fold8’, ‘fold9’, ‘fold10’).

The code below allows you to fetch the gallery and probes of the “fold0” protocol.

>>> from bob.bio.video.database import YoutubeDatabase
>>> youtube = YoutubeDatabase(protocol="fold0")
>>>
>>> # Fetching the gallery
>>> references = youtube.references()
>>> # Fetching the probes
>>> probes = youtube.probes()
Parameters
  • protocol (str) – One of the above-mentioned YouTube protocols

  • annotation_type (str) – One of the supported annotation types

  • original_directory (str) – The path to the directory containing the original data

  • extension (str) – The file extension of the original data files

  • annotation_extension (str) – The file extension of the annotation files

  • frame_selector – Pointer to a function that does frame selection.

all_samples()[source]

Returns all the samples of the dataset

Parameters

groups (list or None) – List of groups to consider (like ‘dev’ or ‘eval’). If None, will return samples from all the groups.

Returns

samples – List of all the samples of the dataset.

Return type

list

background_model_samples()[source]
groups()[source]
load_file_client_id()[source]
probes(group='dev')[source]

Returns probes to score biometric references

Parameters

group (str) – Limits samples to this group

Returns

probes – List of samples for the creation of biometric probes.

Return type

list

static protocols()[source]
references(group='dev')[source]

Returns the samples used to enroll biometric references

Parameters

group (str, optional) – Limits samples to this group

Returns

references – List of samples for the creation of biometric references.

Return type

list

static urls()[source]