Tools implemented in bob.bio.video

Summary

bob.bio.video.FrameSelector([…])

A class for selecting frames from videos.

bob.bio.video.FrameContainer([hdf5, …])

A class for reading, manipulating and saving video content.

bob.bio.video.preprocessor.Wrapper([…])

Wrapper class to run image preprocessing algorithms on video data.

bob.bio.video.extractor.Wrapper(extractor[, …])

Wrapper class to run feature extraction algorithms on frame containers.

bob.bio.video.algorithm.Wrapper(algorithm[, …])

Wrapper class to run face recognition algorithms on video data.

Annotators

bob.bio.video.annotator.Base([…])

The base class for video annotators.

bob.bio.video.annotator.Wrapper(annotator[, …])

Annotates video files using the provided image annotator.

bob.bio.video.annotator.FailSafeVideo(annotators)

A fail-safe video annotator.

Databases

bob.bio.video.database.MobioBioDatabase([…])

MOBIO database implementation of the bob.bio.base.database.ZTBioDatabase interface.

bob.bio.video.database.YoutubeBioDatabase([…])

YouTube Faces database implementation of bob.bio.base.database.ZTBioDatabase interface.

Details

bob.bio.video.get_config()[source]

Returns a string containing the configuration information.

class bob.bio.video.FrameContainer(hdf5=None, load_function=<function load>, **kwargs)[source]

Bases: object

A class for reading, manipulating and saving video content.

add(frame_id, frame, quality=None)[source]

Adds the frame with the given id and the given quality.

load(hdf5, load_function=<function load>, selection_style='all', max_number_of_frames=20, step_size=10)[source]

Loads a previously saved FrameContainer into the current FrameContainer.

Parameters
  • hdf5 (bob.io.base.HDF5File) – An opened HDF5 file to load the data from

  • load_function (callable, optional) – the function to be used on the hdf5 object to load each frame

  • selection_style (str, optional) – See select_frames

  • max_number_of_frames (int, optional) – See select_frames

  • step_size (int, optional) – See select_frames

Returns

returns itself.

Return type

object

Raises
  • IOError – If no frames can be loaded from the hdf5 file.

  • ValueError – If the selection_style is all and you are trying to load an old format FrameContainer.

save(hdf5, save_function=<function save>)[source]

Save the content to the given HDF5 File. The contained data will be written using the given save_function.

is_similar_to(other)[source]
as_array()[source]

Returns the data of frames as a numpy array.

Returns

The frames returned as an array with shape (n_frames, …), like a video.

Return type

numpy.ndarray
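A minimal usage sketch follows. The frame data and the file name are synthetic, and closing the HDF5 file by deleting the handle is assumed to follow the usual bob.io.base idiom.

    import numpy
    import bob.io.base
    import bob.bio.video

    # build a container of five synthetic gray-scale frames
    fc = bob.bio.video.FrameContainer()
    for index in range(5):
        frame = numpy.random.rand(64, 64)
        fc.add(frame_id=str(index), frame=frame, quality=1.0)

    # stack the frames into a single (n_frames, ...) array
    video_like = fc.as_array()

    # write the container to an HDF5 file and read it back
    hdf5 = bob.io.base.HDF5File("frames.hdf5", "w")
    fc.save(hdf5)
    del hdf5  # release the handle, which closes the file

    hdf5 = bob.io.base.HDF5File("frames.hdf5", "r")
    restored = bob.bio.video.FrameContainer().load(hdf5)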

class bob.bio.video.FrameSelector(max_number_of_frames=20, selection_style='spread', step_size=10)[source]

Bases: object

A class for selecting frames from videos. In total, up to max_number_of_frames frames are selected (unless the selection style is all).

Different selection styles are supported:

  • first : The first frames are selected

  • spread : Frames are selected to be taken from the whole video

  • step : Frames are selected every step_size indices, starting at step_size/2. Think twice before using this style when the input is FrameContainer data!

  • all : All frames are stored unconditionally

  • quality (only valid for FrameContainer data) : Select the frames based on the highest internally stored quality value
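A short construction sketch with arbitrary parameter values; the resulting selector is typically handed to the video Wrapper classes documented below, which apply it to the loaded video data.

    import bob.bio.video

    # keep at most 10 frames, spread evenly over the whole sequence
    spread_selector = bob.bio.video.FrameSelector(max_number_of_frames=10,
                                                  selection_style='spread')

    # take every 5th frame instead (starting around index step_size/2)
    step_selector = bob.bio.video.FrameSelector(selection_style='step',
                                                step_size=5)

    # keep all frames unconditionally
    all_selector = bob.bio.video.FrameSelector(selection_style='all')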

bob.bio.video.load_compressed(filename, load_function)[source]
bob.bio.video.save_compressed(frame_container, filename, save_function, create_link=True)[source]
bob.bio.video.select_frames(count, max_number_of_frames, selection_style, step_size)[source]

Returns indices of the frames to be selected given the parameters.

Different selection styles are supported:

  • first : The first frames are selected

  • spread : Frames are selected to be taken from the whole video with equal spaces in between.

  • step : Frames are selected every step_size indices, starting at step_size/2. Think twice before using this style when the input is FrameContainer data!

  • all : All frames are selected unconditionally.

Parameters
  • count (int) – Total number of frames that are available

  • max_number_of_frames (int) – The maximum number of frames to be selected. Ignored when selection_style is “all”.

  • selection_style (str) – One of (first, spread, step, all). See above.

  • step_size (int) – Only useful when selection_style is step.

Returns

A range of frames to be selected.

Return type

range

Raises

ValueError – If selection_style is not one of the supported ones.
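A usage sketch with arbitrary example values:

    import bob.bio.video

    # 100 frames available; keep at most 10, spread evenly over the video
    indices = bob.bio.video.select_frames(count=100,
                                          max_number_of_frames=10,
                                          selection_style='spread',
                                          step_size=10)

    # 'step' style: every 20th frame, starting around step_size / 2 = 10
    step_indices = bob.bio.video.select_frames(100, 10, 'step', 20)

    # 'all' style: every frame; max_number_of_frames is ignored
    all_indices = bob.bio.video.select_frames(100, 10, 'all', 10)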

bob.bio.video.annotator.normalize_annotations(annotations, validator, max_age=-1)[source]

Normalizes the annotations of one video sequence. It fills the annotations for frames from previous ones if the annotation for the current frame is not valid.

Parameters
  • annotations (collections.OrderedDict) – A dict of dicts, where the keys of the outer dict are frame indices as strings (starting from 0) and the inner dicts contain the annotations for that frame. The dictionary needs to be an ordered dict in order for this to work.

  • validator (callable) – Takes a dict (annotations) and returns True if the annotations are valid. This can be a check based on minimal face size for example: see bob.bio.face.annotator.min_face_size_validator.

  • max_age (int, optional) – An integer indicating for how many subsequent frames a detected face remains valid if no detection occurs in those frames. A value of -1 means forever.

Yields
  • str – The index of the frame.

  • dict – The corrected annotations of the frame.
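A minimal sketch, assuming bounding-box style annotations; the keys and values below are made up for illustration.

    import collections
    from bob.bio.video.annotator import normalize_annotations

    # annotations for three frames; the detection on frame '1' failed
    annotations = collections.OrderedDict([
        ('0', {'topleft': (10, 10), 'bottomright': (110, 110)}),
        ('1', {}),
        ('2', {'topleft': (12, 12), 'bottomright': (112, 112)}),
    ])

    def validator(annot):
        # a frame is considered valid if it carries a bounding box
        return bool(annot) and 'topleft' in annot

    for frame_id, annot in normalize_annotations(annotations, validator, max_age=5):
        print(frame_id, annot)
    # frame '1' is filled with the annotations of frame '0', since that
    # annotation is at most 5 frames old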

class bob.bio.video.annotator.Base(frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, read_original_data=None, **kwargs)

Bases: bob.bio.base.annotator.Annotator

The base class for video annotators.

Parameters
  • frame_selector (bob.bio.video.FrameSelector) – A frame selector that defines which frames of the video to use.

  • read_original_data (callable) – A function with the signature of data = read_original_data(biofile, directory, extension) that will be used to load the data from biofiles. By default the frame_selector is used to load the data.

annotate(frames, **kwargs)[source]

Annotates videos.

Parameters

frames (bob.bio.video.FrameContainer or an iterable of arrays) – The frames of the video file to annotate.

Returns

A dictionary whose keys are frame ids (as strings) and whose values are dictionaries containing the annotations for that frame.

Return type

collections.OrderedDict

Note

You can use the Base.frame_ids_and_frames function to normalize the input in your implementation.

static frame_ids_and_frames(frames)[source]

Takes the frames and yields frame_ids and frames.

Parameters

frames (bob.bio.video.FrameContainer or an iterable of arrays) – The frames of the video file.

Yields
  • frame_id (str) – A string that represents the frame id.

  • frame (numpy.array) – The frame of the video file as an array.

class bob.bio.video.annotator.Wrapper(annotator, normalize=False, validator=<function min_face_size_validator>, max_age=-1, **kwargs)

Bases: bob.bio.video.annotator.Base

Annotates video files using the provided image annotator. See the documentation of Base too.

Parameters

Please see Base for more accepted parameters.

Warning

You should set normalize to True only if you are annotating all frames of the video file.

annotate(frames, **kwargs)[source]

See Base.annotate
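A minimal sketch of wrapping an image annotator so it runs on every frame; the FixedBoxAnnotator below is a hypothetical toy annotator, not part of the library.

    import bob.bio.video
    from bob.bio.base.annotator import Annotator


    class FixedBoxAnnotator(Annotator):
        """Toy image annotator that always reports the full image as the face."""

        def annotate(self, image, **kwargs):
            return {'topleft': (0, 0),
                    'bottomright': (image.shape[-2], image.shape[-1])}


    # run the toy image annotator on every frame of a video
    video_annotator = bob.bio.video.annotator.Wrapper(FixedBoxAnnotator())

    # `frames` would be a bob.bio.video.FrameContainer (or an iterable of
    # arrays); the result maps each frame id to the annotations of that frame:
    # annotations = video_annotator.annotate(frames)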

class bob.bio.video.annotator.FailSafeVideo(annotators, max_age=15, validator=<function min_face_size_validator>, **kwargs)

Bases: bob.bio.video.annotator.Base

A fail-safe video annotator. It tries several annotators in order and tries the next one if the previous one fails. However, the difference between this annotator and bob.bio.base.annotator.FailSafe is that this one tries to use annotations from older frames (if valid) before trying the next annotator.

Warning

You must be careful when using this annotator, since different annotators can produce different results. For example, the bounding box from one annotator could be totally different from that of another annotator.

Parameters
  • annotators (list) – A list of annotators to try.

  • max_age (int) – The maximum number of subsequent frames for which an annotation remains valid. This value should be positive. If you want to set max_age to infinite, use bob.bio.video.annotator.Wrapper instead.

  • validator (callable) – A function that takes the annotations of a frame and validates it.

Please see Base for more accepted parameters.

annotate(frames, **kwargs)[source]

See Base.annotate

class bob.bio.video.preprocessor.Wrapper(preprocessor='landmark-detect', frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, quality_function=None, compressed_io=False, read_original_data=None)

Bases: bob.bio.base.preprocessor.Preprocessor

Wrapper class to run image preprocessing algorithms on video data.

This class provides functionality to read original video data from several databases. So far, the video content from bob.db.mobio and the image list content from bob.db.youtube are supported.

Furthermore, frames are extracted from these video data, and a preprocessor algorithm is applied on all selected frames. The preprocessor can either be provided as a registered resource, i.e., one of the registered Preprocessors, or as an instance of a preprocessing class. Since most of the databases do not provide annotations for all frames of the videos, the preprocessor commonly needs to apply face detection.

The frame_selector can be chosen to select some frames from the video. By default, a few frames spread over the whole video sequence are selected.

The quality_function is used to assess the quality of the frame. If no quality_function is given, the quality is based on the face detector, or simply left as None. So far, the quality of the frames is not used, but it is foreseen to select frames based on quality.

Parameters:

preprocessor : str or bob.bio.base.preprocessor.Preprocessor instance

The preprocessor to be used to preprocess the frames.

frame_selector : bob.bio.video.FrameSelector

A frame selector that defines which frames of the video to use.

quality_function : function or None

A function assessing the quality of the preprocessed image. If None, no quality assessment is performed. If the preprocessor contains a quality attribute, this is taken instead.

compressed_io : bool

Use compression to write the resulting preprocessed HDF5 files. This is experimental and might cause trouble. Use this flag with care.

read_original_data : callable or None

Function that loads the raw data. If not explicitly defined, the raw data will be loaded by bob.bio.video.database.VideoBioFile.load() using the specified frame_selector.
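A construction sketch; 'face-detect' is assumed to be the name of a registered image preprocessor resource (for example from bob.bio.face) and may need to be replaced by a resource available in your installation, or by a preprocessor instance.

    import bob.bio.video

    # select up to 20 frames spread over each video and run the image
    # preprocessor on every selected frame
    video_preprocessor = bob.bio.video.preprocessor.Wrapper(
        preprocessor='face-detect',   # assumed registered resource name
        frame_selector=bob.bio.video.FrameSelector(max_number_of_frames=20,
                                                   selection_style='spread'),
        compressed_io=False,
    )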

read_data(filename) → frames[source]

Reads the preprocessed data from file and returns them in a frame container. The preprocessor's read_data function is used to read the data for each frame.

Parameters:

filename : str

The name of the preprocessed data file.

Returns:

frames : bob.bio.video.FrameContainer

The read frames, stored in a frame container.

write_data(frames, filename)[source]

Writes the preprocessed data to file.

The preprocessor's write_data function is used to write the data for each frame.

Parameters:

frames : bob.bio.video.FrameContainer

The preprocessed frames, as returned by the __call__ function.

filename : str

The name of the preprocessed data file to write.

class bob.bio.video.extractor.Wrapper(extractor, frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, compressed_io=False)

Bases: bob.bio.base.extractor.Extractor

Wrapper class to run feature extraction algorithms on frame containers.

Features are extracted for all frames in the frame container using the provided extractor. The extractor can either be provided as a registered resource, i.e., one of the registered Feature extractors, or as an instance of an extractor class.

The frame_selector can be chosen to select some frames from the frame container. By default, all frames from the previous preprocessing step are kept, but fewer frames might be selected in this stage.

Parameters:

extractor : str or bob.bio.base.extractor.Extractor instance

The extractor to be used to extract features from the frames.

frame_selector : bob.bio.video.FrameSelector

A frame selector that defines which frames of the preprocessed frame container to use.

compressed_io : bool

Use compression to write the resulting features to HDF5 files. This is experimental and might cause trouble. Use this flag with care.
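A construction sketch; 'dct-blocks' is assumed to be the name of a registered image feature extractor resource and may need to be replaced by a resource available in your installation, or by an extractor instance.

    import bob.bio.video

    # extract features from every frame kept by the preprocessing step
    video_extractor = bob.bio.video.extractor.Wrapper(
        extractor='dct-blocks',   # assumed registered resource name
        frame_selector=bob.bio.video.FrameSelector(selection_style='all'),
    )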

load(extractor_file)[source]

Loads the trained extractor from file.

This function calls the wrapped class's load function.

extractor_file : str

The name of the extractor that should be loaded.

read_feature(filename) → frames[source]

Reads the extracted data from file and returns them in a frame container. The extractor's read_feature function is used to read the data for each frame.

Parameters:

filename : str

The name of the extracted data file.

Returns:

frames : bob.bio.video.FrameContainer

The read frames, stored in a frame container.

train(training_frames, extractor_file)[source]

Trains the feature extractor with the preprocessed data of the given frames.

Note

This function is not called, when the given extractor does not require training.

This function will train the feature extractor using all data from the selected frames of the training data. The training_frames must be aligned by client if the given extractor requires that.

Parameters:

training_frames : [bob.bio.video.FrameContainer] or [[bob.bio.video.FrameContainer]]

The set of training frames, which will be used to train the extractor.

extractor_file : str

The name of the extractor that should be written.

write_feature(frames, filename)[source]

Writes the extracted features to file.

The extractor's write_feature function is used to write the features for each frame.

Parameters:

frames : bob.bio.video.FrameContainer

The extracted features for the selected frames, as returned by the __call__ function.

filename : str

The file name to write the extracted feature into.

class bob.bio.video.algorithm.Wrapper(algorithm, frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, compressed_io=False)

Bases: bob.bio.base.algorithm.Algorithm

Wrapper class to run face recognition algorithms on video data.

This class provides a generic interface for all face recognition algorithms to use several frames of a video. The algorithm can either be provided as a registered resource, or as an instance of an algorithm class. Already in previous stages, features were extracted from only some selected frames of the video. This algorithm now uses these features to perform face recognition, i.e., by enrolling a model from several frames (possibly of several videos), and fusing scores from several model frames and several probe frames. Since the functionality to handle several images for enrollment and probing is already implemented in the wrapped class, here we only care about providing the right data at the right time.

Parameters:

algorithm : str or bob.bio.base.algorithm.Algorithm instance

The algorithm to be used.

frame_selector : bob.bio.video.FrameSelector

A frame selector that defines which frames of the extracted features of the frame container to use. By default, all features are selected.

compressed_io : bool

Use compression to write the projected features to HDF5 files. This is experimental and might cause trouble. Use this flag with care.
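A construction sketch; 'gmm' is assumed to be the name of a registered algorithm resource (for example from bob.bio.gmm) and may need to be replaced by a resource available in your installation, or by an algorithm instance.

    import bob.bio.video

    # enroll models and compute scores frame-wise, letting the wrapped
    # algorithm fuse the per-frame results
    video_algorithm = bob.bio.video.algorithm.Wrapper(
        'gmm',                 # assumed registered algorithm resource name
        compressed_io=False,
    )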

enroll(enroll_frames) → model[source]

Enrolls the model from features of all selected frames of all enrollment videos for the current client.

This function collects all desired frames from all enrollment videos and enrolls a model with them, using the algorithm's enroll function.

Parameters:

enroll_frames : [bob.bio.video.FrameContainer]

Extracted or projected features from one or several videos of the same client.

Returns:

model : object

The model as created by the algorithm's enroll function.

load_enroller(enroller_file)[source]

Loads the trained enroller from file.

This function calls the wrapped class's load_enroller function.

enroller_file : str

The name of the enroller that should be loaded.

load_projector(projector_file)[source]

Loads the trained projector from file.

This function calls the wrapped class's load_projector function.

projector_file : str

The name of the projector that should be loaded.

project(frames) → projected[source]

Projects the features of the extracted frames and returns a frame container.

This function is used to project features using the desired algorithm for all frames that are selected by the frame_selector specified in the constructor of this class.

Parameters:

frames : bob.bio.video.FrameContainer

The frame container containing extracted feature frames.

Returns:

projected : bob.bio.video.FrameContainer

A frame container containing projected features.

read_feature(projected_file) → frames[source]

Reads the projected data from file and returns them in a frame container. The algorithm's read_feature function is used to read the data for each frame.

Parameters:

projected_file : str

The name of the projected data file.

Returns:

frames : bob.bio.video.FrameContainer

The read frames, stored in a frame container.

read_model(filename)[source]

Reads the model using the algorithm's read_model function.

Parameters:

filename : str

The file name to read the model from.

Returns:

model : object

The model read from file.

score(model, probe) → score[source]

Computes the score between the given model and the probe.

As the probe is a frame container, several scores are computed, one for each frame of the probe. This is achieved by using the algorithm's score_for_multiple_probes function. The final result is, hence, a fusion of several scores.

Parameters:

model : object

The model in the type desired by the wrapped algorithm.

probe : bob.bio.video.FrameContainer

The selected frames from the probe object, containing the probe features as desired by the wrapped algorithm.

Returns:

score : float

A fused score between the given model and all probe frames.

score_for_multiple_models(models, probe) → score[source]

This function computes the score between the given model list and the given probe. In this base class implementation, it computes the scores for each model using the score() method, and fuses the scores using the fusion method specified in the constructor of this class. Usually this function is called from derived class score() functions.

Parameters:

models : [object]

A list of model objects.

probe : object

The probe object to compare the models with.

Returns:

score : float

The fused similarity between the given models and the probe.

score_for_multiple_probes(model, probes) → score[source]

Computes the score between the given model and the given list of probes.

As each probe is a frame container, several scores are computed, one for each frame of each probe. This is achieved by using the algorithm's score_for_multiple_probes function. The final result is, hence, a fusion of several scores.

Parameters:

model : object

The model in the type desired by the wrapped algorithm.

probes : [bob.bio.video.FrameContainer]

The selected frames from the probe objects, containing the probe features as desired by the wrapped algorithm.

Returns:

score : float

A fused score between the given model and all probe frames.

train_enroller(training_frames, enroller_file)[source]

Trains the enroller with the features of the given frames.

Note

This function is not called, when the given algorithm does not require enroller training.

This function will train the enroller using all data from the selected frames of the training data.

Parameters:

training_frames : [[bob.bio.video.FrameContainer]]

The set of training frames aligned by client, which will be used to perform enroller training of the algorithm.

enroller_file : str

The name of the enroller that should be written.

train_projector(training_frames, projector_file)[source]

Trains the projector with the features of the given frames.

Note

This function is not called, when the given algorithm does not require projector training.

This function will train the projector using all data from the selected frames of the training data. The training_frames must be aligned by client if the given algorithm requires that.

Parameters:

training_frames : [bob.bio.video.FrameContainer] or [[bob.bio.video.FrameContainer]]

The set of training frames, which will be used to perform projector training of the algorithm.

projector_file : str

The name of the projector that should be written.

write_feature(frames, projected_file)[source]

Writes the projected features to file.

The algorithm's write_feature function is used to write the features for each frame.

Parameters:

frames : bob.bio.video.FrameContainer

The projected features for the selected frames, as returned by the project() function.

projected_file : str

The file name to write the projected feature into.

write_model(model, filename)[source]

Writes the model using the algorithm’s write_model function.

Parameters:

model : object

The model returned by the enroll() function.

filename : str

The file name of the model to write.

class bob.bio.video.database.VideoBioFile(client_id, path, file_id, **kwargs)

Bases: bob.bio.base.database.BioFile

load(directory=None, extension='.avi', frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>)[source]

Loads the data at the specified location and using the given extension. Override it if you need to load differently.

Parameters
  • directory (str, optional) – If not empty or None, this directory is prefixed to the final file destination

  • extension (str, optional) – If not empty or None, this extension is suffixed to the final file destination

Returns

The loaded data (normally numpy.ndarray).

Return type

object
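A sketch of loading the selected frames of a single database file; the directory, extension and the biofile variable are hypothetical.

    import bob.bio.video

    selector = bob.bio.video.FrameSelector(max_number_of_frames=20,
                                           selection_style='spread')

    # `biofile` stands for a bob.bio.video.database.VideoBioFile obtained from
    # a database query such as database.objects(...):
    # frames = biofile.load(directory='/path/to/videos', extension='.avi',
    #                       frame_selector=selector)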

class bob.bio.video.database.MobioBioDatabase(original_directory=None, original_extension=None, annotation_directory=None, annotation_extension='.pos', **kwargs)

Bases: bob.bio.base.database.ZTBioDatabase

MOBIO database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to the MOBIO database, for verification experiments (good to use in the bob.bio.base framework).
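A construction sketch with hypothetical paths; the protocol name used in the query is only an example and must exist in bob.db.mobio.

    import bob.bio.video

    # paths depend on your local copy of the raw MOBIO data (hypothetical here)
    database = bob.bio.video.database.MobioBioDatabase(
        original_directory='/path/to/mobio/videos',
        original_extension='.mp4',   # extension of the raw video files (assumed)
    )

    # query the probe files of the development group for one protocol
    files = database.objects(groups='dev', protocol='mobile0-male',
                             purposes='probe')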

annotations(myfile)[source]

Annotations are not available when using videos

model_ids_with_protocol(groups = None, protocol = None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups of which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical. In cases where there is one model per file, model ids and file ids are identical. But there might also be other cases.

tmodel_ids_with_protocol(protocol=None, groups=None, **kwargs)[source]

This function returns the ids of the T-Norm models of the given groups for the given protocol.

Keyword parameters:

groups : str or [str]

The groups of which the model ids should be returned. Usually, groups are one or more elements of (‘dev’, ‘eval’).

protocol : str

The protocol for which the model ids should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

tobjects(groups=None, protocol=None, model_ids=None, **kwargs)[source]

This function returns the File objects of the T-Norm models of the given groups for the given protocol and the given model ids.

Keyword parameters:

groups : str or [str]

The groups of which the model ids should be returned. Usually, groups are one or more elements of (‘dev’, ‘eval’).

protocol : str

The protocol for which the model ids should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical. In cases where there is one model per file, model ids and file ids are identical. But there might also be other cases.

zobjects(groups=None, protocol=None, **kwargs)[source]

This function returns the File objects of the Z-Norm impostor files of the given groups for the given protocol.

Keyword parameters:

groups : str or [str]

The groups of which the model ids should be returned. Usually, groups are one or more elements of (‘dev’, ‘eval’).

protocol : str

The protocol for which the model ids should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

class bob.bio.video.database.YoutubeBioDatabase(original_directory=None, original_extension='.jpg', annotation_extension='.labeled_faces.txt', **kwargs)

Bases: bob.bio.base.database.ZTBioDatabase

YouTube Faces database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to the bob.db.youtube.Database, for verification experiments (good to use in the bob.bio framework).
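A construction sketch with a hypothetical directory; the protocol name used in the query is only an example and must exist in bob.db.youtube.

    import bob.bio.video

    # the directory points to the extracted frame images of the YouTube Faces
    # data (hypothetical path)
    database = bob.bio.video.database.YoutubeBioDatabase(
        original_directory='/path/to/youtube/frame_images_DB',
        original_extension='.jpg',
    )

    # query enrollment files of the development group for one of the folds
    files = database.objects(groups='dev', protocol='fold1', purposes='enroll')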

annotations(myfile)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

client_id_from_model_id(model_id, group='dev')[source]

Return the client id associated with the given model id. In this base class implementation, it is assumed that only one model is enrolled for each client and, thus, client id and model id are identical. All keyword arguments are ignored. Please override this function in derived class implementations to change this behavior.

model_ids_with_protocol(groups = None, protocol = None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups of which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical. In cases where there is one model per file, model ids and file ids are identical. But there might also be other cases.

property original_directory
tmodel_ids_with_protocol(protocol=None, groups=None, **kwargs)[source]

This function returns the ids of the T-Norm models of the given groups for the given protocol.

Keyword parameters:

groups : str or [str]

The groups of which the model ids should be returned. Usually, groups are one or more elements of (‘dev’, ‘eval’).

protocol : str

The protocol for which the model ids should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

tobjects(groups=None, protocol=None, model_ids=None, **kwargs)[source]

This function returns the File objects of the T-Norm models of the given groups for the given protocol and the given model ids.

Keyword parameters:

groups : str or [str]

The groups of which the model ids should be returned. Usually, groups are one or more elements of (‘dev’, ‘eval’).

protocol : str

The protocol for which the model ids should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical. In cases where there is one model per file, model ids and file ids are identical. But there might also be other cases.

zobjects(groups=None, protocol=None, **kwargs)[source]

This function returns the File objects of the Z-Norm impostor files of the given groups for the given protocol.

Keyword parameters:

groups : str or [str]

The groups of which the model ids should be returned. Usually, groups are one or more elements of (‘dev’, ‘eval’).

protocol : str

The protocol for which the model ids should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

class bob.bio.video.database.ReplayMobileVideoBioDatabase(original_directory=None, original_extension='.mov', annotation_directory=None, annotation_extension='.json', annotation_type='json', **kwargs)

Bases: bob.bio.base.database.BioDatabase

ReplayMobile database implementation of the bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to the ReplayMobile database, for verification experiments (good to use in the bob.bio.base framework).

property annotation_directory
property annotation_extension
property annotation_type
annotations(file)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

arrange_by_client(files) → files_by_client[source]

Arranges the given list of files by client id. This function returns a list of lists of BioFile objects.

Parameters:

files : [bob.bio.base.database.BioFile]

A list of files that should be split up by BioFile.client_id.

Returns:

files_by_client : [[bob.bio.base.database.BioFile]]

The list of lists of files, where each sub-list groups the files with the same BioFile.client_id.

groups()[source]

Returns the names of all registered groups in the database.

Keyword parameters:

protocol : str

The protocol for which the groups should be retrieved. If you do not have protocols defined, just ignore this field.

model_ids_with_protocol(groups = None, protocol = None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups of which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical. In cases where there is one model per file, model ids and file ids are identical. But there might also be other cases.

property original_directory
property original_extension
protocol_names()[source]

Returns all registered protocol names. Note that the number of protocols is doubled here with -licit and -spoof variants; this is done to support vulnerability analysis.