Tools implemented in bob.bio.video¶
Summary¶
- bob.bio.video.select_frames: Returns indices of the frames to be selected given the parameters.
- bob.bio.video.VideoAsArray: A memory efficient class to load only select video frames.
- bob.bio.video.VideoLikeContainer: A container holding video frames together with their frame indices.
- bob.bio.video.transformer.VideoWrapper: Wrapper class to run image preprocessing algorithms on video data.
- bob.bio.video.annotator.Base: The base class for video annotators.
- bob.bio.video.annotator.Wrapper: Annotates video files using the provided image annotator.
- bob.bio.video.annotator.FailSafeVideo: A fail-safe video annotator.
- bob.bio.video.video_wrap_skpipeline: Wraps each estimator of a sklearn.Pipeline with bob.bio.video.transformer.VideoWrapper.
Databases¶
- bob.bio.video.database.YoutubeDatabase: The access API and descriptions for the YouTube Faces database.
Details¶
- class bob.bio.video.VideoAsArray(path, selection_style=None, max_number_of_frames=None, step_size=None, transform=None, **kwargs)¶
Bases: object
A memory efficient class to load only select video frames. It also supports efficient conversion to dask arrays.
- class bob.bio.video.VideoLikeContainer(data, indices, **kwargs)¶
Bases: object
- property dtype¶
- property ndim¶
- property shape¶
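To illustrate the shape of this container, here is a hypothetical pure-Python sketch: frame data stored next to the original frame indices, with array-like properties delegated to the data. The class name and constructor below are illustrative only; the real bob.bio.video.VideoLikeContainer offers more functionality.

```python
import numpy as np


class VideoLikeContainerSketch:
    """Hypothetical sketch: video frames plus their original frame indices."""

    def __init__(self, data, indices):
        self.data = np.asarray(data)
        self.indices = list(indices)

    @property
    def dtype(self):
        # Delegate array-like properties to the underlying frame data.
        return self.data.dtype

    @property
    def ndim(self):
        return self.data.ndim

    @property
    def shape(self):
        return self.data.shape
```

Keeping the indices alongside the data lets a selection of frames (e.g. every 10th frame) remember where each frame came from in the original video.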
- bob.bio.video.select_frames(count, max_number_of_frames=None, selection_style=None, step_size=None)[source]¶
Returns indices of the frames to be selected given the parameters.
Different selection styles are supported:
first : The first max_number_of_frames frames are selected.
spread : Frames are selected from the whole video with equal spacing in between.
step : Frames are selected every step_size indices, starting at step_size/2. Think twice if you want that when giving FrameContainer data!
all : All frames are selected unconditionally.
- Parameters:
count (int) – Total number of frames in the video.
max_number_of_frames (int) – The maximum number of frames to select.
selection_style (str) – One of first, spread, step, or all.
step_size (int) – The step size used by the step selection style.
- Returns:
A range of frames to be selected.
- Return type:
- Raises:
ValueError – If selection_style is not one of the supported ones.
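The four selection styles can be sketched in pure Python. This is an illustrative reimplementation of the documented behavior, not the actual bob.bio.video.select_frames code, which may differ in edge cases and return types.

```python
def select_frames_sketch(count, max_number_of_frames=None,
                         selection_style="all", step_size=None):
    """Sketch of the documented frame-selection styles."""
    n = count if max_number_of_frames is None else min(count, max_number_of_frames)
    if selection_style == "first":
        # The first n frames.
        return list(range(n))
    if selection_style == "spread":
        # n indices spread evenly over the whole video.
        if n <= 1:
            return [0] if count else []
        return [round(i * (count - 1) / (n - 1)) for i in range(n)]
    if selection_style == "step":
        # Every step_size-th frame, starting at step_size // 2.
        return list(range(step_size // 2, count, step_size))[:n]
    if selection_style == "all":
        return list(range(count))
    raise ValueError(f"Unknown selection_style: {selection_style}")
```

For example, selecting 3 frames with the spread style from a 10-frame video picks the first, middle, and last frames.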
- bob.bio.video.video_wrap_skpipeline(sk_pipeline)[source]¶
This function takes a sklearn.pipeline.Pipeline and wraps each estimator inside of it with bob.bio.video.transformer.VideoWrapper.
- bob.bio.video.annotator.normalize_annotations(annotations, validator, max_age=-1)[source]¶
Normalizes the annotations of one video sequence. If the annotation for the current frame is not valid, it is filled in from previous frames.
- Parameters:
annotations (OrderedDict) – A dict of dicts, where the keys of the outer dict are frame indices as strings (starting from 0) and the inner dicts contain the annotations for that frame. The dictionary needs to be ordered for this to work.
validator (callable) – Takes a dict (annotations) and returns True if the annotations are valid. This can be a check based on minimal face size, for example: see bob.bio.face.annotator.min_face_size_validator.
max_age (int, optional) – An integer indicating for how many frames a detected face remains valid if no detection occurs afterwards. A value of -1 means forever.
- Yields:
str – The index of the frame.
dict – The corrected annotations of the frame.
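The carry-forward behavior described above can be sketched as follows. This is a simplified, hypothetical reimplementation of the documented semantics, not the real normalize_annotations code.

```python
def normalize_annotations_sketch(annotations, validator, max_age=-1):
    """Carry the last valid annotation forward over invalid frames,
    for at most ``max_age`` frames (-1 means forever)."""
    last_valid = None
    age = 0
    for frame_id, annot in annotations.items():
        if validator(annot):
            # A fresh valid annotation resets the age counter.
            last_valid, age = annot, 0
        elif last_valid is not None and (max_age < 0 or age < max_age):
            # Fill in from the last valid frame while it is young enough.
            annot = last_valid
            age += 1
        yield frame_id, annot
```

For instance, with max_age=1 an invalid frame right after a detection inherits that detection, but a second consecutive invalid frame does not.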
- class bob.bio.video.annotator.Base[source]¶
Bases: Annotator
The base class for video annotators.
- static frame_ids_and_frames(frames)[source]¶
Takes the frames and yields frame_ids and frames.
- Parameters:
frames (bob.bio.video.VideoLikeContainer or bob.bio.video.VideoAsArray or numpy.ndarray) – The frames of the video file.
- Yields:
frame_id (str) – A string that represents the frame id.
frame (numpy.ndarray) – The frame of the video file as an array.
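A minimal sketch of this pairing, assuming frame ids are simply the stringified frame index (the actual id format used by Base.frame_ids_and_frames may differ, e.g. zero-padded):

```python
def frame_ids_and_frames_sketch(frames):
    """Yield (frame_id, frame) pairs for an iterable of frames."""
    for i, frame in enumerate(frames):
        # Assumption: the id is just the index as a string.
        yield str(i), frame
```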
- annotate(frames, **kwargs)[source]¶
Annotates videos.
- Parameters:
frames (bob.bio.video.VideoLikeContainer or bob.bio.video.VideoAsArray or numpy.ndarray) – The frames of the video file.
**kwargs – Extra arguments that annotators may need.
- Returns:
A dictionary whose keys are frame ids (as strings) and whose values are dictionaries with the annotations for that frame.
- Return type:
OrderedDict
Note
You can use the Base.frame_ids_and_frames function to normalize the input in your implementation.
- transform(samples)[source]¶
Takes a batch of data and annotates them.
Each kwargs value is a list of parameters, with each element of those lists corresponding to each element of samples (for example: with [s1, s2, ...] as samples, kwargs['annotations'] should contain [{<s1_annotations>}, {<s2_annotations>}, ...]).
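The per-sample alignment of kwargs can be sketched generically. The helper below is hypothetical (it is not the actual Base.transform implementation); it only demonstrates how parallel kwargs lists line up with the samples list.

```python
def transform_sketch(annotate, samples, **kwargs):
    """Call ``annotate`` once per sample, slicing each kwargs list
    so the i-th sample receives the i-th element of every list."""
    results = []
    for i, sample in enumerate(samples):
        per_sample = {key: values[i] for key, values in kwargs.items()}
        results.append(annotate(sample, **per_sample))
    return results
```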
- set_transform_request(*, samples: bool | None | str = '$UNCHANGED$') Base¶
Request metadata passed to the transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to transform.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- class bob.bio.video.annotator.FailSafeVideo(annotators, max_age=15, validator=None, **kwargs)[source]¶
Bases: Base
A fail-safe video annotator. It tries several annotators in order and moves on to the next one if the previous one fails. However, the difference between this annotator and bob.bio.base.annotator.FailSafe is that this one tries to use annotations from older frames (if still valid) before trying the next annotator.
Warning
You must be careful when using this annotator since different annotators can produce different results. For example, the bounding box of one annotator may be totally different from that of another annotator.
- Parameters:
annotators (list) – A list of annotators to try.
max_age (int) – The maximum number of subsequent frames for which an annotation remains valid. This value should be positive. If you want max_age to be infinite, use bob.bio.video.annotator.Wrapper instead.
validator (callable) – A function that takes the annotations of a frame and validates them.
Please see Base for more accepted parameters.
- annotate(frames)[source]¶
See Base.annotate.
- set_transform_request(*, samples: bool | None | str = '$UNCHANGED$') FailSafeVideo¶
Request metadata passed to the transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to transform.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
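The documented fail-safe order for a single frame can be sketched as below. The function name, signature, and return convention are hypothetical, chosen only to illustrate the idea: try each annotator; when one fails, reuse the previous frame's annotation while it is younger than max_age frames; only then fall through to the next annotator.

```python
def failsafe_annotate_frame(frame, annotators, previous, age, max_age, validator):
    """Sketch of the per-frame fail-safe logic.

    Returns (annotations, new_age): new_age is 0 for a fresh detection
    and age + 1 when the previous frame's annotation is reused."""
    for annotator in annotators:
        annotations = annotator(frame)
        if annotations and validator(annotations):
            return annotations, 0  # fresh valid annotation
        if previous is not None and age < max_age:
            return previous, age + 1  # reuse a recent annotation first
    return None, age  # every annotator failed and nothing to reuse
```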
- class bob.bio.video.annotator.Wrapper(annotator, normalize=False, validator=None, max_age=-1, **kwargs)[source]¶
Bases: Base
Annotates video files using the provided image annotator. See the documentation of Base too.
- Parameters:
annotator (bob.bio.base.annotator.Annotator or str) – The image annotator to be used. The annotator can also be the name of a bob.bio.annotator resource, which will be loaded.
max_age (int) – See normalize_annotations.
normalize (bool) – If True, annotations are normalized using normalize_annotations.
validator (object) – See normalize_annotations and bob.bio.face.annotator.min_face_size_validator for an example.
Please see Base for more accepted parameters.
Warning
You should set normalize to True only if you are annotating all frames of the video file.
- annotate(frames)[source]¶
See Base.annotate.
- set_transform_request(*, samples: bool | None | str = '$UNCHANGED$') Wrapper¶
Request metadata passed to the transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to transform.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- class bob.bio.video.transformer.VideoWrapper(estimator, **kwargs)[source]¶
Bases: TransformerMixin, BaseEstimator
Wrapper class to run image preprocessing algorithms on video data.
- Parameters:
estimator (str or sklearn.base.BaseEstimator instance) – The transformer to be used to preprocess the frames.
- set_transform_request(*, videos: bool | None | str = '$UNCHANGED$') VideoWrapper¶
Request metadata passed to the transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to transform.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
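The wrapping idea can be sketched with a minimal, hypothetical class: apply an image transformer to every frame of every video. The real bob.bio.video.transformer.VideoWrapper also carries over frame indices and supports VideoAsArray/VideoLikeContainer inputs; the class and the dummy estimator below are illustrative only.

```python
class VideoWrapperSketch:
    """Sketch: lift a per-image transformer to per-video data."""

    def __init__(self, estimator):
        self.estimator = estimator

    def transform(self, videos):
        # Each video is an iterable of frames; transform frame by frame.
        return [
            [self.estimator.transform([frame])[0] for frame in video]
            for video in videos
        ]


class Doubler:
    """Dummy image 'transformer' used for demonstration."""

    def transform(self, X):
        return [x * 2 for x in X]
```

With this sketch, VideoWrapperSketch(Doubler()).transform([[1, 2], [3]]) doubles every frame of every video while preserving the per-video grouping.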
- class bob.bio.video.database.VideoBioFile(client_id, path, file_id, original_directory=None, original_extension='.avi', annotation_directory=None, annotation_extension=None, annotation_type=None, selection_style=None, max_number_of_frames=None, step_size=None, **kwargs)¶
Bases: BioFile
- load()[source]¶
Loads the data at the specified location using the given extension. Override it if you need to load differently.
- Returns:
The loaded data (normally numpy.ndarray).
- Return type:
- class bob.bio.video.database.YoutubeDatabase(protocol, annotation_type='bounding-box', fixed_positions=None, original_directory='', extension='.jpg', annotation_extension='.labeled_faces.txt', frame_selector=None)¶
Bases: CSVDatabase
This package contains the access API and descriptions for the YouTube Faces database. It only contains the Bob accessor methods to use the DB directly from Python, with our certified protocols. The actual raw data for the YouTube Faces database should be downloaded from the original URL (though we were not able to contact the corresponding professor).
Warning
To use this dataset protocol, you need to have the original files of the YouTube dataset. Once you have downloaded it, please run the following command to set the path for Bob:
bob config set bob.bio.face.youtube.directory [YOUTUBE PATH]
In this interface we implement the 10 original protocols of the YouTube Faces database ('fold1', 'fold2', 'fold3', 'fold4', 'fold5', 'fold6', 'fold7', 'fold8', 'fold9', 'fold10').
The code below allows you to fetch the gallery and probes of the "fold0" protocol.
>>> from bob.bio.video.database import YoutubeDatabase
>>> youtube = YoutubeDatabase(protocol="fold0")
>>> # Fetching the gallery
>>> references = youtube.references()
>>> # Fetching the probes
>>> probes = youtube.probes()
- Parameters:
protocol (str) – One of the above-mentioned YouTube protocols.
annotation_type (str) – One of the supported annotation types.
original_directory (str) – The path to the original data directory.
extension (str) – Default file extension.
annotation_extension (str) – Default annotation file extension.
frame_selector – A function that performs frame selection.
- category = 'video'¶
- dataset_protocols_hash = '51c1fb2a'¶
- dataset_protocols_name = 'youtube.tar.gz'¶
- dataset_protocols_urls = ['https://www.idiap.ch/software/bob/databases/latest/video/youtube-51c1fb2a.tar.gz', 'http://www.idiap.ch/software/bob/databases/latest/video/youtube-51c1fb2a.tar.gz']¶
- name = 'youtube'¶