bob.ip.common.data.utils

Common utilities

Functions

invert_mode1_image(img)

Inverts a binary PIL image (mode == "1")

overlayed_bbox_image(img, box[, box_color, ...])

Creates an image showing existing bounding boxes

overlayed_image(img, label[, mask, ...])

Creates an image showing existing labels and masks

subtract_mode1_images(img1, img2)

Returns a new image that represents img1 - img2

Classes

SSLDataset(labelled, unlabelled)

PyTorch dataset wrapper around labelled and unlabelled sample lists

SampleListDataset(samples[, transforms])

PyTorch dataset wrapper around Sample lists

SampleListDetectionDataset(samples[, transforms])

PyTorch dataset wrapper around Sample lists

bob.ip.common.data.utils.invert_mode1_image(img)[source]

Inverts a binary PIL image (mode == "1")

bob.ip.common.data.utils.subtract_mode1_images(img1, img2)[source]

Returns a new image that represents img1 - img2
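
For illustration, a minimal usage sketch of the two mode-“1” helpers above; the file names are hypothetical placeholders:

  from PIL import Image
  from bob.ip.common.data.utils import invert_mode1_image, subtract_mode1_images

  # hypothetical binary (mode "1") images loaded from disk
  label = Image.open("label.png").convert("1")
  mask = Image.open("mask.png").convert("1")

  background = invert_mode1_image(label)              # white wherever there is no label
  outside = subtract_mode1_images(background, mask)   # background pixels not covered by the mask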

bob.ip.common.data.utils.overlayed_image(img, label, mask=None, label_color=(0, 255, 0), mask_color=(0, 0, 255), alpha=0.4)[source]

Creates an image showing existing labels and masks

This function creates a new representation of the input image img overlaying a green mask for labelled objects, and a red mask for parts of the image that should be ignored (negative mask). By looking at this representation, it shall be possible to verify if the dataset/loader is yielding images correctly.

Parameters
  • img (PIL.Image.Image) – An RGB PIL image that represents the original image for analysis

  • label (PIL.Image.Image) – A PIL image in any mode that represents the labelled elements in the image. In case of images in mode “L” or “1”, white pixels represent the labelled object. Darker pixels represent background.

  • mask (PIL.Image.Image, Optional) – A PIL image in mode “1” that represents the mask for the image. White pixels indicate where content should be used; black pixels indicate content to be ignored.

  • label_color (tuple, Optional) – A tuple with three integer entries indicating the RGB color to be used for labels. Only used if label.mode is “1” or “L”.

  • mask_color (tuple, Optional) – A tuple with three integer entries indicating the RGB color to be used for the mask-negative (black parts in the original mask).

  • alpha (float, Optional) – A float that indicates how much blending should be performed between the label, mask and the original image.

Returns

image – A new image overlaying the original image, the object labels (in green), and the parts to be masked out (i.e. a representation of the negative of the mask).

Return type

PIL.Image.Image
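
A short usage sketch for overlayed_image follows; the file names are hypothetical placeholders and the default colors are kept:

  from PIL import Image
  from bob.ip.common.data.utils import overlayed_image

  # hypothetical inputs: an RGB image, a binary ground truth and a binary mask
  img = Image.open("image.png").convert("RGB")
  label = Image.open("label.png").convert("1")
  mask = Image.open("mask.png").convert("1")

  overlay = overlayed_image(img, label, mask=mask, alpha=0.4)
  overlay.save("overlay-check.png")  # inspect visually to verify the loader output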

bob.ip.common.data.utils.overlayed_bbox_image(img, box, box_color=(0, 255, 0), width=1)[source]

Creates an image showing existing bounding boxes

This function creates a new representation of the input image img overlaying a green bounding box for labelled objects. By looking at this representation, it shall be possible to verify if the dataset/loader is yielding images correctly.

Parameters
  • img (PIL.Image.Image) – An RGB PIL image that represents the original image for analysis

  • box (list) – A list of bounding box coordinates.

  • box_color (tuple, Optional) – A tuple with three integer entries indicating the RGB color to be used for the bounding boxes.

  • width (int, Optional) – An integer indicating the width of the rectangle line, in pixels.

Returns

image – A new image overlaying the original image and the object bounding boxes (in green).

Return type

PIL.Image.Image
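
A short usage sketch for overlayed_bbox_image; the image path and the [xmin, ymin, xmax, ymax] box layout are assumptions for illustration only:

  from PIL import Image
  from bob.ip.common.data.utils import overlayed_bbox_image

  img = Image.open("image.png").convert("RGB")
  # assumed box layout: a list of [xmin, ymin, xmax, ymax] coordinates
  boxes = [[10, 20, 120, 180], [200, 40, 310, 200]]

  overlay = overlayed_bbox_image(img, boxes, box_color=(0, 255, 0), width=2)
  overlay.save("bbox-check.png")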

class bob.ip.common.data.utils.SampleListDataset(samples, transforms=[])[source]

Bases: Dataset

PyTorch dataset wrapper around Sample lists

A transform object can be passed that will be applied to the image, ground truth and mask (if present).

It supports indexing such that dataset[i] can be used to get the i-th sample.

Parameters
  • samples (list) – A list of Sample objects to be wrapped by the dataset

  • transforms (list, Optional) – A list of transforms to apply to the image, ground truth and mask (if present)

property transforms

copy(transforms=None)[source]

Returns a deep copy of itself, optionally resetting transforms

Parameters

transforms (list, Optional) – An optional list of transforms to set in the copy. If not specified, use self.transforms.

keys()[source]

Generator producing all keys for all samples

all_keys_match(other)[source]

Compares all keys to other, returns True if all match
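
For illustration, a minimal sketch of how SampleListDataset might be used; samples and my_transforms are hypothetical objects produced elsewhere in your setup, and the exact layout of each returned sample depends on the underlying Sample objects:

  from bob.ip.common.data.utils import SampleListDataset

  # "samples" is assumed to be a list of Sample objects; "my_transforms" a list of transforms
  dataset = SampleListDataset(samples, transforms=[])

  sample = dataset[0]                                  # i-th (transformed) sample
  augmented = dataset.copy(transforms=my_transforms)   # deep copy with different transforms
  assert dataset.all_keys_match(augmented)             # copying preserves sample keys
  for key in dataset.keys():                           # iterate over all sample keys
      print(key)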

class bob.ip.common.data.utils.SampleListDetectionDataset(samples, transforms=[])[source]

Bases: Dataset

PyTorch dataset wrapper around Sample lists

A transform object can be passed that will be applied to the image, ground truth and mask (if present).

It supports indexing such that dataset[i] can be used to get the i-th sample.

Parameters
  • samples (list) – A list of Sample objects to be wrapped by the dataset

  • transforms (list, Optional) – A list of transforms to apply to the image, ground truth and mask (if present)

property transforms

copy(transforms=None)[source]

Returns a deep copy of itself, optionally resetting transforms

Parameters

transforms (list, Optional) – An optional list of transforms to set in the copy. If not specified, use self.transforms.

keys()[source]

Generator producing all keys for all samples

all_keys_match(other)[source]

Compares all keys to other, returns True if all match

class bob.ip.common.data.utils.SSLDataset(labelled, unlabelled)[source]

Bases: Dataset

PyTorch dataset wrapper around labelled and unlabelled sample lists

Yields elements of the form:

[key, image, ground-truth, [mask,] unlabelled-key, unlabelled-image]

The size of the dataset is the same as the labelled dataset.

Indexing works by selecting the corresponding element from the labelled dataset and randomly picking another one from the unlabelled dataset.

Parameters
  • labelled (torch.utils.data.Dataset) – Labelled dataset (must have “mask” and “label” entries for every sample)

  • unlabelled (torch.utils.data.Dataset) – Unlabelled dataset (may have “mask” and “label” entries for every sample, but they are ignored)

keys()[source]

Generator producing all keys for all samples

all_keys_match(other)[source]

Compares all keys to other, returns True if all match
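
For illustration, a minimal sketch of wrapping SSLDataset in a PyTorch DataLoader; labelled and unlabelled are hypothetical dataset instances, and the per-item layout follows the description above:

  from torch.utils.data import DataLoader
  from bob.ip.common.data.utils import SSLDataset

  # "labelled" and "unlabelled" are assumed to be previously built sample-list datasets
  ssl = SSLDataset(labelled, unlabelled)

  # each item pairs the i-th labelled sample with a randomly drawn unlabelled one:
  # [key, image, ground-truth, [mask,] unlabelled-key, unlabelled-image]
  loader = DataLoader(ssl, batch_size=4, shuffle=True)
  for batch in loader:
      pass  # feed the batch to a semi-supervised training step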