Python API for bob.ip.tensorflow_extractor

Classes

bob.ip.tensorflow_extractor.Extractor(…[, …])

Feature extractor using TensorFlow

bob.ip.tensorflow_extractor.FaceNet([…])

Wrapper for the free FaceNet variant: https://github.com/davidsandberg/facenet

bob.ip.tensorflow_extractor.MTCNN([…])

MTCNN v1 wrapper.

Detailed API

bob.ip.tensorflow_extractor.get_config()[source]

Returns a string containing the configuration information.

class bob.ip.tensorflow_extractor.FaceNet(model_path=None, image_size=160, layer_name='embeddings:0', **kwargs)

Bases: object

Wrapper for the free FaceNet variant: https://github.com/davidsandberg/facenet

To use this class as a bob.bio.base extractor:

from bob.bio.base.extractor import Extractor
class FaceNetExtractor(FaceNet, Extractor):
    pass
extractor = FaceNetExtractor()

And for a preprocessor you can use:

from bob.bio.face.preprocessor import FaceCrop
# This is the size of the image that this model expects
CROPPED_IMAGE_HEIGHT = 160
CROPPED_IMAGE_WIDTH = 160
# eye positions for frontal images
RIGHT_EYE_POS = (46, 53)
LEFT_EYE_POS = (46, 107)
# Crops the face using eye annotations
preprocessor = FaceCrop(
    cropped_image_size=(CROPPED_IMAGE_HEIGHT, CROPPED_IMAGE_WIDTH),
    cropped_positions={'leye': LEFT_EYE_POS, 'reye': RIGHT_EYE_POS},
    color_channel='rgb'
)
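The extractor can also be called directly on a pre-cropped image. A minimal sketch, where the random array merely stands in for a real 160x160 RGB face crop in Bob format:

import numpy as np
from bob.ip.tensorflow_extractor import FaceNet

# Placeholder for a real 160x160 RGB face crop in Bob format (channels, height, width)
image = np.random.randint(0, 255, size=(3, 160, 160), dtype=np.uint8)

extractor = FaceNet()          # uses the default model path (see get_modelpath below)
embedding = extractor(image)   # forward pass; returns the embedding vector
print(embedding.shape)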
__call__(img)[source]

Call self as a function.

__init__(model_path=None, image_size=160, layer_name='embeddings:0', **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

static get_modelpath()[source]

Get default model path.

First we try to search for this path via the Global Configuration System. If it cannot be found, the path is set to the directory <project>/data.
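For illustration, the default path can be inspected and overridden at construction time; the explicit path below is a hypothetical example:

from bob.ip.tensorflow_extractor import FaceNet

# Default model location, resolved via the Global Configuration System or <project>/data
print(FaceNet.get_modelpath())

# Use a model stored somewhere else (hypothetical path, adjust to your setup)
extractor = FaceNet(model_path='/path/to/facenet/model')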

load_model()[source]

class bob.ip.tensorflow_extractor.MTCNN(min_size=40, factor=0.709, thresholds=(0.6, 0.7, 0.7), model_path='.../bob/ip/tensorflow_extractor/data/mtcnn/mtcnn.pb')

Bases: object

MTCNN v1 wrapper. See https://kpzhang93.github.io/MTCNN_face_detection_alignment/index.html for more details on MTCNN and see Face detection using MTCNN for an example code.

factor

Factor is a trade-off between performance and speed.

Type

float

min_size

Minimum face size to be detected.

Type

int

thresholds

The thresholds are a trade-off between false positives and missed detections.

Type

list
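Putting these attributes together, a construction sketch with non-default values (the numbers are illustrative only, not recommendations):

from bob.ip.tensorflow_extractor import MTCNN

# A smaller min_size and a factor closer to 1 scan more pyramid scales (slower,
# finds smaller faces); higher thresholds reject more candidates in each stage.
detector = MTCNN(min_size=20, factor=0.8, thresholds=(0.7, 0.8, 0.8))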

__call__(img)[source]

Wrapper for the annotations method.

__init__(min_size=40, factor=0.709, thresholds=(0.6, 0.7, 0.7), model_path='.../bob/ip/tensorflow_extractor/data/mtcnn/mtcnn.pb')[source]

Initialize self. See help(type(self)) for accurate signature.

annotations(img)[source]

Detects all faces in the image.

Parameters

img (numpy.ndarray) – An RGB image in Bob format.

Returns

A list of annotations. Annotations are dictionaries that contain the following keys: topleft, bottomright, reye, leye, nose, mouthright, mouthleft, and quality.

Return type

list
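A minimal sketch of the high-level interface; the random array stands in for a real Bob-format RGB image:

import numpy as np
from bob.ip.tensorflow_extractor import MTCNN

# Placeholder image; in practice load a real RGB image in Bob format (3, height, width)
img = np.random.randint(0, 255, size=(3, 480, 640), dtype=np.uint8)

detector = MTCNN()
faces = detector.annotations(img)   # equivalent to detector(img)
for face in faces:
    print(face['topleft'], face['bottomright'], face['quality'])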

detect(img)[source]

Detects all faces in the image.

Parameters

img (numpy.ndarray) – An RGB image in Bob format.

Returns

A tuple of boxes, probabilities, and landmarks.

Return type

tuple
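Continuing the sketch above (same detector and img), the lower-level interface returns the raw detections as parallel sequences:

boxes, probabilities, landmarks = detector.detect(img)
for box, prob in zip(boxes, probabilities):
    print('face with confidence %.2f in box %s' % (prob, box))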

class bob.ip.tensorflow_extractor.Extractor(checkpoint_filename, input_tensor, graph, debug=False)

Bases: object

Feature extractor using TensorFlow

__call__(data)[source]

Forwards the data through the loaded neural network.

Parameters

data (numpy.ndarray) – The input data.

Returns

The features.

Return type

numpy.ndarray

__init__(checkpoint_filename, input_tensor, graph, debug=False)[source]

Loads the TensorFlow model.

Parameters
  • checkpoint_filename (str) – Path of your checkpoint. If the .meta file is provided, the last checkpoint will be loaded.

  • input_tensor (tf.Tensor) – The tensor used as the data entry point. It can be a tf.placeholder, the result of tf.train.string_input_producer, etc.

  • graph (tf.Tensor) – The tensor containing the operations to be executed.
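A hedged construction sketch in TensorFlow 1.x graph mode, which is what this class assumes; the checkpoint path and the toy network are placeholders, not part of this package:

import numpy as np
import tensorflow as tf
from bob.ip.tensorflow_extractor import Extractor

# Toy network standing in for the architecture the checkpoint was trained with
def network(inputs):
    flat = tf.layers.flatten(inputs)
    return tf.layers.dense(flat, 128, name='embeddings')

input_tensor = tf.placeholder(tf.float32, shape=(None, 160, 160, 3))
graph = network(input_tensor)

# Hypothetical checkpoint path; passing the .meta file would load the last checkpoint instead
extractor = Extractor('/path/to/model.ckpt', input_tensor, graph)

data = np.zeros((1, 160, 160, 3), dtype='float32')
features = extractor(data)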