Python API for bob.ip.tensorflow_extractor

Classes

bob.ip.tensorflow_extractor.Extractor(…[, …]) Feature extractor using tensorflow
bob.ip.tensorflow_extractor.FaceNet([…]) Wrapper for the free FaceNet variant: https://github.com/davidsandberg/facenet
bob.ip.tensorflow_extractor.MTCNN([…]) MTCNN v1 wrapper.

Detailed API

bob.ip.tensorflow_extractor.scratch_network(inputs, end_point='fc1', reuse=False)[source]
bob.ip.tensorflow_extractor.get_config()[source]

Returns a string containing the configuration information.

class bob.ip.tensorflow_extractor.DrGanMSUExtractor(model_path=None, image_size=[96, 96, 3])

Bases: object

Wrapper for the free DR-GAN model by L. Tran at MSU.

To use this class as a bob.bio.base extractor:

from bob.ip.tensorflow_extractor import DrGanMSUExtractor
from bob.bio.base.extractor import Extractor

class DrGanMSUExtractorBioBase(DrGanMSUExtractor, Extractor):
    pass

extractor = DrGanMSUExtractorBioBase()

Parameters:

model_path: str
  Path to the model
image_size: list
  The input image size (W x H x C)
__call__(image) → feature[source]

Extract features

Parameters:

image : 3D numpy.ndarray (floats)
The image to extract the features from.

Returns:

feature : 2D numpy.ndarray (floats)
The extracted features
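A minimal usage sketch (the channel-last layout and the random input below are assumptions for illustration; real inputs must match image_size=[96, 96, 3]):

import numpy
from bob.ip.tensorflow_extractor import DrGanMSUExtractor

extractor = DrGanMSUExtractor()

# Hypothetical input: a random float image shaped to match image_size
image = numpy.random.rand(96, 96, 3)
feature = extractor(image)  # 2D numpy.ndarray of floats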
__init__(model_path=None, image_size=[96, 96, 3])[source]

Initialize self. See help(type(self)) for accurate signature.

static get_modelpath()[source]
static get_rcvariable()[source]
class bob.ip.tensorflow_extractor.Extractor(checkpoint_filename, input_tensor, graph, debug=False)

Bases: object

Feature extractor using tensorflow

__call__(data)[source]

Forward the data with the loaded neural network

Parameters:data (numpy.ndarray) – Input data
Returns:The features.
Return type:numpy.ndarray
__init__(checkpoint_filename, input_tensor, graph, debug=False)[source]

Loads the tensorflow model

Parameters:
  • checkpoint_filename (str) – Path of your checkpoint. If the .meta file is provided, the last checkpoint will be loaded.
  • input_tensor (tf.Tensor) – Tensor used as the data entry point. It can be a tf.placeholder, the result of tf.train.string_input_producer, etc.
  • graph (tf.Tensor) – The tf.Tensor containing the operations to be executed.
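A minimal construction sketch (TensorFlow 1.x API), using the package's scratch_network() to build the graph; the input shape and checkpoint path are assumptions for illustration:

import numpy
import tensorflow as tf
from bob.ip.tensorflow_extractor import Extractor, scratch_network

# Build a graph: a placeholder as the data entry point, plus the
# operations to execute (the 28x28 grayscale shape is an assumption)
inputs = tf.placeholder(tf.float32, shape=(1, 28, 28, 1))
graph = scratch_network(inputs)

# Load the checkpoint (hypothetical path) and forward some data
extractor = Extractor("/path/to/model.ckp", inputs, graph)
data = numpy.random.rand(1, 28, 28, 1).astype("float32")
features = extractor(data)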
class bob.ip.tensorflow_extractor.FaceNet(model_path=None, image_size=160, layer_name='embeddings:0', **kwargs)

Bases: object

Wrapper for the free FaceNet variant: https://github.com/davidsandberg/facenet

To use this class as a bob.bio.base extractor:

from bob.ip.tensorflow_extractor import FaceNet
from bob.bio.base.extractor import Extractor

class FaceNetExtractor(FaceNet, Extractor):
    pass

extractor = FaceNetExtractor()

And for a preprocessor you can use:

from bob.bio.face.preprocessor import FaceCrop
# This is the size of the image that this model expects
CROPPED_IMAGE_HEIGHT = 160
CROPPED_IMAGE_WIDTH = 160
# eye positions for frontal images
RIGHT_EYE_POS = (46, 53)
LEFT_EYE_POS = (46, 107)
# Crops the face using eye annotations
preprocessor = FaceCrop(
    cropped_image_size=(CROPPED_IMAGE_HEIGHT, CROPPED_IMAGE_WIDTH),
    cropped_positions={'leye': LEFT_EYE_POS, 'reye': RIGHT_EYE_POS},
    color_channel='rgb'
)
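The extractor can also be called directly; a sketch (the (3, H, W) Bob layout of the input is an assumption, and the embedding length depends on the underlying model):

import numpy
from bob.ip.tensorflow_extractor import FaceNet

facenet = FaceNet()

# Hypothetical input: a 160x160 RGB image in Bob's (3, H, W) layout
image = numpy.random.randint(0, 255, size=(3, 160, 160), dtype="uint8")
embedding = facenet(image)  # 1D numpy.ndarray embedding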
__call__(img)[source]

Call self as a function.

__init__(model_path=None, image_size=160, layer_name='embeddings:0', **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

static get_modelpath()[source]

Get default model path.

First, we try to find this path via the Global Configuration System. If it cannot be found, the path is set to the <project>/data directory.

static get_rcvariable()[source]

Variable name used in the Bob Global Configuration System https://www.idiap.ch/software/bob/docs/bob/bob.extension/stable/rc.html
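For example, the variable can be queried and then set through the Bob CLI; a sketch (the exact variable name is whatever get_rcvariable() returns):

from bob.ip.tensorflow_extractor import FaceNet

print(FaceNet.get_rcvariable())  # configuration variable name
print(FaceNet.get_modelpath())   # currently resolved model path

# The path can then be overridden on the command line, e.g.:
#   $ bob config set <variable-name> /path/to/facenet/model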

load_model()[source]
class bob.ip.tensorflow_extractor.MTCNN(min_size=40, factor=0.709, thresholds=(0.6, 0.7, 0.7), model_path='.../bob/ip/tensorflow_extractor/data/mtcnn/mtcnn.pb')

Bases: object

MTCNN v1 wrapper. See https://kpzhang93.github.io/MTCNN_face_detection_alignment/index.html for more details on MTCNN, and see Face detection using MTCNN for example code.

factor

The scale factor used to build the image pyramid; a trade-off between performance and speed.

Type:float
min_size

Minimum face size to be detected.

Type:int
thresholds

Detection thresholds for the three stages of the cascade; a trade-off between false positives and missed detections.

Type:list
__call__(img)[source]

Wrapper for the annotations method.

__init__(min_size=40, factor=0.709, thresholds=(0.6, 0.7, 0.7), model_path='.../bob/ip/tensorflow_extractor/data/mtcnn/mtcnn.pb')[source]

Initialize self. See help(type(self)) for accurate signature.

annotations(img)[source]

Detects all faces in the image

Parameters:img (numpy.ndarray) – An RGB image in Bob format.
Returns:A list of annotations. Annotations are dictionaries that contain the following keys: topleft, bottomright, reye, leye, nose, mouthright, mouthleft, and quality.
Return type:list
detect(img)[source]

Detects all faces in the image.

Parameters:img (numpy.ndarray) – An RGB image in Bob format.
Returns:A tuple of boxes, probabilities, and landmarks.
Return type:tuple
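A usage sketch covering both methods (the image path is hypothetical; bob.io.base.load returns color images in Bob's (3, H, W) layout):

import bob.io.base
import bob.io.image  # registers the image codecs used by bob.io.base.load
from bob.ip.tensorflow_extractor import MTCNN

mtcnn = MTCNN()
img = bob.io.base.load("/path/to/image.png")  # hypothetical path

# High-level API: one dictionary per detected face
for annot in mtcnn.annotations(img):
    print(annot["topleft"], annot["bottomright"], annot["quality"])

# Lower-level API: raw boxes, probabilities and landmarks
boxes, probs, landmarks = mtcnn.detect(img)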
class bob.ip.tensorflow_extractor.VGGFace(checkpoint_filename=None, debug=False)

Bases: bob.ip.tensorflow_extractor.Extractor

Extract features using the VGG model http://www.robots.ox.ac.uk/~vgg/software/vgg_face/

The model was converted with the script from https://github.com/tiagofrepereira2012

__call__(image)[source]

Forward the data with the loaded neural network

Parameters:image (numpy.ndarray) – Input data
Returns:The features.
Return type:numpy.ndarray
__init__(checkpoint_filename=None, debug=False)[source]

Loads the tensorflow model

Parameters:
  • checkpoint_filename (str) – Path of your checkpoint. If the .meta file is provided, the last checkpoint will be loaded.
  • input_tensor (tf.Tensor) – Tensor used as the data entry point. It can be a tf.placeholder, the result of tf.train.string_input_producer, etc.
  • graph (tf.Tensor) – The tf.Tensor containing the operations to be executed.
static get_vggpath()[source]
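A minimal usage sketch (assuming the default checkpoint resolves via get_vggpath(); the 224x224 input size and (3, H, W) layout are assumptions based on the standard VGG-Face input):

import numpy
from bob.ip.tensorflow_extractor import VGGFace

vgg = VGGFace()

# Hypothetical input: a 224x224 RGB image in Bob's (3, H, W) layout
image = numpy.random.rand(3, 224, 224) * 255
features = vgg(image)  # numpy.ndarray of features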
bob.ip.tensorflow_extractor.vgg_16(inputs, reuse=None, dropout_keep_prob=0.5, weight_decay=0.0005, mode='train', **kwargs)[source]

Oxford Net VGG 16-layer (version E) example from tf-slim.

Adapted from https://raw.githubusercontent.com/tensorflow/models/master/research/slim/nets/vgg.py

Parameters:

inputs: a 4-D tensor of size [batch_size, height, width, 3].

reuse: whether or not the network and its variables should be reused. To be
able to reuse, 'scope' must be given.

dropout_keep_prob: the probability that activations are kept in the dropout
layers during training.

weight_decay: the L2 regularization coefficient applied to the model weights.

mode: one of the tf.estimator.ModeKeys values (e.g. 'train').
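A construction sketch (TensorFlow 1.x), assuming the function mirrors the tf-slim vgg_16 it was adapted from and returns the final tensor together with a dict of end points:

import tensorflow as tf
from bob.ip.tensorflow_extractor import vgg_16

# A batch of 224x224 RGB images, the canonical VGG input size
inputs = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))
net, end_points = vgg_16(inputs, mode=tf.estimator.ModeKeys.PREDICT)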