Python API for bob.ip.tensorflow_extractor

Classes

bob.ip.tensorflow_extractor.Extractor(…[, …]) Feature extractor using TensorFlow
bob.ip.tensorflow_extractor.FaceNet([…]) Wrapper for the free FaceNet variant

Detailed API

bob.ip.tensorflow_extractor.scratch_network(inputs, end_point='fc1', reuse=False)[source]
bob.ip.tensorflow_extractor.get_config()[source]

Returns a string containing the configuration information.
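A minimal usage example:

import bob.ip.tensorflow_extractor

# Print the configuration information reported by the package
print(bob.ip.tensorflow_extractor.get_config())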

class bob.ip.tensorflow_extractor.DrGanMSUExtractor(model_path=None, image_size=[96, 96, 3])

Bases: object

Wrapper for the free DR-GAN model by L. Tran at MSU.

To use this class as a bob.bio.base extractor:

from bob.bio.base.extractor import Extractor
from bob.ip.tensorflow_extractor import DrGanMSUExtractor

class DrGanMSUExtractorBioBase(DrGanMSUExtractor, Extractor):
    pass

extractor = DrGanMSUExtractorBioBase()

Parameters:

model_path: str
Path to the model
image_size: list
The input image size as [W, H, C] (default: [96, 96, 3])
__call__(image) → feature[source]

Extract features

Parameters:

image : 3D numpy.ndarray (floats)
The image to extract the features from.

Returns:

feature : 2D numpy.ndarray (floats)
The extracted features
static get_modelpath()[source]
static get_rcvariable()[source]
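A short usage sketch, assuming the default model can be resolved via get_modelpath() and that __call__ accepts images in the [W, H, C] layout given by the image_size default; the random array below only stands in for a real face image:

import numpy
from bob.ip.tensorflow_extractor import DrGanMSUExtractor

# Stand-in for a 96x96 RGB face crop (layout follows the image_size default)
image = numpy.random.rand(96, 96, 3)

extractor = DrGanMSUExtractor()   # uses get_modelpath() to locate the model
feature = extractor(image)        # 2D numpy.ndarray of floats
print(feature.shape)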
class bob.ip.tensorflow_extractor.Extractor(checkpoint_filename, input_tensor, graph, debug=False)

Bases: object

Feature extractor using TensorFlow

__call__(data)[source]

Forwards the data through the loaded neural network

Parameters:data (numpy.ndarray) – Input data
Returns:The extracted features.
Return type:numpy.ndarray
__init__(checkpoint_filename, input_tensor, graph, debug=False)[source]

Loads the TensorFlow model

Parameters:
  • checkpoint_filename (str) – Path of your checkpoint. If the .meta file is provided, the last checkpoint will be loaded.
  • input_tensor (tf.Tensor) – Tensor used as the data entry point. It can be a tf.placeholder, the result of tf.train.string_input_producer, etc.
  • graph (tf.Tensor) – The tensor containing the operations to be executed.
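A construction sketch using TF1-style graph mode (tf.compat.v1 under TensorFlow 2) and the scratch_network() helper documented above; the checkpoint path and the 28x28x1 input shape are illustrative placeholders, not files or sizes shipped with this class:

import numpy
import tensorflow as tf
from bob.ip.tensorflow_extractor import Extractor, scratch_network

# Data entry point (a tf.placeholder, as mentioned in the parameter description)
inputs = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name="data")
# Tensor holding the operations to execute on the input
graph = scratch_network(inputs)

# Illustrative checkpoint path; point it at your own trained model
extractor = Extractor("/path/to/model.ckp", inputs, graph)

data = numpy.random.rand(2, 28, 28, 1).astype("float32")
features = extractor(data)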
class bob.ip.tensorflow_extractor.FaceNet(model_path=None, image_size=160, **kwargs)

Bases: object

Wrapper for the free FaceNet variant: https://github.com/davidsandberg/facenet

To use this class as a bob.bio.base extractor:

from bob.bio.base.extractor import Extractor
from bob.ip.tensorflow_extractor import FaceNet

class FaceNetExtractor(FaceNet, Extractor):
    pass

extractor = FaceNetExtractor()

And for a preprocessor you can use:

from bob.bio.face.preprocessor import FaceCrop
# This is the size of the image that this model expects
CROPPED_IMAGE_HEIGHT = 160
CROPPED_IMAGE_WIDTH = 160
# eye positions for frontal images
RIGHT_EYE_POS = (46, 53)
LEFT_EYE_POS = (46, 107)
# Crops the face using eye annotations
preprocessor = FaceCrop(
    cropped_image_size=(CROPPED_IMAGE_HEIGHT, CROPPED_IMAGE_WIDTH),
    cropped_positions={'leye': LEFT_EYE_POS, 'reye': RIGHT_EYE_POS},
    color_channel='rgb'
)
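A usage sketch tying the two together; the random array stands in for a cropped face from the preprocessor above, and the channel-first RGB layout (3 x 160 x 160), as produced by FaceCrop, is an assumption here rather than something stated by the API:

import numpy
from bob.ip.tensorflow_extractor import FaceNet

# Stand-in for a preprocessed face crop in Bob's channel-first RGB layout
face = numpy.random.rand(3, 160, 160) * 255

extractor = FaceNet()          # model path resolved via get_modelpath()
embedding = extractor(face)    # embedding vector produced by the FaceNet model
print(embedding.shape)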
static get_modelpath()[source]

Get default model path.

First, we try to find this path via the Global Configuration System. If it cannot be found there, the path defaults to the <project>/data directory.

static get_rcvariable()[source]

Variable name used in the Bob Global Configuration System https://www.idiap.ch/software/bob/docs/bob/bob.extension/stable/rc.html#global-configuration-system
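A sketch of how these two helpers relate to the Global Configuration System; bob.extension.rc and the bob config command come from bob.extension, and the actual key name is whatever get_rcvariable() returns rather than something hard-coded here:

from bob.extension import rc
from bob.ip.tensorflow_extractor import FaceNet

key = FaceNet.get_rcvariable()           # rc entry name used for this model
print("rc key:", key)
print("configured path:", rc.get(key))   # None when unset; get_modelpath()
                                         # then falls back to <project>/data

# To point the wrapper at your own copy of the model (illustrative path):
#   $ bob config set <rc-key> /path/to/facenet/model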

load_model()[source]