Python API for bob.ip.caffe_extractor

Classes

bob.ip.caffe_extractor.Extractor(…)

Feature extractor using Caffe

bob.ip.caffe_extractor.VGGFace(end_cnn)

Extract features using the VGG model http://www.robots.ox.ac.uk/~vgg/software/vgg_face/

bob.ip.caffe_extractor.LightCNN([end_cnn, …])

Extract features using the Deep Face Representation model (LightCNN) https://github.com/AlfredXiangWu/face_verification_experiment and the accompanying paper.

Detailed API

bob.ip.caffe_extractor.download_file(url, out_file)[source]

Downloads a file from a given URL.

Parameters
  • url (str) – The URL to download from.

  • out_file (str) – Where to save the file.
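
A minimal usage sketch; the URL and output path below are placeholders rather than real resources:

    from bob.ip.caffe_extractor import download_file

    # Hypothetical URL and destination, shown for illustration only
    download_file("http://example.com/model.caffemodel", "/tmp/model.caffemodel")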

bob.ip.caffe_extractor.get_config()[source]

Returns a string containing the configuration information.
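
For example, to print the configuration of this package and its dependencies:

    import bob.ip.caffe_extractor

    print(bob.ip.caffe_extractor.get_config())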

class bob.ip.caffe_extractor.Extractor(deploy_architecture, model, end_cnn)

Bases: object

Feature extractor using Caffe

__call__(image)[source]

Forwards the image through the loaded neural network.

Parameters

image (numpy.ndarray) – The input image.

Returns

The features.

Return type

numpy.ndarray

__init__(deploy_architecture, model, end_cnn)[source]

Loads the Caffe model.

Parameters
  • deploy_architecture (str) – The path of the prototxt architecture file used for deployment. The header must have the following format:

        input: "data"
        input_dim: 1
        input_dim: c
        input_dim: w
        input_dim: h

    where \(c\) is the number of channels, \(w\) is the width and \(h\) is the height.

  • model (str) – The path of the trained Caffe model.

  • end_cnn (str) – The name of the layer that you want to use as a feature.
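
A construction sketch; the file paths, layer name and input shape below are hypothetical and must match your own deployment files:

    from bob.ip.caffe_extractor import Extractor
    import numpy

    # Placeholder paths and layer name; substitute your own architecture and model
    extractor = Extractor("deploy.prototxt", "model.caffemodel", "fc7")

    # The input shape must match the input_dim header of the prototxt (c, w, h)
    image = numpy.random.rand(3, 224, 224)
    features = extractor(image)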

class bob.ip.caffe_extractor.LightCNN(end_cnn='eltwise_fc1', model_version='LightenedCNN_C')[source]

Bases: bob.ip.caffe_extractor.Extractor

Extract features using the Deep Face Representation model (LightCNN) https://github.com/AlfredXiangWu/face_verification_experiment and the paper:

@article{wulight,
  title={A Light CNN for Deep Face Representation with Noisy Labels},
  author={Wu, Xiang and He, Ran and Sun, Zhenan and Tan, Tieniu},
  journal={arXiv preprint arXiv:1511.02683},
  year={2015}
}

According to issue #82, the feature layer for the A model is eltwise6, while for the B and C models it is eltwise_fc1.

__init__(end_cnn='eltwise_fc1', model_version='LightenedCNN_C')[source]

LightCNN constructor

Parameters
  • end_cnn (str, optional) – The name of the layer that you want to use as a feature.

  • model_version (str, optional) – Which variant of the model to use; the default is LightenedCNN_C.

__call__(image)[source]

Forwards the image through the loaded neural network.

Parameters

image (numpy.ndarray) – The image to be forwarded through the network. It should be a 128x128 grayscale image with 40 pixels between the two eye centers and 48 pixels between the eye centers and the mouth center. Pixel values should be in the range [0, 1].

Returns

The extracted features.

Return type

numpy.ndarray
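
A minimal sketch, assuming a face image that is already aligned as described above; the file name is hypothetical and bob.io.base is used only for loading:

    from bob.ip.caffe_extractor import LightCNN
    import bob.io.base

    # Defaults: end_cnn='eltwise_fc1', model_version='LightenedCNN_C'
    extractor = LightCNN()

    # 'face.png' is a placeholder; the 128x128 gray image must follow the
    # alignment described above, with pixel values scaled to [0, 1]
    image = bob.io.base.load("face.png").astype("float64") / 255.0
    features = extractor(image)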

static get_modelpath()[source]

static get_modelfolder()[source]

static get_protofolder()[source]

class bob.ip.caffe_extractor.VGGFace(end_cnn)

Bases: bob.ip.caffe_extractor.Extractor

Extract features using the VGG model http://www.robots.ox.ac.uk/~vgg/software/vgg_face/

__call__(image)[source]

Forwards the image through the loaded neural network.

Parameters

image (numpy.ndarray) – The input image, in RGB format.

Returns

The extracted features.

__init__(end_cnn)[source]

VGG constructor

Parameters

end_cnn (str) – The name of the layer that you want to use as a feature.
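
A usage sketch; fc7 is one of the fully-connected layers of the published VGG-Face network, and the random input stands in for an aligned RGB face crop:

    from bob.ip.caffe_extractor import VGGFace
    import numpy

    extractor = VGGFace("fc7")

    # Placeholder input standing in for a 3 x 224 x 224 RGB face image
    image = numpy.random.rand(3, 224, 224)
    features = extractor(image)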

static get_vggpath()[source]