# Python API

This section lists all the functionality available in this library for running vein recognition experiments.

## Database Interfaces

### Common Utilities

Database definitions for Vein Recognition

class bob.bio.vein.database.AnnotatedArray[source]

Bases: numpy.ndarray

Defines a numpy array subclass that can carry its own metadata
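Such a subclass follows numpy's standard subclassing recipe; a minimal sketch of the pattern (the class name `MetadataArray` and the example annotation are illustrative, but the real class carries its RoI annotations under `.metadata['roi']`, as used by `FixedCrop` below):

```python
import numpy as np

class MetadataArray(np.ndarray):
    """Illustrative ndarray subclass carrying a metadata dictionary."""

    def __new__(cls, input_array, metadata=None):
        obj = np.asarray(input_array).view(cls)
        obj.metadata = metadata if metadata is not None else {}
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        # propagate metadata through views, slices and ufunc results
        self.metadata = getattr(obj, "metadata", {})

image = MetadataArray(np.zeros((4, 6), dtype="uint8"),
                      metadata={"roi": [(0, 0), (0, 5), (3, 5), (3, 0)]})
view = image[1:3, :]   # slicing preserves the metadata
```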

### Vera Fingervein Database

class bob.bio.vein.database.verafinger.File(f)[source]

Implements extra properties of vein files for the Vera Fingervein database

Parameters: f (object) – Low-level file (or sample) object that is kept inside
load(*args, **kwargs)[source]

class bob.bio.vein.database.verafinger.Database(**kwargs)[source]

Implements verification API for querying Vera Fingervein database.

groups()[source]
client_id_from_model_id(model_id, group='dev')[source]

Required as model_id != client_id on this database

model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
annotations(file)[source]

### UTFVP Database

class bob.bio.vein.database.utfvp.File(f)[source]

Implements extra properties of vein files for the UTFVP database

Parameters: f (object) – Low-level file (or sample) object that is kept inside
load(*args, **kwargs)[source]

class bob.bio.vein.database.utfvp.Database(**kwargs)[source]

Implements verification API for querying UTFVP database.

groups()[source]
client_id_from_model_id(model_id, group='dev')[source]

Required as model_id != client_id on this database

model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
annotations(file)[source]

### 3D Fingervein Database

class bob.bio.vein.database.fv3d.File(f)[source]

Implements extra properties of vein files for the 3D Fingervein database

Parameters: f (object) – Low-level file (or sample) object that is kept inside
load(*args, **kwargs)[source]

class bob.bio.vein.database.fv3d.Database(**kwargs)[source]

Implements verification API for querying the 3D Fingervein database.

groups()[source]
client_id_from_model_id(model_id, group='dev')[source]

Required as model_id != client_id on this database

model_ids_with_protocol(groups=None, protocol=None, **kwargs)[source]
objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]
annotations(file)[source]

## Pre-processors

class bob.bio.vein.preprocessor.AnnotatedRoIMask

Devises the mask from the annotated RoI

class bob.bio.vein.preprocessor.Cropper

Bases: object

This is the base class for all croppers

It defines the minimum requirements for all derived cropper classes.

class bob.bio.vein.preprocessor.Filter

Bases: object

Objects of this class filter the input image

class bob.bio.vein.preprocessor.FixedCrop(top=0, bottom=0, left=0, right=0)

Implements cropping using a fixed suppression of border pixels

The defaults suppress no lines from the image, returning an image like the original. If a bob.bio.vein.database.AnnotatedArray is passed, then we also check for its .metadata['roi'] component and correct it so that the annotated RoI points remain consistent on the cropped image.

Note

Before choosing values, note that you are responsible for knowing the orientation of the images fed into this cropper.

Parameters:

• top (int, optional) – Number of lines to suppress from the top of the image. The top of the image corresponds to y = 0.
• bottom (int, optional) – Number of lines to suppress from the bottom of the image. The bottom of the image corresponds to y = height.
• left (int, optional) – Number of lines to suppress from the left of the image. The left of the image corresponds to x = 0.
• right (int, optional) – Number of lines to suppress from the right of the image. The right of the image corresponds to x = width.

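
The behaviour amounts to plain numpy slicing; a hedged sketch (the helper name is hypothetical, and the RoI correction step is omitted):

```python
import numpy as np

def fixed_crop(image, top=0, bottom=0, left=0, right=0):
    # suppress the given number of border lines; with the defaults,
    # the image is returned unchanged
    h, w = image.shape
    return image[top:h - bottom, left:w - right]

image = np.arange(100, dtype="uint8").reshape(10, 10)
cropped = fixed_crop(image, top=2, bottom=1, left=3, right=0)
```
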
class bob.bio.vein.preprocessor.FixedMask(top=0, bottom=0, left=0, right=0)

Implements masking using a fixed suppression of border pixels

The defaults mask no lines from the image, returning a mask of the same size as the original image where all values are True.

Note

Before choosing values, note that you are responsible for knowing the orientation of the images fed into this masker.

Parameters:

• top (int, optional) – Number of lines to suppress from the top of the image. The top of the image corresponds to y = 0.
• bottom (int, optional) – Number of lines to suppress from the bottom of the image. The bottom of the image corresponds to y = height.
• left (int, optional) – Number of lines to suppress from the left of the image. The left of the image corresponds to x = 0.
• right (int, optional) – Number of lines to suppress from the right of the image. The right of the image corresponds to x = width.

class bob.bio.vein.preprocessor.HistogramEqualization

Applies histogram equalization on the input image inside the mask.

In this implementation, only the pixels that lie inside the mask are used to calculate the histogram equalization parameters. For this reason, we do not use Bob's implementation of histogram equalization, but one based exclusively on scikit-image.
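
The scikit-image call in question (`skimage.exposure.equalize_hist`) accepts a `mask` argument; the idea can also be sketched with plain numpy, where only in-mask pixels define the cumulative distribution (the helper below is illustrative, not the class's actual code):

```python
import numpy as np

def masked_equalize(image, mask):
    """Histogram equalization driven only by in-mask pixels (sketch)."""
    values = image[mask]
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    cdf = hist.cumsum() / values.size   # in-mask CDF, values in [0, 1]
    return cdf[image]                   # map every pixel through the CDF

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32)).astype("uint8")
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                 # pretend this is the finger region
equalized = masked_equalize(image, mask)
```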

class bob.bio.vein.preprocessor.HuangNormalization(padding_width=5, padding_constant=51)

Simple finger normalization from Huang et al.

Based on B. Huang, Y. Dai, R. Li, D. Tang and W. Li, Finger-vein authentication based on wide line detector and pattern normalization, Proceedings on 20th International Conference on Pattern Recognition (ICPR), 2010.

This implementation aligns the finger to the centre of the image using an affine transformation. The elliptic projection described in the referenced paper is not included.

To define the affine transformation to be performed, the algorithm first calculates the centre between the finger edges for each column and then finds the best linear fit for a straight line passing through those points.
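
That centre-line fit can be sketched on a synthetic tilted mask (illustrative code, not the class's implementation; the subsequent affine warp is omitted):

```python
import numpy as np

# synthetic binary finger mask, tilted by one row per column
mask = np.zeros((40, 8), dtype=bool)
for col in range(8):
    mask[10 + col:30 + col, col] = True

cols = np.arange(mask.shape[1])
upper = mask.argmax(axis=0)                            # first True per column
lower = mask.shape[0] - 1 - mask[::-1].argmax(axis=0)  # last True per column
centres = (upper + lower) / 2.0                        # edge midpoints

# best straight-line fit through the per-column centres
slope, intercept = np.polyfit(cols, centres, 1)
```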

class bob.bio.vein.preprocessor.KonoMask(sigma=5, padder=<bob.bio.vein.preprocessor.Padder object>)

Estimates the finger region given an input NIR image using Kono et al.

This method is based on the work of M. Kono, H. Ueki and S. Umemura. Near-infrared finger vein patterns for personal identification, Applied Optics, Vol. 41, Issue 35, pp. 7429-7436 (2002).

Parameters:

• sigma (float, optional) – The standard deviation of the gaussian blur filter to apply for low-passing the input image (background extraction). Defaults to 5.
• padder (Padder, optional) – If passed, will pad the image before evaluating the mask. The returned value will have the padding removed and is, therefore, of the exact size of the input image.

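
One way to read this approach is background extraction by gaussian low-pass filtering; a hedged sketch with scipy (the thresholding rule here is illustrative, not necessarily the paper's exact criterion):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# synthetic NIR-like image: a bright horizontal "finger" band on a
# darker background (no noise, for a deterministic illustration)
image = np.full((60, 80), 50.0)
image[20:40, :] += 100.0

# low-pass the image to estimate the background (sigma as in the
# constructor), then keep pixels brighter than that estimate
background = gaussian_filter(image, sigma=5)
mask = image > background
```
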
class bob.bio.vein.preprocessor.LeeMask(filter_height=4, filter_width=40, padder=<bob.bio.vein.preprocessor.Padder object>)

Estimates the finger region given an input NIR image using Lee et al.

This method is based on the work of Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction, E.C. Lee, H.C. Lee and K.R. Park, International Journal of Imaging Systems and Technology, Volume 19, Issue 3, September 2009, Pages 175–178, doi: 10.1002/ima.20193

This code is based on the Matlab implementation by Bram Ton, available at:

https://nl.mathworks.com/matlabcentral/fileexchange/35752-finger-region-localisation/content/lee_region.m

In this method, the mask of the finger is calculated independently for each column of the input image. First, the image is convolved with a [1, -1] filter of size (self.filter_height, self.filter_width). The upper and lower halves of the resulting filtered image are then separated; the location of the maximum is found in the upper half and the location of the minimum in the lower half. The mask for each column then spans from the row of the maximum in the upper half down to the row of the minimum in the lower half.

Parameters:

• filter_height (int, optional) – Height of the contour mask in pixels; must be an even number.
• filter_width (int, optional) – Width of the contour mask in pixels.

class bob.bio.vein.preprocessor.Masker

Bases: object

This is the base class for all maskers

It defines the minimum requirements for all derived masker classes.

class bob.bio.vein.preprocessor.NoCrop

Convenience: same as FixedCrop()

class bob.bio.vein.preprocessor.NoFilter

Applies no filtering on the input image, returning it without changes

class bob.bio.vein.preprocessor.NoMask

class bob.bio.vein.preprocessor.NoNormalization

Trivial implementation with no normalization

class bob.bio.vein.preprocessor.Normalizer

Bases: object

Objects of this class normalize the input image orientation and scale

class bob.bio.vein.preprocessor.Padder(padding_width=5, padding_constant=51)

Bases: object

A class that pads the input image returning a new object

Parameters:

• padding_width (int, optional) – How much padding (in pixels) to add around the borders of the input image. We normally keep this value at its default (5 pixels). This parameter is always used before normalizing the finger orientation.
• padding_constant (int, optional) – The value of the pixels added as padding. This number should be between 0 and 255. (From Pedro Tome: for UTFVP (high-quality samples), use 0; for the VERA Fingervein database (low-quality samples), use 51, which corresponds to 0.2 in a float image with values between 0 and 1.) This parameter is always used before normalizing the finger orientation.

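
The effect corresponds directly to `numpy.pad` with a constant value; for example, with the defaults:

```python
import numpy as np

image = np.full((10, 12), 200, dtype="uint8")

# 5 pixels of constant padding on every border; 51 is the value
# suggested above for the VERA Fingervein database (0.2 * 255)
padded = np.pad(image, pad_width=5, mode="constant", constant_values=51)
```
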
class bob.bio.vein.preprocessor.Preprocessor(crop, mask, normalize, filter, **kwargs)

Extracts the mask and pre-processes fingervein images.

In this implementation, the finger image is processed in this order:

1. The image is pre-cropped to remove obvious non-finger image parts
2. The mask is extrapolated from the image using one of our Masker’s concrete implementations
3. The image is normalized with one of our Normalizer’s
4. The image is filtered with one of our Filter’s

Parameters:

• crop (Cropper) – An object that performs pre-cropping on the input image before a mask can be estimated. It removes parts of the image which are surely not part of the finger region you want to consider for the next steps.
• mask (Masker) – A Masker instance which will extrapolate the mask from the input image.
• normalize (Normalizer) – A Normalizer instance which will normalize the input image and its mask, returning a new image-mask pair.
• filter (Filter) – A Filter instance which will filter the input image and return a new filtered image. The filter instance also receives the extrapolated mask so it can, if desired, only apply the filtering operation where the mask has a value of True.

read_data(filename)[source]

Overrides the default method implementation to handle our tuple

write_data(data, filename)[source]

Overrides the default method implementation to handle our tuple
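
The four stages can be sketched end-to-end with hypothetical stand-ins for the collaborator objects (the real pipeline passes Cropper, Masker, Normalizer and Filter instances to the constructor; everything below is illustrative):

```python
import numpy as np

def crop(image):                # 1. pre-crop obvious non-finger borders
    return image[2:-2, 2:-2]

def estimate_mask(image):       # 2. extrapolate the finger mask
    return image > image.mean()

def normalize(image, mask):     # 3. orientation/scale normalization
    return image, mask          #    (identity here, for brevity)

def filter_image(image, mask):  # 4. mask-aware filtering
    out = image.copy()
    out[~mask] = 0
    return out

def preprocess(image):
    image = crop(image)
    mask = estimate_mask(image)
    image, mask = normalize(image, mask)
    return filter_image(image, mask), mask

raw = np.zeros((20, 20), dtype="uint8")
raw[8:14, 8:14] = 255           # synthetic bright "finger" region
processed, mask = preprocess(raw)
```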

class bob.bio.vein.preprocessor.TomesLeeMask(filter_height=4, filter_width=40, padder=<bob.bio.vein.preprocessor.Padder object>)

Estimates the finger region given an input NIR image using Lee et al.

This method is based on the work of Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction, E.C. Lee, H.C. Lee and K.R. Park, International Journal of Imaging Systems and Technology, Volume 19, Issue 3, September 2009, Pages 175–178, doi: 10.1002/ima.20193

This code is a variant of the Matlab implementation by Bram Ton, available at:

https://nl.mathworks.com/matlabcentral/fileexchange/35752-finger-region-localisation/content/lee_region.m

In this variant from Pedro Tome, the technique of filtering the image with a horizontal filter is also applied on the vertical axis. The objective is to find better limits on the horizontal axis in case finger images show the finger tip. If that is not your case, you may use the original variant LeeMask above.

Parameters:

• filter_height (int, optional) – Height of the contour mask in pixels; must be an even number.
• filter_width (int, optional) – Width of the contour mask in pixels.

class bob.bio.vein.preprocessor.WatershedMask(model, foreground_threshold, background_threshold)

Estimates the finger region given an input NIR image using Watershedding

This method uses the watershed morphological algorithm (https://en.wikipedia.org/wiki/Watershed_(image_processing)) for determining the finger mask given an input image.

The masker works by first determining image edges using a simple 2-D Sobel filter. The next step is to determine markers in the image for both the finger region and the background. Markers are set on the image using a pre-trained feed-forward neural network model (a multi-layer perceptron, or MLP) learned from existing annotations. The model is trained in a separate program and operates on 3x3 regions around the pixel to be classified as finger/background. The (y, x) location is also provided as input to the classifier, so the feature vector is composed of the 9 pixel values plus the (normalized) y and x coordinates of the pixel. The network then provides a prediction that depends on these input parameters: the closer the output is to 1.0, the more likely the pixel lies within the finger region.

Values output by the network are thresholded in order to remove uncertain markers. The threshold parameter is configurable.

A series of morphological opening operations is used, given the neural-net markers, to remove noise before watershedding the edges of the Sobel-filtered original image.

Parameters:

• model (str) – Path to the model file to be used for generating finger/background markers. This model should be pre-trained using a separate program.
• foreground_threshold (float) – Threshold on the logistic regression output (interval $$[0, 1]$$) above which we consider foreground (finger) markers provided by the network. The higher the value, the more selective the algorithm will be, and the fewer (foreground) markers will be used from the network selection. This value should be a floating-point number in the open interval $$(0.0, 1.0)$$. If background_threshold is not set, values for background selection will be set to $$1.0-T$$, where $$T$$ represents this threshold.
• background_threshold (float) – Threshold on the logistic regression output (interval $$[0, 1]$$) below which we consider background markers provided by the network. The smaller the value, the more selective the algorithm will be, and the fewer (background) markers will be used from the network selection. This value should be a floating-point number in the open interval $$(0.0, 1.0)$$. If foreground_threshold is not set, values for foreground selection will be set to $$1.0-T$$, where $$T$$ represents this threshold.

run(image)[source]

Fully preprocesses the input image and returns intermediate results

Parameters:

• image (numpy.ndarray) – A 2D numpy array of type uint8 with the input image.

Returns:

• numpy.ndarray – A 2D numpy array of type uint8 with the markers for foreground and background, selected by the neural network model.
• numpy.ndarray – A 2D numpy array of type float64 with the edges used to define the borders of the watershedding process.
• numpy.ndarray – A 2D numpy array of type bool with the calculated mask; True values correspond to regions where the finger is located.

## Pre-processor utilities

Utilities for preprocessing vein imagery

bob.bio.vein.preprocessor.utils.assert_points(area, points)[source]

Checks all points fall within the determined shape region, inclusively

This assertion function tests that all points given in points fall within the area provided in area.

Parameters:

• area (tuple) – A tuple containing the size of the limiting area in which the points should all be.
• points (numpy.ndarray) – A 2D numpy.ndarray with any number of rows (points) and 2 columns (representing y and x coordinates respectively), or any type convertible to this format. This array contains the points that will be checked for conformity.

Raises:

• AssertionError – In case one of the input points does not fall within the defined area.

bob.bio.vein.preprocessor.utils.fix_points(area, points)[source]

Checks/fixes all points so they fall within the determined shape region

Points lying outside the determined area are brought into the area by moving the offending coordinate to the border of said area.

Parameters:

• area (tuple) – A tuple containing the size of the limiting area in which the points should all be.
• points (numpy.ndarray) – A 2D numpy.ndarray with any number of rows (points) and 2 columns (representing y and x coordinates respectively), or any type convertible to this format. This array contains the points that will be checked/fixed for conformity. In case one of the points doesn't fall into the determined area, it is silently corrected so that it does.

Returns:

• numpy.ndarray – A new array of points with corrected coordinates.

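
A numpy-only sketch of the clamping behaviour (assuming "border" means the last valid index on each axis, which is an interpretation, not a guarantee of the library's exact convention):

```python
import numpy as np

def clamp_points(area, points):
    # move offending coordinates onto the border of the (height, width) area
    upper = np.array(area) - 1     # last valid index per axis (assumed)
    return np.clip(np.asarray(points), 0, upper)

fixed = clamp_points((10, 20), [(-3, 5), (4, 25), (9, 19)])
```
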
bob.bio.vein.preprocessor.utils.poly_to_mask(shape, points)[source]

Generates a binary mask from a set of 2D points

Parameters:

• shape (tuple) – A tuple containing the size of the output mask in height and width, for Bob compatibility (y, x).
• points (list) – A list of tuples containing the polygon points that form a region on the target mask. A line connecting these points will be drawn and all the points in the mask that fall on or within the polygon line will be set to True. All other points will have a value of False.

Returns:

• numpy.ndarray – A 2D numpy ndarray with dtype=bool containing the mask generated with the determined shape, using the points for the polygon.

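
A sketch with scikit-image's polygon rasterizer (edge handling may differ slightly from the library's own implementation; the helper name is illustrative):

```python
import numpy as np
from skimage.draw import polygon

def points_to_mask(shape, points):
    # rasterize the polygon defined by (y, x) points into a boolean mask
    points = np.asarray(points)
    mask = np.zeros(shape, dtype=bool)
    rr, cc = polygon(points[:, 0], points[:, 1], shape=shape)
    mask[rr, cc] = True
    return mask

mask = points_to_mask((8, 8), [(1, 1), (1, 6), (6, 6), (6, 1)])
```
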
bob.bio.vein.preprocessor.utils.mask_to_image(mask, dtype=<class 'numpy.uint8'>)[source]

Converts a binary (boolean) mask into an integer or floating-point image

This function converts a boolean binary mask into an image of the desired type, setting points where False is set to 0 and points where True is set to the most adequate value for the destination data type dtype. Here are the supported types and their ranges:

• numpy.uint8: [0, (2^8)-1]
• numpy.uint16: [0, (2^16)-1]
• numpy.uint32: [0, (2^32)-1]
• numpy.uint64: [0, (2^64)-1]
• numpy.float32: [0, 1.0] (fixed)
• numpy.float64: [0, 1.0] (fixed)
• numpy.float128: [0, 1.0] (fixed)

All other types are currently unsupported.

Parameters:

• mask (numpy.ndarray) – A 2D numpy ndarray with boolean data type, containing the mask that will be converted into an image.
• dtype (numpy.dtype) – A valid numpy data type from the list above for the resulting image.

Returns:

• numpy.ndarray – With the designated data type, containing the binary image formed from the mask.

Raises:

• TypeError – If the type is not supported by this function.

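
The conversion rule can be sketched as follows (illustrative reimplementation; the float128 case is folded into the generic float branch):

```python
import numpy as np

def mask_to_image(mask, dtype=np.uint8):
    dtype = np.dtype(dtype)
    if dtype.kind == "u":                # unsigned ints: True -> type max
        high = np.iinfo(dtype).max
    elif dtype.kind == "f":              # floats: True -> 1.0 (fixed)
        high = 1.0
    else:
        raise TypeError("unsupported dtype: %s" % dtype)
    return mask.astype(dtype) * dtype.type(high)

mask = np.array([[True, False], [False, True]])
img8 = mask_to_image(mask, np.uint8)     # True -> 255
imgf = mask_to_image(mask, np.float64)   # True -> 1.0
```
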
bob.bio.vein.preprocessor.utils.show_image(image)[source]

Shows a single image using PIL.Image.Image.show()

Warning

This function opens a new window. You must be operating interactively in a windowing system for it to work properly.

Parameters:

• image (numpy.ndarray) – A 2D numpy.ndarray composed of 8-bit unsigned integers containing the original image.
bob.bio.vein.preprocessor.utils.draw_mask_over_image(image, mask, color='red')[source]

Plots the mask over the image of a finger, for debugging purposes

Parameters:

• image (numpy.ndarray) – A 2D numpy.ndarray composed of 8-bit unsigned integers containing the original image.
• mask (numpy.ndarray) – A 2D numpy.ndarray composed of boolean values containing the calculated mask.

Returns:

• PIL.Image – An image in PIL format.

bob.bio.vein.preprocessor.utils.show_mask_over_image(image, mask, color='red')[source]

Plots the mask over the image of a finger using PIL.Image.Image.show()

Warning

This function opens a new window. You must be operating interactively in a windowing system for it to work properly.

Parameters:

• image (numpy.ndarray) – A 2D numpy.ndarray composed of 8-bit unsigned integers containing the original image.
• mask (numpy.ndarray) – A 2D numpy.ndarray composed of boolean values containing the calculated mask.
bob.bio.vein.preprocessor.utils.jaccard_index(a, b)[source]

Calculates the intersection over union for two masks

This function calculates the Jaccard index:

$\begin{split}J(A,B) &= \frac{|A \cap B|}{|A \cup B|} \\ &= \frac{|A \cap B|}{|A|+|B|-|A \cap B|}\end{split}$
Parameters:

• a (numpy.ndarray) – A 2D numpy array with dtype bool.
• b (numpy.ndarray) – A 2D numpy array with dtype bool.

Returns:

• float – The Jaccard index. The value lies in the interval $$[0, 1]$$. If a and b are equal, the similarity is maximum and the value output is 1.0. If the areas are exclusive, the value output by this function is 0.0.

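
A direct numpy rendering of the formula above:

```python
import numpy as np

def jaccard_index(a, b):
    # intersection over union of two boolean masks
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union)

a = np.zeros((10, 10), dtype=bool); a[:5, :] = True   # top half
b = np.zeros((10, 10), dtype=bool); b[:, :5] = True   # left half
```
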
bob.bio.vein.preprocessor.utils.intersect_ratio(a, b)[source]

Calculates the intersection ratio between the ground-truth and a probe

This function calculates the intersection ratio between a ground-truth mask ($$A$$; probably generated from an annotation) and a probe mask ($$B$$), returning the ratio of overlap when the probe is compared to the ground-truth data:

$R(A,B) = \frac{|A \cap B|}{|A|}$

So, if the probe occupies the entirety of the ground-truth data, then the output of this function is 1.0, otherwise, if areas are exclusive, then this function returns 0.0. The output of this function should be analyzed against the output of intersect_ratio_of_complement(), which provides the complementary information about the intersection of the areas being analyzed.

Parameters:

• a (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the ground-truth object.
• b (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the probe object that will be compared to the ground-truth.

Returns:

• float – The overlap ratio. The value lies in the interval $$[0, 1]$$.
bob.bio.vein.preprocessor.utils.intersect_ratio_of_complement(a, b)[source]

Calculates the intersection ratio between the complement of ground-truth and a probe

This function calculates the intersection ratio between the complement of a ground-truth mask ($$A$$; probably generated from an annotation) and a probe mask ($$B$$), returning the ratio of overlap when the probe is compared to the ground-truth data:

$R(A,B) = \frac{|A^c \cap B|}{|A|} = \frac{|B \setminus A|}{|A|}$

So, if the probe is totally inside the ground-truth data, then the output of this function is 0.0; otherwise, if the areas are exclusive, for example, then this function outputs a value greater than zero. The output of this function should be analyzed against the output of intersect_ratio(), which provides the complementary information about the intersection of the areas being analyzed.

Parameters:

• a (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the ground-truth object.
• b (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the probe object that will be compared to the ground-truth.

Returns:

• float – The overlap ratio between the probe area and the complement of the ground-truth area. There is no upper bound for this value: $$[0, +\infty)$$.
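
Both ratios can be sketched directly from the formulas (illustrative helpers mirroring the definitions above):

```python
import numpy as np

def intersect_ratio(a, b):
    # |A ∩ B| / |A|: fraction of the ground truth covered by the probe
    return float(np.logical_and(a, b).sum()) / float(a.sum())

def intersect_ratio_of_complement(a, b):
    # |A^c ∩ B| / |A|: probe area spilling outside the ground truth,
    # measured relative to the ground-truth size (unbounded above)
    return float(np.logical_and(~a, b).sum()) / float(a.sum())

gt = np.zeros((10, 10), dtype=bool);    gt[2:8, 2:8] = True     # 36 px
probe = np.zeros((10, 10), dtype=bool); probe[4:10, 4:10] = True  # 36 px
```
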

## Feature Extractors

class bob.bio.vein.extractor.LocalBinaryPatterns(block_size=59, block_overlap=15, lbp_radius=7, lbp_neighbor_count=16, lbp_uniform=True, lbp_circular=True, lbp_rotation_invariant=False, lbp_compare_to_average=False, lbp_add_average=False, sparse_histogram=False, split_histogram=None)[source]

LBP feature extractor

Parameters fixed based on L. Mirmohamadsadeghi and A. Drygajlo. Palm vein recognition using local texture patterns, IET Biometrics, pp. 1-9, 2013.

lbp_features(finger_image, mask)[source]

Computes and returns the LBP features for the given input fingervein image

class bob.bio.vein.extractor.MaximumCurvature(sigma=5)[source]

MiuraMax feature extractor.

Based on N. Miura, A. Nagasaka, and T. Miyatake, Extraction of Finger-Vein Pattern Using Maximum Curvature Points in Image Profiles. Proceedings on IAPR conference on machine vision applications, 9 (2005), pp. 347–350.

Parameters: sigma (int, optional) – standard deviation for the gaussian smoothing kernel used to denoise the input image. The width of the gaussian kernel will be set automatically to 4x this value (in pixels).
binarise(G)[source]

Binarise vein images using a threshold assuming distribution is diphasic

This function implements Step 3 of the paper. It binarises the 2-D array G assuming its histogram is mostly diphasic and using a median value.

Parameters:

• G (numpy.ndarray) – A 2-dimensional 64-bit float array containing the result of the filtering operation. G has the dimensions of the original image.

Returns:

• numpy.ndarray – A 2-dimensional 64-bit float array with the same dimensions as the input image, but containing its vein-binarised version. The output of this function corresponds to the output of the method.

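
A sketch of a median-based binarisation consistent with the description above (whether the median is taken over positive responses only is an assumption here):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.random((32, 32))            # stand-in for the filtered responses

# threshold at the median of the positive responses (assumed rule)
threshold = np.median(G[G > 0])
binary = (G > threshold).astype("float64")
```
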
connect_centres(V)[source]

Connects vein centres by filtering vein probabilities V

This function does the equivalent of Step 2 / Equation 4 of Miura's paper.

The operation is applied to each row of the V matrix, which may be traversed horizontally, vertically or on a diagonal direction. The pixel value at the centre of a windowing operation (width = 5) is then reset with the following value:

$b[w] = \min\left(\max(a[w+1], a[w+2]),\ \max(a[w-1], a[w-2])\right)$
Parameters:

• V (numpy.ndarray) – The accumulated vein centre probabilities V. This is a 2D array with 64-bit floats and is defined by Equation (3) of the paper.

Returns:

• numpy.ndarray – A 3-dimensional 64-bit float array Cd containing the result of the filtering operation for each of the directions. Cd has the dimensions of $$\kappa$$ and $$V_i$$. Each of the planes corresponds to the horizontal, vertical, +45 degree and -45 degree directions.

detect_valleys(image, mask)[source]

Detects valleys on the image respecting the mask

This step corresponds to Step 1-1 in the original paper. The objective is, for all 4 cross-sections (z) of the image (horizontal, vertical, 45 and -45 diagonals), to compute the following proposed valley detector as defined in Equation 1, page 348:

$\kappa(z) = \frac{d^2P_f(z)/dz^2}{(1 + (dP_f(z)/dz)^2)^\frac{3}{2}}$

We start the algorithm by smoothing the image with a 2-dimensional gaussian filter. The equation that defines the kernel for the filter is:

$\mathcal{N}(x,y)=\frac{1}{2\pi\sigma^2}e^\frac{-(x^2+y^2)}{2\sigma^2}$

This is done to avoid noise from the raw data (from the sensor). The maximum curvature method then requires we compute the first and second derivative of the image for all cross-sections, as per the equation above.

We instead take the following equivalent approach:

1. construct a gaussian filter
2. take the first (dh/dx) and second (d^2h/dx^2) derivatives of the filter
3. calculate the first and second derivatives of the smoothed signal by convolving the original image with the derivative filters from step 2; this is done for all directions we're interested in: horizontal, vertical and the two diagonals

Note

Item 3 above is only possible thanks to the steerable filter property of the gaussian kernel. See “The Design and Use of Steerable Filters” from Freeman and Adelson, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 9, September 1991.

Parameters:

• image (numpy.ndarray) – An array of 64-bit floats containing the input image.
• mask (numpy.ndarray) – An array, of the same size as image, containing a mask (booleans) indicating where the finger is on image.

Returns:

• numpy.ndarray – A 3-dimensional array of 64-bit floats containing $$\kappa$$ for all considered directions. $$\kappa$$ has the same shape as image, except for the third dimension, which provides planes for the cross-section valley detections for each of the contemplated directions, in this order: horizontal, vertical, +45 degrees, -45 degrees.

eval_vein_probabilities(k)[source]

Evaluates joint vein centre probabilities from cross-sections

This function takes $$\kappa$$ and calculates the vein centre probabilities, taking into consideration valley widths and depths. It aggregates the following steps from the paper:

• [Step 1-2] Detection of the centres of veins
• [Step 1-3] Assignment of scores to the centre positions
• [Step 1-4] Calculation of all the profiles

Once the arrays of curvatures (concavities) are calculated, detection works as follows: the code scans the image in a precise direction (vertical, horizontal, diagonal, etc.), tries to find a concavity in that direction and measures its width (see Wr in Figure 3 of the original paper). It then identifies the centre of the concavity and assigns to it a score that depends on the concavity's width (Wr) and maximum depth (where the peak of darkness occurs). This score is accumulated in a variable (Vt), which is re-used for all directions. Vt represents the vein probabilities from the paper.

Parameters:

• k (numpy.ndarray) – A 3-dimensional array of 64-bit floats containing $$\kappa$$ for all considered directions. $$\kappa$$ has the same shape as the input image, except for the third dimension, which provides planes for the cross-section valley detections for each of the contemplated directions, in this order: horizontal, vertical, +45 degrees, -45 degrees.

Returns:

• numpy.ndarray – The un-accumulated vein centre probabilities V. This is a 3D array with 64-bit floats with the same dimensions as the input array k. You must accumulate (sum) over the last dimension to retrieve the variable V from the paper.

class bob.bio.vein.extractor.NormalisedCrossCorrelation[source]

Normalised Cross-Correlation feature extractor

Based on M. Kono, H. Ueki, and S.Umemura. Near-infrared finger vein patterns for personal identification. Appl. Opt. 41(35):7429-7436, 2002

class bob.bio.vein.extractor.PrincipalCurvature(sigma=2, threshold=1.3)[source]

Principal Curvature feature extractor

Based on J.H. Choi, W. Song, T. Kim, S.R. Lee and H.C. Kim, Finger vein extraction using gradient normalization and principal curvature. Proceedings on Image Processing: Machine Vision Applications II, SPIE 7251, (2009)

principal_curvature(image, mask)[source]

Computes and returns the Principal Curvature features for the given input fingervein image

class bob.bio.vein.extractor.RepeatedLineTracking(iterations=3000, r=1, profile_w=21, rescale=True, seed=0)[source]

Repeated Line Tracking feature extractor

Based on N. Miura, A. Nagasaka, and T. Miyatake. Feature extraction of finger vein patterns based on repeated line tracking and its application to personal identification. Machine Vision and Applications, Vol. 15, Num. 4, pp. 194–203, 2004

repeated_line_tracking(finger_image, mask)[source]

Computes and returns the Repeated Line Tracking features for the given input fingervein image

skeletonize(img)[source]

class bob.bio.vein.extractor.WideLineDetector(radius=5, threshold=1, g=41, rescale=True)[source]

Wide Line Detector feature extractor

Based on B. Huang, Y. Dai, R. Li, D. Tang and W. Li. Finger-vein authentication based on wide line detector and pattern normalization, Proceedings on 20th International Conference on Pattern Recognition (ICPR), 2010.

wide_line_detector(finger_image, mask)[source]

Computes and returns the Wide Line Detector features for the given input fingervein image

## Matching Algorithms

class bob.bio.vein.algorithm.Correlate

Correlate probe and model without cropping

The method is based on “cross-correlation” between a model and a probe image. The difference between this and MiuraMatch is that no cropping takes place on this implementation. We simply fill the excess boundary with zeros and extract the valid correlation region between the probe and the model using skimage.feature.match_template().

enroll(enroll_features)[source]

Enrolls the model by computing an average graph for each model

score(model, probe)[source]

Computes the score between the probe and the model.

Parameters:

• model (numpy.ndarray) – The model of the user to test the probe against.
• probe (numpy.ndarray) – The probe to test.

Returns:

• float – Value between 0 and 0.5; a larger value means a better match.

class bob.bio.vein.algorithm.HammingDistance

This class calculates the Hamming distance between two binary images.

The enrollment and scoring functions of this class are implemented by its base class, bob.bio.base.algorithm.Distance.

The inputs to this algorithm should be of binary nature (boolean arrays). Each binary input is first flattened to form a one-dimensional vector. The Hamming distance is then calculated between these two binary vectors.

The current implementation uses scipy.spatial.distance.hamming(), which returns a scalar 64-bit float to represent the proportion of mismatching corresponding bits between the two binary vectors.

The base class constructor parameter is_distance_function is set to False on purpose to ensure that calculated distances are returned as positive values rather than negative.
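For example, using the same scipy call the implementation relies on:

```python
import numpy as np
from scipy.spatial.distance import hamming

a = np.array([[1, 0, 1, 1], [0, 0, 1, 0]], dtype=bool)
b = np.array([[1, 1, 1, 1], [0, 0, 0, 0]], dtype=bool)

# flatten each binary image, then take the proportion of differing bits
score = hamming(a.flatten(), b.flatten())   # 2 of 8 bits differ
```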

class bob.bio.vein.algorithm.MiuraMatch(ch=80, cw=90)

Finger vein matching: match ratio via cross-correlation

The method is based on "cross-correlation" between a model and a probe image. It convolves the binary image(s) representing the model with the binary image representing the probe (rotated by 180 degrees) and evaluates how they cross-correlate. If the model and probe are very similar, the peak of the correlation output approaches a maximum. The value is then normalized by the sum of the pixels lit in both binary images. Therefore, the output of this method is a floating-point number in the range $$[0, 0.5]$$: the higher the value, the better the match.

In case model and probe represent images from the same vein structure, but are misaligned, the output is not guaranteed to be accurate. To mitigate this aspect, Miura et al. proposed to add a small cropping factor to the model image, assuming not much information is available on the borders (ch, for the vertical direction and cw, for the horizontal direction). This allows the convolution to yield searches for different areas in the probe image. The maximum value is then taken from the resulting operation. The convolution result is normalized by the pixels lit in both the cropped model image and the matching pixels on the probe that yield the maximum on the resulting convolution.

For this to work properly, input images are supposed to be binary in nature, with zeros and ones.

Based on N. Miura, A. Nagasaka, and T. Miyatake. Feature extraction of finger vein patterns based on repeated line tracking and its application to personal identification. Machine Vision and Applications, Vol. 15, Num. 4, pp. 194–203, 2004

Parameters:

• ch (int, optional) – Maximum search displacement in y-direction.
• cw (int, optional) – Maximum search displacement in x-direction.

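
The matching idea can be sketched as follows (a simplified, illustrative single-image version; the class itself handles enrollment with several images):

```python
import numpy as np
from scipy.signal import fftconvolve

def miura_match(model, probe, ch=8, cw=9):
    h, w = model.shape
    crop = model[ch:h - ch, cw:w - cw]        # crop the model by (ch, cw)
    # correlation == convolution with the model rotated by 180 degrees
    corr = fftconvolve(probe, np.rot90(crop, k=2), mode="valid")
    t0, s0 = np.unravel_index(corr.argmax(), corr.shape)
    peak = corr[t0, s0]
    # normalize the peak by the lit pixels in the cropped model and in
    # the matching probe region
    matched = probe[t0:t0 + h - 2 * ch, s0:s0 + w - 2 * cw]
    return peak / (crop.sum() + matched.sum())

rng = np.random.default_rng(3)
image = (rng.random((40, 50)) > 0.7).astype("float64")
score_same = miura_match(image, image)    # identical structures -> ~0.5
```
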
enroll(enroll_features)[source]

Enrolls the model by computing an average graph for each model

score(model, probe)[source]

Computes the score between the probe and the model.

Parameters:

• model (numpy.ndarray) – The model of the user to test the probe against.
• probe (numpy.ndarray) – The probe to test.

Returns:

• float – Value between 0 and 0.5; a larger value means a better match.