Python API¶
This section lists all the functionality available in this library that can be used for vein experiments.
Database Interfaces¶
Common Utilities¶
Database definitions for Vein Recognition

class bob.bio.vein.database.AnnotatedArray[source]¶
Bases: numpy.ndarray
Defines a numpy array subclass that can carry its own metadata.
Vera Fingervein Database¶

class bob.bio.vein.database.verafinger.File(f)[source]¶
Bases: bob.bio.base.database.BioFile
Implements extra properties of vein files for the Vera Fingervein database.
Parameters: f (object) – Low-level file (or sample) object that is kept inside.

class bob.bio.vein.database.verafinger.Database(**kwargs)[source]¶
Bases: bob.bio.base.database.BioDatabase
Implements the verification API for querying the Vera Fingervein database.
UTFVP Database¶

class bob.bio.vein.database.utfvp.File(f)[source]¶
Bases: bob.bio.base.database.BioFile
Implements extra properties of vein files for the UTFVP database.
Parameters: f (object) – Low-level file (or sample) object that is kept inside.

class bob.bio.vein.database.utfvp.Database(**kwargs)[source]¶
Bases: bob.bio.base.database.BioDatabase
Implements the verification API for querying the UTFVP database.
3D Fingervein Database¶

class bob.bio.vein.database.fv3d.File(f)[source]¶
Bases: bob.bio.base.database.BioFile
Implements extra properties of vein files for the 3D Fingervein database.
Parameters: f (object) – Low-level file (or sample) object that is kept inside.

class bob.bio.vein.database.fv3d.Database(**kwargs)[source]¶
Bases: bob.bio.base.database.BioDatabase
Implements the verification API for querying the 3D Fingervein database.
Preprocessors¶

class bob.bio.vein.preprocessor.AnnotatedRoIMask¶
Bases: bob.bio.vein.preprocessor.Masker
Devises the mask from the annotated RoI.

class bob.bio.vein.preprocessor.Cropper¶
Bases: object
This is the base class for all croppers. It defines the minimum requirements for all derived cropper classes.

class bob.bio.vein.preprocessor.FixedCrop(top=0, bottom=0, left=0, right=0)¶
Bases: bob.bio.vein.preprocessor.Cropper
Implements cropping using a fixed suppression of border pixels.
The defaults suppress no lines from the image and return an image like the original. If a bob.bio.vein.database.AnnotatedArray is passed, we also check its .metadata['roi'] component and correct it so that the annotated RoI points remain consistent on the cropped image.
Note: before choosing values, you're responsible for knowing the orientation of the images fed into this cropper.
Parameters:
  top (int, optional) – Number of lines to suppress from the top of the image. The top of the image corresponds to y = 0.
  bottom (int, optional) – Number of lines to suppress from the bottom of the image. The bottom of the image corresponds to y = height.
  left (int, optional) – Number of columns to suppress from the left of the image. The left of the image corresponds to x = 0.
  right (int, optional) – Number of columns to suppress from the right of the image. The right of the image corresponds to x = width.
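The fixed border suppression described above amounts to plain array slicing. A minimal sketch (the function name `fixed_crop` is illustrative, not part of the library):

```python
import numpy as np

def fixed_crop(image, top=0, bottom=0, left=0, right=0):
    """Suppress a fixed number of border rows/columns from a 2D image."""
    h, w = image.shape[:2]
    # with the defaults (all zeros) this returns the full image unchanged
    return image[top:h - bottom, left:w - right]
```

With all parameters at their defaults the slice bounds are `[0:h, 0:w]`, which is why the default configuration is a no-op, matching the documented behaviour.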

class bob.bio.vein.preprocessor.FixedMask(top=0, bottom=0, left=0, right=0)¶
Bases: bob.bio.vein.preprocessor.Masker
Implements masking using a fixed suppression of border pixels.
The defaults mask no lines from the image and return a mask of the same size as the original image where all values are True.
Note: before choosing values, you're responsible for knowing the orientation of the images fed into this masker.
Parameters:
  top (int, optional) – Number of lines to suppress from the top of the image. The top of the image corresponds to y = 0.
  bottom (int, optional) – Number of lines to suppress from the bottom of the image. The bottom of the image corresponds to y = height.
  left (int, optional) – Number of columns to suppress from the left of the image. The left of the image corresponds to x = 0.
  right (int, optional) – Number of columns to suppress from the right of the image. The right of the image corresponds to x = width.

class bob.bio.vein.preprocessor.HistogramEqualization¶
Bases: bob.bio.vein.preprocessor.Filter
Applies histogram equalization on the input image inside the mask.
In this implementation, only the pixels that lie inside the mask are used to calculate the histogram equalization parameters. Because of this particularity, we don't use Bob's implementation of histogram equalization and have one based exclusively on scikit-image.
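The idea of equalizing from masked pixels only can be sketched in a few lines of numpy: build the cumulative histogram from the pixels under the mask, then map every pixel of the image through that CDF. This is an illustration of the concept, not the library's scikit-image-based implementation:

```python
import numpy as np

def masked_histeq(image, mask):
    """Equalize an 8-bit image using only the pixels under the boolean mask
    to build the equalization mapping (a sketch of the idea above)."""
    values = image[mask]                       # only in-mask pixels count
    hist = np.bincount(values, minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    # every pixel (inside or outside the mask) is mapped through the CDF
    return (255 * cdf[image]).astype(np.uint8)
```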

class bob.bio.vein.preprocessor.HuangNormalization(padding_width=5, padding_constant=51)¶
Bases: bob.bio.vein.preprocessor.Normalizer
Simple finger normalization from Huang et al.
Based on B. Huang, Y. Dai, R. Li, D. Tang and W. Li, Finger-vein authentication based on wide line detector and pattern normalization, Proceedings of the 20th International Conference on Pattern Recognition (ICPR), 2010.
This implementation aligns the finger to the centre of the image using an affine transformation. The elliptic projection described in the referenced paper is not included.
In order to define the affine transformation to be performed, the algorithm first calculates the center of the finger edges for each column (column-wise) and then the best linear fit parameters for a straight line passing through those points.
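The column-wise center calculation and linear fit can be sketched with `numpy.polyfit`. This is a simplified illustration of the line-fitting step only (no affine warp), and `center_line_fit` is a hypothetical helper name:

```python
import numpy as np

def center_line_fit(mask):
    """For each column, take the vertical center of the masked (finger)
    region, then least-squares fit a line y = a*x + b through the centers."""
    cols = np.arange(mask.shape[1])
    # mean of the row indices where the mask is True, per column
    centers = np.array([np.nonzero(mask[:, x])[0].mean() for x in cols])
    a, b = np.polyfit(cols, centers, 1)  # slope and intercept of best fit
    return a, b
```

The slope `a` gives the finger's tilt; the affine transformation then rotates/translates the image so this line becomes horizontal and centered.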

class bob.bio.vein.preprocessor.KonoMask(sigma=5, padder=<bob.bio.vein.preprocessor.Padder object>)¶
Bases: bob.bio.vein.preprocessor.Masker
Estimates the finger region given an input NIR image, using Kono et al.
This method is based on the work of M. Kono, H. Ueki and S. Umemura, Near-infrared finger vein patterns for personal identification, Applied Optics, Vol. 41, Issue 35, pp. 7429-7436 (2002).
Parameters:
  sigma (float, optional) – The standard deviation of the Gaussian blur filter applied to low-pass the input image (background extraction). Defaults to 5.
  padder (Padder, optional) – If passed, the image is padded before evaluating the mask. The returned value has the padding removed and is, therefore, of the exact size of the input image.

class bob.bio.vein.preprocessor.LeeMask(filter_height=4, filter_width=40, padder=<bob.bio.vein.preprocessor.Padder object>)¶
Bases: bob.bio.vein.preprocessor.Masker
Estimates the finger region given an input NIR image, using Lee et al.
This method is based on the work Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction, E.C. Lee, H.C. Lee and K.R. Park, International Journal of Imaging Systems and Technology, Volume 19, Issue 3, September 2009, Pages 175–178, doi: 10.1002/ima.20193.
This code is based on the Matlab implementation by Bram Ton.
In this method, we calculate the mask of the finger independently for each column of the input image. First, the image is convolved with a [1, -1] filter of size (self.filter_height, self.filter_width). Then, the upper and lower parts of the resulting filtered image are separated. The location of the maximum response in the upper part is found, and likewise the location of the minimum in the lower part. The mask is then calculated, per column, as the region starting at the maximum in the upper part and extending down to the minimum detected in the lower part.
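The column-wise procedure can be illustrated with a simplified 1D sketch: a step kernel responds strongly at the dark-to-bright transition (finger top edge) and negatively at the bright-to-dark one (bottom edge). This is an illustration of the idea, not the library's implementation, which also averages over `filter_width` columns:

```python
import numpy as np

def lee_mask(image, filter_height=4):
    """Column-wise finger mask in the spirit of Lee et al.: filter each
    column with a [+1, -1] step kernel, take the maximum response in the
    upper half (top edge) and the minimum in the lower half (bottom edge)."""
    h, w = image.shape
    half = filter_height // 2
    kernel = np.concatenate([np.ones(half), -np.ones(half)])
    mask = np.zeros((h, w), dtype=bool)
    for x in range(w):
        response = np.convolve(image[:, x].astype(float), kernel, mode='same')
        top = np.argmax(response[: h // 2])              # strongest top edge
        bottom = h // 2 + np.argmin(response[h // 2:])   # strongest bottom edge
        mask[top:bottom + 1, x] = True                   # finger lies between
    return mask
```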

class bob.bio.vein.preprocessor.Masker¶
Bases: object
This is the base class for all maskers. It defines the minimum requirements for all derived masker classes.

class bob.bio.vein.preprocessor.NoCrop¶
Bases: bob.bio.vein.preprocessor.FixedCrop
Convenience: same as FixedCrop().

class bob.bio.vein.preprocessor.NoFilter¶
Bases: bob.bio.vein.preprocessor.Filter
Applies no filtering on the input image, returning it without changes.

class bob.bio.vein.preprocessor.NoMask¶
Bases: bob.bio.vein.preprocessor.FixedMask
Convenience: same as FixedMask().

class bob.bio.vein.preprocessor.NoNormalization¶
Bases: bob.bio.vein.preprocessor.Normalizer
Trivial implementation with no normalization.

class bob.bio.vein.preprocessor.Normalizer¶
Bases: object
Objects of this class normalize the input image orientation and scale.

class bob.bio.vein.preprocessor.Padder(padding_width=5, padding_constant=51)¶
Bases: object
A class that pads the input image, returning a new object.
Parameters:
  padding_width (int, optional) – How much padding (in pixels) to add around the borders of the input image. We normally keep this value at its default (5 pixels). This parameter is always used before normalizing the finger orientation.
  padding_constant (int, optional) – The value of the pixels added as padding. This number should be between 0 and 255. From Pedro Tome: for UTFVP (high-quality samples), use 0; for the VERA Fingervein database (low-quality samples), use 51 (which corresponds to 0.2 in a float image with values between 0 and 1). This parameter is always used before normalizing the finger orientation.
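Constant-value border padding of this kind maps directly onto `numpy.pad`; a minimal sketch with the documented defaults (`pad_image` is an illustrative name, not the library's API):

```python
import numpy as np

def pad_image(image, padding_width=5, padding_constant=51):
    """Pad all four borders of the image with a constant value,
    as described for the Padder above."""
    return np.pad(image, padding_width, mode='constant',
                  constant_values=padding_constant)
```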

class bob.bio.vein.preprocessor.Preprocessor(crop, mask, normalize, filter, **kwargs)¶
Bases: bob.bio.base.preprocessor.Preprocessor
Extracts the mask and preprocesses fingervein images.
In this implementation, the finger image is processed in this order:
  1. The image is pre-cropped to remove obvious non-finger image parts
  2. The mask is extrapolated from the image using one of our Masker's concrete implementations
  3. The image is normalized with one of our Normalizer's
  4. The image is filtered with one of our Filter's
Parameters:
  crop (Cropper) – An object that performs pre-cropping on the input image before a mask can be estimated. It removes parts of the image which are surely not part of the finger region you'll want to consider for the next steps.
  mask (Masker) – An object representing a Masker instance which extrapolates the mask from the input image.
  normalize (Normalizer) – An object representing a Normalizer instance which normalizes the input image and its mask, returning a new image-mask pair.
  filter (Filter) – An object representing a Filter instance which filters the input image and returns a new filtered image. The filter instance also receives the extrapolated mask so it can, if desired, only apply the filtering operation where the mask has a value of True.
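The crop → mask → normalize → filter chain can be sketched with stand-in callables. The components below are trivial placeholders illustrating only the data flow; the real pipeline uses the Cropper, Masker, Normalizer and Filter classes from bob.bio.vein.preprocessor:

```python
import numpy as np

# trivial stand-ins for the four pipeline stages
crop = lambda image: image[1:-1, 1:-1]               # a fixed "Cropper"
mask = lambda image: image > image.mean()            # a crude "Masker"
normalize = lambda image, m: (image, m)              # identity "Normalizer"
filt = lambda image, m: (np.where(m, image, 0), m)   # zero outside the mask

def preprocess(image):
    """Chain the stages in the documented order, threading the mask along."""
    image = crop(image)
    m = mask(image)
    image, m = normalize(image, m)
    return filt(image, m)
```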

class bob.bio.vein.preprocessor.TomesLeeMask(filter_height=4, filter_width=40, padder=<bob.bio.vein.preprocessor.Padder object>)¶
Bases: bob.bio.vein.preprocessor.Masker
Estimates the finger region given an input NIR image, using a variant of Lee et al.
This method is based on the work Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction, E.C. Lee, H.C. Lee and K.R. Park, International Journal of Imaging Systems and Technology, Volume 19, Issue 3, September 2009, Pages 175–178, doi: 10.1002/ima.20193.
This code is a variant of the Matlab implementation by Bram Ton.
In this variant from Pedro Tome, the technique of filtering the image with a horizontal filter is also applied on the vertical axis. The objective is to find better limits on the horizontal axis in case finger images show the finger tip. If that is not your case, you may use the original variant, LeeMask, above.

class bob.bio.vein.preprocessor.WatershedMask(model, foreground_threshold, background_threshold)¶
Bases: bob.bio.vein.preprocessor.Masker
Estimates the finger region given an input NIR image, using watershedding.
This method uses the Watershed Morphological Algorithm (https://en.wikipedia.org/wiki/Watershed_(image_processing)) to determine the finger mask given an input image.
The masker works by first determining image edges using a simple 2D Sobel filter. The next step is to determine markers in the image for both the finger region and the background. Markers are set on the image using a pre-trained feed-forward neural network model (a multi-layer perceptron, or MLP) learned from existing annotations. The model is trained in a separate program and operates on 3x3 regions around the pixel to be classified as finger/background. The (y, x) location is also provided as input to the classifier, so the feature vector is composed of the 9 pixel values plus the (normalized) y and x coordinates of the pixel. The network then provides a prediction that depends on these input parameters: the closer the output is to 1.0, the more likely the pixel lies within the finger region.
Values output by the network are thresholded in order to remove uncertain markers; the threshold parameters are configurable. A series of morphological opening operations is used to remove noise from the neural-net markers before watershedding the edges from the Sobel-filtered original image.
Parameters:
  model (str) – Path to the model file to be used for generating finger/background markers. This model should be pre-trained using a separate program.
  foreground_threshold (float) – Threshold on the logistic regression output (interval \([0, 1]\)) above which we consider finger markers provided by the network. The higher the value, the more selective the algorithm will be and the fewer (foreground) markers will be used from the network selection. This value should be a floating point number in the open-set interval \((0.0, 1.0)\). If background_threshold is not set, the threshold for background selection will be set to \(1.0 - T\), where \(T\) represents this threshold.
  background_threshold (float) – Threshold on the logistic regression output (interval \([0, 1]\)) below which we consider background markers provided by the network. The smaller the value, the more selective the algorithm will be and the fewer (background) markers will be used from the network selection. This value should be a floating point number in the open-set interval \((0.0, 1.0)\). If foreground_threshold is not set, the threshold for foreground selection will be set to \(1.0 - T\), where \(T\) represents this threshold.

run(image)[source]¶
Fully preprocesses the input image and returns intermediate results.
Parameters: image (numpy.ndarray) – A 2D numpy array of type uint8 with the input image.
Returns:
  numpy.ndarray – A 2D numpy array of type uint8 with the markers for foreground and background, selected by the neural network model.
  numpy.ndarray – A 2D numpy array of type float64 with the edges used to define the borders of the watershedding process.
  numpy.ndarray – A 2D numpy array of type boolean with the calculated mask. True values correspond to regions where the finger is located.
Preprocessor utilities¶
Utilities for preprocessing vein imagery

bob.bio.vein.preprocessor.utils.assert_points(area, points)[source]¶
Checks that all points fall within the determined shape region, inclusively.
This assertion function tests that all points given in points fall within a certain area provided in area.
Parameters:
  area (tuple) – A tuple containing the size of the limiting area in which the points should all be.
  points (numpy.ndarray) – A 2D numpy ndarray with any number of rows (points) and 2 columns (representing y and x coordinates respectively), or any type convertible to this format. This array contains the points that will be checked for conformity. In case one of the points doesn't fall into the determined area, an assertion is raised.
Raises: AssertionError – In case one of the input points does not fall within the area defined.

bob.bio.vein.preprocessor.utils.fix_points(area, points)[source]¶
Checks/fixes all points so they fall within the determined shape region.
Points lying outside the determined area are brought into the area by moving the offending coordinate to the border of the said area.
Parameters:
  area (tuple) – A tuple containing the size of the limiting area in which the points should all be.
  points (numpy.ndarray) – A 2D numpy.ndarray with any number of rows (points) and 2 columns (representing y and x coordinates respectively), or any type convertible to this format. This array contains the points that will be checked/fixed for conformity. In case one of the points doesn't fall into the determined area, it is silently corrected so it does.
Returns: A new array of points with corrected coordinates.
Return type: numpy.ndarray
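Moving offending coordinates to the border is a clipping operation; a minimal sketch of the behaviour described above (not the library's source):

```python
import numpy as np

def fix_points(area, points):
    """Clip (y, x) points into [0, height-1] x [0, width-1]."""
    points = np.asarray(points)
    bounds = np.array(area) - 1   # last valid index on each axis
    return np.clip(points, 0, bounds)
```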

bob.bio.vein.preprocessor.utils.poly_to_mask(shape, points)[source]¶
Generates a binary mask from a set of 2D points.
Parameters:
  shape (tuple) – A tuple containing the size of the output mask in height and width, for Bob compatibility (y, x).
  points (list) – A list of tuples containing the polygon points that form a region on the target mask. A line connecting these points will be drawn, and all the points in the mask that fall on or within the polygon line will be set to True. All other points will have a value of False.
Returns: A 2D numpy ndarray with dtype=bool, containing the mask generated with the determined shape, using the points for the polygon.
Return type: numpy.ndarray

bob.bio.vein.preprocessor.utils.mask_to_image(mask, dtype=<class 'numpy.uint8'>)[source]¶
Converts a binary (boolean) mask into an integer or floating-point image.
This function converts a boolean binary mask into an image of the desired type, setting points where False to 0 and points where True to the most adequate value for the destination data type dtype. These are the supported types and their ranges:
  numpy.uint8: [0, (2^8)-1]
  numpy.uint16: [0, (2^16)-1]
  numpy.uint32: [0, (2^32)-1]
  numpy.uint64: [0, (2^64)-1]
  numpy.float32: [0, 1.0] (fixed)
  numpy.float64: [0, 1.0] (fixed)
  numpy.float128: [0, 1.0] (fixed)
All other types are currently unsupported.
Parameters:
  mask (numpy.ndarray) – A 2D numpy ndarray with boolean data type, containing the mask that will be converted into an image.
  dtype (numpy.dtype) – A valid numpy data type from the list above for the resulting image.
Returns: An array with the designated data type, containing the binary image formed from the mask.
Return type: numpy.ndarray
Raises: TypeError – If the type is not supported by this function.
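The dtype-dependent mapping above can be sketched with numpy's type-introspection helpers. This is an illustration of the mapping, covering only unsigned integers and floats, not the library's source:

```python
import numpy as np

def mask_to_image(mask, dtype=np.uint8):
    """Map False -> 0 and True -> the dtype's top-of-range value."""
    dtype = np.dtype(dtype)
    if dtype.kind == 'u':
        high = np.iinfo(dtype).max   # e.g. 255 for uint8, 65535 for uint16
    elif dtype.kind == 'f':
        high = 1.0                   # floats use the fixed [0, 1.0] range
    else:
        raise TypeError('unsupported dtype: %s' % dtype)
    return mask.astype(dtype) * dtype.type(high)
```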

bob.bio.vein.preprocessor.utils.show_image(image)[source]¶
Shows a single image using PIL.Image.Image.show().
Warning: this function opens a new window. You must be operating interactively in a windowing system for it to work properly.
Parameters: image (numpy.ndarray) – A 2D numpy.ndarray composed of 8-bit unsigned integers containing the original image.

bob.bio.vein.preprocessor.utils.draw_mask_over_image(image, mask, color='red')[source]¶
Plots the mask over the image of a finger, for debugging purposes.
Parameters:
  image (numpy.ndarray) – A 2D numpy.ndarray composed of 8-bit unsigned integers containing the original image.
  mask (numpy.ndarray) – A 2D numpy.ndarray composed of boolean values containing the calculated mask.
Returns: An image in PIL format.
Return type: PIL.Image.Image

bob.bio.vein.preprocessor.utils.show_mask_over_image(image, mask, color='red')[source]¶
Plots the mask over the image of a finger using PIL.Image.Image.show().
Warning: this function opens a new window. You must be operating interactively in a windowing system for it to work properly.
Parameters:
  image (numpy.ndarray) – A 2D numpy.ndarray composed of 8-bit unsigned integers containing the original image.
  mask (numpy.ndarray) – A 2D numpy.ndarray composed of boolean values containing the calculated mask.

bob.bio.vein.preprocessor.utils.jaccard_index(a, b)[source]¶
Calculates the intersection over union for two masks.
This function calculates the Jaccard index:
\[\begin{split}J(A,B) &= \frac{|A \cap B|}{|A \cup B|} \\ &= \frac{|A \cap B|}{|A| + |B| - |A \cap B|}\end{split}\]
Parameters:
  a (numpy.ndarray) – A 2D numpy array with dtype bool.
  b (numpy.ndarray) – A 2D numpy array with dtype bool.
Returns: The floating point number that corresponds to the Jaccard index. The float value lies inside the interval \([0, 1]\). If a and b are equal, the similarity is maximum and the output value is 1.0. If the areas are exclusive, the value output by this function is 0.0.
Return type: float
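The index is two logical operations and a division in numpy; a minimal sketch of the formula above (not the library's source):

```python
import numpy as np

def jaccard_index(a, b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union   # 1.0 for identical masks, 0.0 for disjoint ones
```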

bob.bio.vein.preprocessor.utils.intersect_ratio(a, b)[source]¶
Calculates the intersection ratio between the ground-truth and a probe.
This function calculates the intersection ratio between a ground-truth mask (\(A\); probably generated from an annotation) and a probe mask (\(B\)), returning the ratio of overlap when the probe is compared to the ground-truth data:
\[R(A,B) = \frac{|A \cap B|}{|A|}\]
So, if the probe occupies the entirety of the ground-truth data, the output of this function is 1.0; otherwise, if the areas are exclusive, this function returns 0.0. The output of this function should be analyzed against the output of intersect_ratio_of_complement(), which provides the complementary information about the intersection of the areas being analyzed.
Parameters:
  a (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the ground-truth object.
  b (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the probe object that will be compared to the ground-truth.
Returns: The floating point number that corresponds to the overlap ratio. The float value lies inside the interval \([0, 1]\).
Return type: float

bob.bio.vein.preprocessor.utils.intersect_ratio_of_complement(a, b)[source]¶
Calculates the intersection ratio between the complement of the ground-truth and a probe.
This function calculates the intersection ratio between the complement of a ground-truth mask (\(A\); probably generated from an annotation) and a probe mask (\(B\)), returning the ratio of overlap when the probe is compared to the ground-truth data (the area of \(B \setminus A\), relative to \(|A|\)):
\[R(A,B) = \frac{|A^c \cap B|}{|A|}\]
So, if the probe is totally inside the ground-truth data, the output of this function is 0.0; otherwise, if the areas are exclusive for example, this function outputs a value greater than zero. The output of this function should be analyzed against the output of intersect_ratio(), which provides the complementary information about the intersection of the areas being analyzed.
Parameters:
  a (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the ground-truth object.
  b (numpy.ndarray) – A 2D numpy array with dtype bool, corresponding to the probe object that will be compared to the ground-truth.
Returns: The floating point number that corresponds to the overlap ratio between the probe area and the complement of the ground-truth area. There are no bounds for the float value on the right side: \([0, +\infty)\).
Return type: float
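Both ratios follow the same pattern as the Jaccard index, differing only in the numerator and normalizer. A minimal sketch of the two formulas (not the library's source):

```python
import numpy as np

def intersect_ratio(gt, probe):
    """|A ∩ B| / |A|: how much of the ground-truth the probe covers."""
    return np.logical_and(gt, probe).sum() / gt.sum()

def intersect_ratio_of_complement(gt, probe):
    """|A^c ∩ B| / |A|: probe area spilling outside the ground-truth,
    normalized by the ground-truth area (can exceed 1.0)."""
    return np.logical_and(~gt, probe).sum() / gt.sum()
```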
Feature Extractors¶

class bob.bio.vein.extractor.LocalBinaryPatterns(block_size=59, block_overlap=15, lbp_radius=7, lbp_neighbor_count=16, lbp_uniform=True, lbp_circular=True, lbp_rotation_invariant=False, lbp_compare_to_average=False, lbp_add_average=False, sparse_histogram=False, split_histogram=None)[source]¶
Bases: bob.bio.base.extractor.Extractor
LBP feature extractor.
Parameters fixed based on L. Mirmohamadsadeghi and A. Drygajlo, Palm vein recognition using local texture patterns, IET Biometrics, pp. 1-9, 2013.

class bob.bio.vein.extractor.MaximumCurvature(sigma=5)[source]¶
Bases: bob.bio.base.extractor.Extractor
MiuraMax feature extractor.
Based on N. Miura, A. Nagasaka, and T. Miyatake, Extraction of Finger-Vein Patterns Using Maximum Curvature Points in Image Profiles, Proceedings of the IAPR Conference on Machine Vision Applications, 9 (2005), pp. 347–350.
Parameters: sigma (int, optional) – Standard deviation for the Gaussian smoothing kernel used to denoise the input image. The width of the Gaussian kernel will be set automatically to 4x this value (in pixels).
binarise(G)[source]¶
Binarises vein images using a threshold, assuming the distribution is diphasic.
This function implements Step 3 of the paper. It binarises the 2D array G, assuming its histogram is mostly diphasic, using a median value.
Parameters: G (numpy.ndarray) – A 2-dimensional 64-bit float array containing the result of the filtering operation. G has the dimensions of the original image.
Returns: A 2-dimensional 64-bit float array with the same dimensions as the input image, but containing its vein-binarised version. The output of this function corresponds to the output of the method.
Return type: numpy.ndarray
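Median-based binarisation of this kind can be sketched in a couple of numpy lines. This is an illustration of the idea (thresholding at the median of the positive responses), not the library's exact implementation:

```python
import numpy as np

def binarise(G):
    """Threshold the filtered array at the median of its positive values,
    producing a float64 0/1 vein map."""
    threshold = np.median(G[G > 0])        # median over responding pixels only
    return (G > threshold).astype(np.float64)
```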

connect_centres(V)[source]¶
Connects vein centres by filtering vein probabilities V.
This function does the equivalent of Step 2 / Equation 4 in Miura's paper.
The operation is applied on a row from the V matrix, which may be acquired horizontally, vertically or on a diagonal direction. The pixel value is then reset in the center of a windowing operation (width = 5) with the following value:
\[b[w] = \min(\max(a[w+1], a[w+2]) + \max(a[w-1], a[w-2]))\]
Parameters: V (numpy.ndarray) – The accumulated vein centre probabilities V. This is a 2D array with 64-bit floats and is defined by Equation (3) in the paper.
Returns: A 3-dimensional 64-bit array Cd containing the result of the filtering operation for each of the directions. Cd has the dimensions of \(\kappa\) and \(V_i\). Each of the planes corresponds to the horizontal, vertical, +45° and -45° directions.
Return type: numpy.ndarray

detect_valleys(image, mask)[source]¶
Detects valleys on the image respecting the mask.
This step corresponds to Step 1-1 in the original paper. The objective is, for all 4 cross-sections (z) of the image (horizontal, vertical, +45° and -45° diagonals), to compute the following proposed valley detector, as defined in Equation 1, page 348:
\[\kappa(z) = \frac{d^2P_f(z)/dz^2}{(1 + (dP_f(z)/dz)^2)^\frac{3}{2}}\]
We start the algorithm by smoothing the image with a 2-dimensional Gaussian filter. The equation that defines the kernel for the filter is:
\[\mathcal{N}(x,y)=\frac{1}{2\pi\sigma^2}e^{-\frac{x^2+y^2}{2\sigma^2}}\]
This is done to avoid noise from the raw data (from the sensor). The maximum curvature method then requires we compute the first and second derivatives of the image for all cross-sections, as per the equation above.
We instead take the following equivalent approach:
  1. construct a Gaussian filter;
  2. take the first (dh/dx) and second (d^2h/dx^2) derivatives of the filter;
  3. calculate the first and second derivatives of the smoothed signal using the results from step 2 (the first and second derivatives of a convolved signal). This is done for all directions we're interested in: horizontal, vertical and the two diagonals.
Note: step 3 above is only possible thanks to the steerable filter property of the Gaussian kernel. See "The Design and Use of Steerable Filters" by Freeman and Adelson, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 9, September 1991.
Parameters:
  image (numpy.ndarray) – An array of 64-bit floats containing the input image.
  mask (numpy.ndarray) – An array, of the same size as image, containing a mask (booleans) indicating where the finger is in image.
Returns: A 3-dimensional array of 64-bit floats containing \(\kappa\) for all considered directions. \(\kappa\) has the same shape as image, except for the third dimension, which provides planes for the cross-section valley detections for each of the contemplated directions, in this order: horizontal, vertical, +45°, -45°.
Return type: numpy.ndarray

eval_vein_probabilities(k)[source]¶
Evaluates joint vein centre probabilities from cross-sections.
This function takes \(\kappa\) and calculates the vein centre probabilities, taking into consideration valley widths and depths. It aggregates the following steps from the paper:
  [Step 1-2] Detection of the centres of veins
  [Step 1-3] Assignment of scores to the centre positions
  [Step 1-4] Calculation of all the profiles
Once the arrays of curvatures (concavities) are calculated, here is how detection works: the code scans the image in a precise direction (vertical, horizontal, diagonal, etc.). It tries to find a concavity in that direction and measures its width (see Wr in Figure 3 of the original paper). It then identifies the centre of the concavity and assigns it a value, which depends on its width (Wr) and maximum depth (where the peak of darkness occurs) in that concavity. This value is accumulated in a variable (Vt), which is reused for all directions. Vt represents the vein probabilities from the paper.
Parameters: k (numpy.ndarray) – A 3-dimensional array of 64-bit floats containing \(\kappa\) for all considered directions. \(\kappa\) has the same shape as image, except for the third dimension, which provides planes for the cross-section valley detections for each of the contemplated directions, in this order: horizontal, vertical, +45°, -45°.
Returns: The unaccumulated vein centre probabilities V. This is a 3D array with 64-bit floats with the same dimensions as the input array k. You must accumulate (sum) over the last dimension to retrieve the variable V from the paper.
Return type: numpy.ndarray


class bob.bio.vein.extractor.NormalisedCrossCorrelation[source]¶
Bases: bob.bio.base.extractor.Extractor
Normalised Cross-Correlation feature extractor.
Based on M. Kono, H. Ueki, and S. Umemura, Near-infrared finger vein patterns for personal identification, Appl. Opt. 41(35):7429-7436, 2002.

class bob.bio.vein.extractor.PrincipalCurvature(sigma=2, threshold=1.3)[source]¶
Bases: bob.bio.base.extractor.Extractor
Principal curvature feature extractor.
Based on J.H. Choi, W. Song, T. Kim, S.R. Lee and H.C. Kim, Finger vein extraction using gradient normalization and principal curvature, Proceedings on Image Processing: Machine Vision Applications II, SPIE 7251 (2009).

class bob.bio.vein.extractor.RepeatedLineTracking(iterations=3000, r=1, profile_w=21, rescale=True, seed=0)[source]¶
Bases: bob.bio.base.extractor.Extractor
Repeated Line Tracking feature extractor.
Based on N. Miura, A. Nagasaka, and T. Miyatake, Feature extraction of finger vein patterns based on repeated line tracking and its application to personal identification, Machine Vision and Applications, Vol. 15, Num. 4, pp. 194–203, 2004.

class bob.bio.vein.extractor.WideLineDetector(radius=5, threshold=1, g=41, rescale=True)[source]¶
Bases: bob.bio.base.extractor.Extractor
Wide Line Detector feature extractor.
Based on B. Huang, Y. Dai, R. Li, D. Tang and W. Li, Finger-vein authentication based on wide line detector and pattern normalization, Proceedings of the 20th International Conference on Pattern Recognition (ICPR), 2010.
Matching Algorithms¶

class bob.bio.vein.algorithm.Correlate¶
Bases: bob.bio.base.algorithm.Algorithm
Correlates probe and model without cropping.
The method is based on the "cross-correlation" between a model and a probe image. The difference between this and MiuraMatch is that no cropping takes place in this implementation. We simply fill the excess boundary with zeros and extract the valid correlation region between the probe and the model using skimage.feature.match_template().

score(model, probe)[source]¶
Computes the score between the probe and the model.
Parameters:
  model (numpy.ndarray) – The model of the user to test the probe against.
  probe (numpy.ndarray) – The probe to test.
Returns: Value between 0 and 0.5; a larger value means a better match.
Return type: float


class bob.bio.vein.algorithm.HammingDistance¶
Bases: bob.bio.base.algorithm.Distance
This class calculates the Hamming distance between two binary images.
The enrollment and scoring functions of this class are implemented by its base class, bob.bio.base.algorithm.Distance. The input to this function should be of binary nature (boolean arrays). Each binary input is first flattened to form a one-dimensional vector. The Hamming distance is then calculated between these two binary vectors.
The current implementation uses scipy.spatial.distance.hamming(), which returns a scalar 64-bit float representing the proportion of mismatching corresponding bits between the two binary vectors.
The base class constructor parameter is_distance_function is set to False on purpose, to ensure that calculated distances are returned as positive values rather than negative.
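The flatten-then-compare operation computed by scipy.spatial.distance.hamming() can be reproduced in plain numpy; a minimal sketch (the name `hamming_distance` is illustrative):

```python
import numpy as np

def hamming_distance(a, b):
    """Fraction of mismatching bits between two flattened binary images."""
    a, b = np.ravel(a), np.ravel(b)
    return np.mean(a != b)   # 0.0 for identical inputs, 1.0 for full mismatch
```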

class bob.bio.vein.algorithm.MiuraMatch(ch=80, cw=90)¶
Bases: bob.bio.base.algorithm.Algorithm
Finger vein matching: match ratio via cross-correlation.
The method is based on the "cross-correlation" between a model and a probe image. It convolves the binary image(s) representing the model with the binary image representing the probe (rotated by 180 degrees) and evaluates how they cross-correlate. If the model and probe are very similar, the output of the correlation approaches a maximum at a single point. The value is then normalized by the sum of the pixels lit in both binary images. Therefore, the output of this method is a floating-point number in the range \([0, 0.5]\); the higher, the better the match.
In case the model and probe represent images from the same vein structure but are misaligned, the output is not guaranteed to be accurate. To mitigate this, Miura et al. proposed to add a small cropping factor to the model image, assuming not much information is available on the borders (ch for the vertical direction and cw for the horizontal direction). This allows the convolution to search different areas in the probe image. The maximum value is then taken from the resulting operation. The convolution result is normalized by the pixels lit in both the cropped model image and the matching pixels on the probe that yield the maximum on the resulting convolution.
For this to work properly, input images are supposed to be binary in nature, with zeros and ones.
Based on N. Miura, A. Nagasaka, and T. Miyatake, Feature extraction of finger vein patterns based on repeated line tracking and its application to personal identification, Machine Vision and Applications, Vol. 15, Num. 4, pp. 194–203, 2004.
Parameters:
  ch (int, optional) – Cropping factor for the vertical direction.
  cw (int, optional) – Cropping factor for the horizontal direction.

score(model, probe)[source]¶
Computes the score between the probe and the model.
Parameters:
  model (numpy.ndarray) – The model of the user to test the probe against.
  probe (numpy.ndarray) – The probe to test.
Returns: Value between 0 and 0.5; a larger value means a better match.
Return type: float
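The crop-slide-normalize procedure described above can be sketched as a brute-force search in pure numpy. This is an illustration of the match ratio only, not the library's implementation (which uses FFT-based convolution), and `miura_match` is an illustrative name:

```python
import numpy as np

def miura_match(model, probe, ch=8, cw=9):
    """Crop the model by (ch, cw) on each side, slide the cropped window over
    the probe, and normalize the best overlap count by the pixels lit in
    both windows.  Returns a value in [0, 0.5]; 0.5 is a perfect match."""
    h, w = model.shape
    crop = model[ch:h - ch, cw:w - cw]
    best = 0.0
    for dy in range(probe.shape[0] - crop.shape[0] + 1):
        for dx in range(probe.shape[1] - crop.shape[1] + 1):
            window = probe[dy:dy + crop.shape[0], dx:dx + crop.shape[1]]
            overlap = np.logical_and(crop, window).sum()
            # lit pixels in both images bound the score to 0.5 from above
            best = max(best, overlap / (crop.sum() + window.sum()))
    return best
```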
