Li’s CVPR14 Python API

Signal extraction

bob.rppg.cvpr14.extract_utils.kp66_to_mask(image, keypoints[, indent][, plot]) → mask, mask_points[source]

This function builds a mask on the lower part of the face.

The mask is built using selected keypoints retrieved by a Discriminative Response Map Fitting (DRMF) algorithm. Note that the DRMF is not implemented here, and that the keypoints are loaded from file (and are not provided in the package).

Note also that this function is explicitly made for the keypoints set generated by the Matlab software downloaded from http://ibug.doc.ic.ac.uk/resources/drmf-matlab-code-cvpr-2013/

If you decide to use another keypoint detector, you may need to rewrite a function to build the mask from your keypoints.

Parameters

image (3d numpy array):
The current frame.
keypoints (2d numpy array 66x2):
The set of 66 keypoints retrieved by DRMF.
indent ([Optional] int):
The distance, expressed as a percentage of the face width, by which selected keypoints are shifted towards the inside of the face to build the mask. The face width is defined as the distance between the two keypoints located on the right and left edges of the face, at eye height. Defaults to 10.
plot ([Optional] boolean):
If set to True, plots the current face with the selected keypoints and the built mask. Defaults to False.

Returns

mask (2d numpy boolean array):
A boolean array of the size of the original image, where the region corresponding to the mask is True.
mask_points (list of tuples, 9x2):
The points corresponding to vertices of the mask.
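
Example (a minimal sketch, not part of the package): it assumes a Bob-style frame of shape (channels, height, width) and a hypothetical text file holding the 66 DRMF keypoints produced offline with the Matlab code.

    import numpy
    from bob.rppg.cvpr14.extract_utils import kp66_to_mask

    # Hypothetical inputs: one video frame (assumed to be in Bob's
    # (channels, height, width) layout) and its 66 DRMF keypoints,
    # loaded from an illustrative file path.
    frame = numpy.random.randint(0, 256, (3, 480, 640)).astype('uint8')
    keypoints = numpy.loadtxt('drmf-keypoints/frame_000.txt')  # expected shape (66, 2)

    # Build the lower-face mask, shrinking it by 10% of the face width.
    mask, mask_points = kp66_to_mask(frame, keypoints, indent=10, plot=False)
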
bob.rppg.cvpr14.extract_utils.get_mask(image, mask_points) → mask[source]

This function returns a boolean array where the mask is True.

It turns mask points into a region of interest and returns the corresponding boolean array, of the same size as the image. Taken from https://github.com/jdoepfert/roipoly.py/blob/master/roipoly.py

Parameters

image (3d numpy array):
The current frame.
mask_points (list of tuples, 9x2):
The points corresponding to vertices of the mask.

Returns

mask (2d numpy boolean array):
A boolean array of the size of the original image, where the region corresponding to the mask is True.
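
Example (a sketch, reusing the hypothetical frame and mask_points from the kp66_to_mask example above):

    from bob.rppg.cvpr14.extract_utils import get_mask

    # Boolean array with the same height and width as the frame,
    # True inside the polygon defined by mask_points.
    mask = get_mask(frame, mask_points)
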
bob.rppg.cvpr14.extract_utils.get_good_features_to_track(face, npoints[, quality][, min_distance][, plot]) → corners[source]

This function applies the OpenCV function “good features to track”.

Parameters

face (3d numpy array):
The cropped face image.
npoints (int):
The maximum number of strong corners to detect.
quality ([Optional] float):
The minimum relative quality of the detected corners. Note that increasing this value decreases the number of detected corners. Defaults to 0.01.
min_distance ([Optional] int):
The minimum Euclidean distance between detected corners. Defaults to 10.
plot ([Optional] boolean):
If set to True, plots the currently selected features to track. Defaults to False.

Returns

corners (numpy array of dim (npoints, 1, 2)):
The detected strong corners.
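
Example (a sketch with a synthetic image; in the real pipeline the cropped face comes from a face detector, which is outside this module):

    import numpy
    from bob.rppg.cvpr14.extract_utils import get_good_features_to_track

    # Hypothetical cropped face, (channels, height, width).
    face = numpy.random.randint(0, 256, (3, 200, 200)).astype('uint8')

    # Detect at most 40 strong corners, at least 10 pixels apart.
    corners = get_good_features_to_track(face, npoints=40,
                                         quality=0.01, min_distance=10)
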
bob.rppg.cvpr14.extract_utils.track_features(previous, current, previous_points[, plot]) → current_points[source]

This function projects the features from the previous frame in the current frame.

Parameters

previous (3d numpy array):
the previous frame.
current (3d numpy array):
the current frame.
previous_points (numpy array of dim (npoints, 1, 2)):
The set of keypoints to track (in the previous frame).
plot ([Optional] boolean):
If set to True, plots the keypoints projected onto the current frame. Defaults to False.

Returns

current_points (numpy array of dim (npoints, 1, 2)):
The set of keypoints in the current frame.
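
Example (a sketch with two synthetic consecutive frames; the parameter values are illustrative):

    import numpy
    from bob.rppg.cvpr14.extract_utils import (get_good_features_to_track,
                                               track_features)

    # Two hypothetical consecutive frames, (channels, height, width).
    previous_frame = numpy.random.randint(0, 256, (3, 480, 640)).astype('uint8')
    current_frame = numpy.random.randint(0, 256, (3, 480, 640)).astype('uint8')

    # Detect corners on the previous frame, then project them onto the current one.
    previous_corners = get_good_features_to_track(previous_frame, npoints=40)
    current_corners = track_features(previous_frame, current_frame, previous_corners)
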
bob.rppg.cvpr14.extract_utils.find_transformation(previous_points, current_points) → transformation_matrix[source]

This function finds the transformation matrix from previous points to current points.

The transformation matrix is found using OpenCV’s estimateRigidTransform (fancier alternatives were tried, but are not as stable).

Parameters

previous_points (numpy array):
The set of ‘starting’ 2d points.
current_points (numpy array):
The set of ‘destination’ 2d points.

Returns

transformation_matrix (numpy array of dim (3,2)):
The affine transformation matrix between the two sets of points.
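
Example (continuing the track_features sketch above):

    from bob.rppg.cvpr14.extract_utils import find_transformation

    # Affine transformation mapping the previous corner positions
    # onto the current ones.
    transfo = find_transformation(previous_corners, current_corners)
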
bob.rppg.cvpr14.extract_utils.get_current_mask_points(previous_mask_points, transfo_matrix) → current_mask_points[source]

This projects the previous mask points to get the current mask.

Parameters

previous_mask_points (numpy array):
The points forming the mask in the previous frame.
transfo_matrix (numpy array (3x2)):
The affine transformation matrix between the two sets of points.

Returns

current_mask_points (numpy array):
The points forming the mask in the current frame.
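
Example (continuing the sketches above; together with find_transformation and get_mask, this is what makes the mask follow the face from frame to frame):

    from bob.rppg.cvpr14.extract_utils import get_current_mask_points, get_mask

    # Move the mask vertices according to the estimated motion,
    # then rebuild the boolean mask on the current frame.
    current_mask_points = get_current_mask_points(mask_points, transfo)
    current_mask = get_mask(current_frame, current_mask_points)
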
bob.rppg.cvpr14.extract_utils.compute_average_colors_mask(image, mask[, plot]) → green_color[source]

This function computes the average green color within a given mask.

Parameters

image (3d numpy array):
The image containing the face.
mask (2d numpy boolean array):
A boolean array of the size of the original image, where the region corresponding to the mask is True.
plot ([Optional] boolean):
Plot the mask as an overlay on the original image. Defaults to False.

Returns

color (float):
The average green color inside the mask ROI.
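
Example (a sketch, reusing the frame and mask from the kp66_to_mask example; in the pipeline, one such value is computed per frame to form the raw green-channel signal):

    from bob.rppg.cvpr14.extract_utils import compute_average_colors_mask

    # One sample of the green-channel trace for this frame.
    green_value = compute_average_colors_mask(frame, mask)
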
bob.rppg.cvpr14.extract_utils.compute_average_colors_wholeface(image[, plot]) → green_color[source]

This function computes the average green color within the provided face image.

Parameters

image (3d numpy array):
The cropped face image.
plot ([Optional] boolean):
If set to True, plots the face image used for the computation. Defaults to False.

Returns

color (float):
The average green color inside the face.
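
Example (a sketch, reusing the hypothetical cropped face from the get_good_features_to_track example):

    from bob.rppg.cvpr14.extract_utils import compute_average_colors_wholeface

    # Average green value over the whole cropped face (no mask involved).
    green_value = compute_average_colors_wholeface(face)
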

Illumination rectification

bob.rppg.cvpr14.illum_utils.rectify_illumination(face_color, bg_color, step, length) → rectified color[source]

This function performs the illumination rectification.

The correction is made on the face green values using the background green values, so as to remove global illumination variations in the face green color signal.

Parameters

face_color (1d numpy array):
The mean green value of the face across the video sequence.
bg_color (1d numpy array):
The mean green value of the background across the video sequence.
step (float):
Step size in the filter’s weight adaptation.
length (int):
Length of the filter.

Returns

rectified color (1d numpy array):
The mean green values of the face, corrected for illumination variations.
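
Example (a minimal sketch with synthetic signals; the step and length values are illustrative, not recommendations):

    import numpy
    from bob.rppg.cvpr14.illum_utils import rectify_illumination

    # Synthetic per-frame green traces for the face and the background.
    face_color = numpy.random.rand(500)
    bg_color = numpy.random.rand(500)

    # Remove the background-driven illumination component from the face trace.
    rectified = rectify_illumination(face_color, bg_color, step=0.05, length=15)
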
bob.rppg.cvpr14.illum_utils.nlms(signal, desired_signal, n_filter_taps, step[, initCoeffs][, adapt]) → y, e, w[source]

Normalized least mean square filter.

Based on adaptfilt 0.2: https://pypi.python.org/pypi/adaptfilt/0.2

Parameters

signal (1d numpy array):
The signal to be filtered.
desired_signal (1d numpy array):
The target signal.
n_filter_taps (int):
The number of filter taps (related to the filter order).
step (float):
Adaptation step for the filter weights.
initCoeffs ([Optional] numpy array (1, n_filter_taps)):
Initial values for the weights. Defaults to zero.
adapt ([Optional] boolean):
If True, the filter weights are adapted; if False, the signal is only filtered (the weights are not updated). Defaults to True.

Returns

y (1d numpy array):
The filtered signal.
e (1d numpy array):
The error signal (the difference between the filtered and the desired signal).
w (numpy array (1, n_filter_taps)):
The found weights of the filter.
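
Example (a minimal sketch with synthetic signals; the parameter values are illustrative):

    import numpy
    from bob.rppg.cvpr14.illum_utils import nlms

    signal = numpy.random.rand(500)          # signal to be filtered
    desired_signal = numpy.random.rand(500)  # target signal

    # y: filtered signal, e: error signal, w: final filter weights.
    y, e, w = nlms(signal, desired_signal, n_filter_taps=15, step=0.05)
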

Motion correction

bob.rppg.cvpr14.motion_utils.build_segments(signal, length) → segments, end_index[source]

Builds an array containing segments of the signal.

The signal is divided into segments of provided length (no overlap) and the different segments are stacked.

Parameters

signal (1d numpy array):
The signal to be processed.
length (int):
The length of the segments.

Returns

segments (2d numpy array (n_segments, length)):
The segments composing the signal.
end_index (int):
The length of the signal (a trailing part shorter than one segment may remain at the end of the signal and is discarded).
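
Example (a sketch with a synthetic signal; the segment length is illustrative):

    import numpy
    from bob.rppg.cvpr14.motion_utils import build_segments

    signal = numpy.random.rand(500)

    # 8 non-overlapping segments of 61 samples; the remaining 12 samples are dropped.
    segments, end_index = build_segments(signal, length=61)
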
bob.rppg.cvpr14.motion_utils.prune_segments(segments, threshold) → pruned_segments, gaps, cut_index[source]

Removes segments.

Segments are removed if their standard deviation is higher than the provided threshold.

Parameters

segments (2d numpy array):
The set of segments.
threshold (float):
Threshold on the standard deviation.

Returns

pruned_segments (2d numpy array):
The set of “stable” segments.
gaps (list of booleans, one per retained segment):
Boolean list that tells if a gap should be accounted for when building the final signal.
cut_index (list of tuples):
Contains the start and end index of each removed segment. Used for plotting purposes.
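
Example (continuing the build_segments sketch above; the threshold value is illustrative):

    from bob.rppg.cvpr14.motion_utils import prune_segments

    # Keep only segments whose standard deviation is below the threshold.
    pruned, gaps, cut_index = prune_segments(segments, threshold=0.01)
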
bob.rppg.cvpr14.motion_utils.build_final_signal(segments, gaps) → final_signal[source]

Builds the final signal with remaining segments.

Parameters

segments (2d numpy array):
The set of remaining segments.
gaps (list):
Boolean list that tells if a gap should be accounted for when building the final signal.

Returns

final_signal (1d numpy array):
The final signal.
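
Example (continuing the prune_segments sketch above):

    from bob.rppg.cvpr14.motion_utils import build_final_signal

    # Stitch the retained segments back into a single 1d signal,
    # taking the gaps left by removed segments into account.
    final_signal = build_final_signal(pruned, gaps)
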
bob.rppg.cvpr14.motion_utils.build_final_signal_cvpr14(segments, gaps) → final_signal[source]

Warning

This function contains a bug!

Builds the final signal, reproducing the bug found in the code provided by the authors of [li-cvpr-2014]. The bug is in the ‘collage’ of the remaining segments: the gap is not always properly accounted for.

Parameters

segments (2d numpy array):
The set of remaining segments.
gaps (list):
Boolean list that tells if a gap should be accounted for when building the final signal.

Returns

final_signal (1d numpy array):
The final signal.

Filtering

bob.rppg.cvpr14.filter_utils.detrend(signal, Lambda) → filtered_signal[source]

This function applies a detrending filter.

This code is based on the article “An advanced detrending method with application to HRV analysis”, Tarvainen et al., IEEE Transactions on Biomedical Engineering, 2002.

Parameters

signal (1d numpy array):
The signal where you want to remove the trend.
Lambda (int):
The smoothing parameter.

Returns

filtered_signal (1d numpy array):
The detrended signal.
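
Example (a minimal sketch on a synthetic signal; the Lambda value is illustrative, and larger values make the estimated (and removed) trend smoother):

    import numpy
    from bob.rppg.cvpr14.filter_utils import detrend

    signal = numpy.random.rand(500)

    # Remove the slowly varying trend from the signal.
    detrended = detrend(signal, Lambda=300)
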
bob.rppg.cvpr14.filter_utils.average(signal, window_size) → filtered_signal[source]

Moving average filter.

Parameters

signal (1d numpy array):
The signal to filter.
window_size (int):
The size of the window to compute the average.

Returns

filtered_signal (1d numpy array):
The averaged signal.
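
Example (a minimal sketch on a synthetic signal; the window size is illustrative):

    import numpy
    from bob.rppg.cvpr14.filter_utils import average

    signal = numpy.random.rand(500)

    # Smooth the signal with a simple moving average.
    filtered_signal = average(signal, window_size=21)
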