# Metrics

## Module utilities

- `cmc(cmc_scores)`: Calculates the cumulative match characteristic (CMC) from the given input.
- `correctly_classified_negatives(...)`: Returns a blitz::Array of booleans that pin-point which negatives were correctly classified in a 'negative' score sample, given a threshold: for each element `k`, `returnValue[k]` is true if `negatives[k] < threshold`, and false otherwise.
- `correctly_classified_positives(...)`: Returns a blitz::Array of booleans that pin-point which positives were correctly classified in a 'positive' score sample, given a threshold: for each element `k`, `returnValue[k]` is true if `positives[k] >= threshold`, and false otherwise.
- `det((object)negatives, (object)positives, ...)`: Calculates the DET curve given a set of positive and negative scores and a desired number of points. Returns a two-dimensional blitz::Array of doubles holding the curve coordinates on its rows.
- `eer_rocch((object)negatives, ...)`: Calculates the equal error rate (EER) on the ROC convex hull, as done in the Bosaris toolkit (https://sites.google.com/site/bosaristoolkit/).
- `eer_threshold((object)negatives, ...)`: Calculates the threshold that is as close as possible to the equal error rate (EER), the point where the FAR equals the FRR. Graphically, this is the intersection between the ROC (or DET) curve and the identity line.
- `epc((object)dev_negatives, ...)`: Calculates the EPC curve given a set of positive and negative scores and a desired number of points. Returns a two-dimensional blitz::Array of doubles with the X (cost) and Y (HTER on the test set at the minimum-HTER threshold from the development set) coordinates, in this order. Note that calculating the EPC curve requires two sets of data: a development set and a test set. The minimum weighted error is found on the development set, and the resulting threshold is applied to the test set to evaluate the half-total error rate at that point.
- `f_score((object)negatives, ...)`: Computes the F-score of the classification accuracy, a weighted mean of precision and recall. The weight parameter must be a non-negative real value; with a weight of 1, the measure is the F1 score, the harmonic mean of precision and recall.
- `far_threshold((object)negatives, ...)`: Computes the threshold such that the real FAR is at least the requested `far_value`.
- `farfrr((object)negatives, ...)`: Calculates the FA ratio and the FR ratio given positive and negative scores and a threshold. `positives` holds the scores of samples labelled as belonging to a certain class (a.k.a. 'signal' or 'client'); `negatives` holds the scores of samples labelled as not belonging to that class (a.k.a. 'noise' or 'impostor').
- `frr_threshold((object)negatives, ...)`: Computes the threshold such that the real FRR is at least the requested `frr_value`.
- `min_hter_threshold((object)negatives, ...)`: Calculates `min_weighted_error_rate_threshold()` with a cost of 0.5.
- `min_weighted_error_rate_threshold(...)`: Calculates the threshold that minimizes the weighted error rate, given the input data.
- `mse(estimation, target)`: Calculates the mean square error between a set of outputs and target values, i.e. the mean of the squared differences between corresponding elements.
- `ppndf((float)value) -> float`: Returns the deviate-scale equivalent of a false rejection/acceptance ratio.
- `precision_recall((object)negatives, ...)`: Calculates the precision and recall (sensitivity) values given positive and negative scores and a threshold.
- `precision_recall_curve((object)negatives, ...)`: Calculates the precision-recall curve given a set of positive and negative scores and a number of desired points.
- `recognition_rate(cmc_scores)`: Calculates the recognition rate from the given input, which is identical to the rank-1 CMC value.
- `relevance(input, machine)`: Calculates the relevance of every input feature to the estimation process.
- `rmse(estimation, target)`: Calculates the root mean square error between a set of outputs and target values, i.e. the square root of the mean square error.
- `roc((object)negatives, (object)positives, ...)`: Calculates the ROC curve given a set of positive and negative scores and a desired number of points. Returns a two-dimensional blitz::Array of doubles with the X (FAR) and Y (FRR) coordinates, in this order. The points at which the ROC curve is calculated are distributed uniformly in the range [min(negatives, positives), max(negatives, positives)].
- `roc_for_far((object)negatives, ...)`: Calculates the ROC curve given a set of positive and negative scores and the FAR values for which the CAR should be computed. The resulting curve holds a copy of the given FAR values (row 0) and the corresponding FRR values (row 1).
- `rocch((object)negatives, ...)`: Calculates the ROC convex hull curve given a set of positive and negative scores. Returns a two-dimensional blitz::Array of doubles with the X (FAR) and Y (FRR) coordinates, in this order.
- `rocch2eer((object)pfa_pmiss) -> float`: Calculates the equal error rate (EER) from the given (Pfa, Pmiss) points of the ROC convex hull.
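To make the threshold semantics above concrete, here is a minimal pure-Python sketch of the `correctly_classified_negatives`/`correctly_classified_positives` rules and the FA/FR rates that `farfrr` derives from them. This is an illustrative reimplementation of the documented behaviour, not the library's actual (C++/blitz-backed) code, and it uses plain lists instead of blitz arrays.

```python
def correctly_classified_negatives(negatives, threshold):
    # A negative is correctly classified when its score is below the threshold.
    return [score < threshold for score in negatives]

def correctly_classified_positives(positives, threshold):
    # A positive is correctly classified when its score reaches the threshold.
    return [score >= threshold for score in positives]

def farfrr(negatives, positives, threshold):
    # FAR: fraction of negatives wrongly accepted (score >= threshold).
    # FRR: fraction of positives wrongly rejected (score < threshold).
    far = sum(not ok for ok in correctly_classified_negatives(negatives, threshold)) / len(negatives)
    frr = sum(not ok for ok in correctly_classified_positives(positives, threshold)) / len(positives)
    return far, frr

negatives = [0.1, 0.3, 0.4, 0.6]   # impostor scores
positives = [0.5, 0.7, 0.8, 0.9]   # client scores
print(farfrr(negatives, positives, 0.5))  # (0.25, 0.0)
```

Sweeping the threshold over the score range and collecting the resulting (FAR, FRR) pairs is exactly how curves such as `roc` are built.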

## Plotting

- `cmc(cmc_scores[, logx])`: Plots the (cumulative) match characteristic curve and returns the maximum rank.
- `det(negatives, positives[, npoints, ...])`: Plots the Detection Error Trade-off (DET) curve as defined in the original paper.
- `det_axis(v, **kwargs)`: Sets the axes of a DET plot.
- `epc(dev_negatives, dev_positives, ...[, npoints])`: Plots the Expected Performance Curve (EPC) as defined in the original paper.
- `precision_recall_curve(negatives, positives)`: Plots the precision-recall curve.
- `roc(negatives, positives[, npoints, CAR])`: Plots the Receiver Operating Characteristic (ROC) curve.
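The characteristic warped axes of a DET plot come from mapping error rates onto the normal deviate scale, which is what `ppndf` computes. A stdlib-only sketch of that transform is below; the exact numerics (e.g. how edge values of 0 or 1 are clamped) are an assumption here, and the library's implementation may differ in those details.

```python
from statistics import NormalDist

def ppndf(p):
    # Map a probability (e.g. a FAR or FRR value) to the normal deviate
    # scale: the inverse CDF (quantile function) of the standard normal.
    return NormalDist().inv_cdf(p)

# A 50% error rate maps to 0 on the deviate scale;
# smaller rates map to negative deviates.
print(ppndf(0.5))
```

Applying this transform to both coordinates of a ROC curve yields the DET curve, which is why `det_axis` labels the axes with probabilities while the underlying plot coordinates are deviates.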