bob.ip.binseg.models.losses

Loss implementations

Classes

MixJacLoss([lambda_u, jacalpha, ...])

lambda_u determines the weighting of SoftJaccard and BCE.

MultiSoftJaccardBCELogitsLoss([alpha])

Implements Equation 3 in [IGLOVIKOV-2018] for the multi-output networks such as HED or Little W-Net.

MultiWeightedBCELogitsLoss()

Weighted Binary Cross-Entropy Loss for multi-layered inputs (e.g. for Holistically-Nested Edge Detection in [XIE-2015]).

SoftJaccardBCELogitsLoss([alpha])

Implements the generalized loss function of Equation (3) in [IGLOVIKOV-2018], with J being the Jaccard distance, and H, the Binary Cross-Entropy Loss:

WeightedBCELogitsLoss()

Calculates the sum of the weighted binary cross-entropy loss.

class bob.ip.binseg.models.losses.WeightedBCELogitsLoss[source]

Bases: _Loss

Calculates the sum of the weighted binary cross-entropy loss.

Implements Equation 1 in [MANINIS-2016]. The weight depends on the current proportion between negatives and positives in the ground-truth sample being analyzed.

forward(input, target, mask)[source]
Parameters
  • input (torch.Tensor) – Value produced by the model to be evaluated, with the shape [n, c, h, w]

  • target (torch.Tensor) – Ground-truth information with the shape [n, c, h, w]

  • mask (torch.Tensor) – Mask to be used for specifying the region of interest where to compute the loss, with the shape [n, c, h, w]

Returns

loss – The average loss for all input data

Return type

torch.Tensor
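
A minimal usage sketch (the batch size, shapes, and the all-ones mask below are illustrative assumptions, not requirements of the API):

import torch

from bob.ip.binseg.models.losses import WeightedBCELogitsLoss

criterion = WeightedBCELogitsLoss()
logits = torch.randn(4, 1, 64, 64)                 # model output, shape [n, c, h, w]
target = (torch.rand(4, 1, 64, 64) > 0.5).float()  # binary ground-truth, same shape
mask = torch.ones(4, 1, 64, 64)                    # all-ones mask: use every pixel
loss = criterion(logits, target, mask)             # scalar torch.Tensor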

class bob.ip.binseg.models.losses.SoftJaccardBCELogitsLoss(alpha=0.7)[source]

Bases: _Loss

Implements the generalized loss function of Equation (3) in [IGLOVIKOV-2018], with J being the Jaccard distance, and H, the Binary Cross-Entropy Loss:

\[L = \alpha H + (1 - \alpha)(1 - J)\]

Our implementation is based on torch.nn.BCEWithLogitsLoss.

alpha

determines the weighting of J and H. Default: 0.7

Type

float

forward(input, target, mask)[source]
Parameters
  • input (torch.Tensor) – Value produced by the model to be evaluated, with the shape [n, c, h, w]

  • target (torch.Tensor) – Ground-truth information with the shape [n, c, h, w]

  • mask (torch.Tensor) – Mask to be used for specifying the region of interest where to compute the loss, with the shape [n, c, h, w]

Returns

loss – Loss, in a single entry

Return type

torch.Tensor
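
A minimal usage sketch, assuming a single-channel output; lowering alpha shifts weight from the BCE term H towards the Jaccard term (1 - J):

import torch

from bob.ip.binseg.models.losses import SoftJaccardBCELogitsLoss

criterion = SoftJaccardBCELogitsLoss(alpha=0.7)    # alpha weighs H against (1 - J)
logits = torch.randn(4, 1, 64, 64)                 # model output, shape [n, c, h, w]
target = (torch.rand(4, 1, 64, 64) > 0.5).float()  # binary ground-truth
mask = torch.ones(4, 1, 64, 64)                    # region of interest: everywhere
loss = criterion(logits, target, mask)             # loss, in a single entry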

class bob.ip.binseg.models.losses.MultiWeightedBCELogitsLoss[source]

Bases: WeightedBCELogitsLoss

Weighted Binary Cross-Entropy Loss for multi-layered inputs (e.g. for Holistically-Nested Edge Detection in [XIE-2015]).

forward(input, target, mask)[source]
Parameters
  • input (iterable over torch.Tensor) – Value produced by the model to be evaluated, with the shape [L, n, c, h, w]

  • target (torch.Tensor) – Ground-truth information with the shape [n, c, h, w]

  • mask (torch.Tensor) – Mask to be used for specifying the region of interest where to compute the loss, with the shape [n, c, h, w]

Returns

loss – The average loss for all input data

Return type

torch.Tensor
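
A minimal sketch, assuming five HED-style side outputs (the number of outputs and the shapes are illustrative only):

import torch

from bob.ip.binseg.models.losses import MultiWeightedBCELogitsLoss

criterion = MultiWeightedBCELogitsLoss()
# an iterable over L tensors, each with shape [n, c, h, w]
side_outputs = [torch.randn(4, 1, 64, 64) for _ in range(5)]
target = (torch.rand(4, 1, 64, 64) > 0.5).float()
mask = torch.ones(4, 1, 64, 64)
loss = criterion(side_outputs, target, mask)  # average over all L outputs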

class bob.ip.binseg.models.losses.MultiSoftJaccardBCELogitsLoss(alpha=0.7)[source]

Bases: SoftJaccardBCELogitsLoss

Implements Equation 3 in [IGLOVIKOV-2018] for the multi-output networks such as HED or Little W-Net.

alpha

determines the weighting of SoftJaccard and BCE. Default: 0.7

Type

float

forward(input, target, mask)[source]
Parameters
  • input (iterable over torch.Tensor) – Value produced by the model to be evaluated, with the shape [L, n, c, h, w]

  • target (torch.Tensor) – Ground-truth information with the shape [n, c, h, w]

  • mask (torch.Tensor) – Mask to be used for specifying the region of interest where to compute the loss, with the shape [n, c, h, w]

Returns

loss – The average loss for all input data

Return type

torch.Tensor
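
A minimal sketch, analogous to the example above but with the combined Jaccard/BCE criterion (shapes and the number of outputs are illustrative):

import torch

from bob.ip.binseg.models.losses import MultiSoftJaccardBCELogitsLoss

criterion = MultiSoftJaccardBCELogitsLoss(alpha=0.7)
# an iterable over L tensors, each with shape [n, c, h, w]
outputs = [torch.randn(4, 1, 64, 64) for _ in range(5)]
target = (torch.rand(4, 1, 64, 64) > 0.5).float()
mask = torch.ones(4, 1, 64, 64)
loss = criterion(outputs, target, mask)  # average over all L outputs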

class bob.ip.binseg.models.losses.MixJacLoss(lambda_u=100, jacalpha=0.7, size_average=None, reduce=None, reduction='mean', pos_weight=None)[source]

Bases: _Loss

Parameters

lambda_u (int) – determines the weighting of SoftJaccard and BCE.

forward(input, target, unlabeled_input, unlabeled_target, ramp_up_factor)[source]
Parameters
  • input (torch.Tensor) – Value produced by the model for the labeled data, with the shape [n, c, h, w]

  • target (torch.Tensor) – Ground-truth information for the labeled data, with the shape [n, c, h, w]

  • unlabeled_input (torch.Tensor) – Value produced by the model for the unlabeled data, with the shape [n, c, h, w]

  • unlabeled_target (torch.Tensor) – Target information for the unlabeled data, with the shape [n, c, h, w]

  • ramp_up_factor (float) – Factor weighing the contribution of the unlabeled-data term

Return type

list
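
A minimal semi-supervised sketch; the thresholded pseudo-targets for the unlabeled batch and the ramp_up_factor value of 0.5 are illustrative assumptions only:

import torch

from bob.ip.binseg.models.losses import MixJacLoss

criterion = MixJacLoss(lambda_u=100, jacalpha=0.7)
labeled = torch.randn(4, 1, 64, 64)                        # labeled-batch logits
labeled_target = (torch.rand(4, 1, 64, 64) > 0.5).float()  # binary ground-truth
unlabeled = torch.randn(4, 1, 64, 64)                      # unlabeled-batch logits
# e.g. pseudo-labels thresholded from the model's own predictions
unlabeled_target = (torch.sigmoid(unlabeled) > 0.5).float()
losses = criterion(labeled, labeled_target, unlabeled, unlabeled_target, 0.5)  # a list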