mednet.config.models.densenet_pretrained
DenseNet model, pre-trained on ImageNet, to be fine-tuned.
This configuration contains a version of DenseNet (cf. TorchVision's DenseNet page), modified for a variable number of outputs (defaults to 1).
N.B.: The output layer is always initialized from scratch.
from torch.nn import BCEWithLogitsLoss
from torch.optim import Adam
from mednet.data.augmentations import ElasticDeformation
from mednet.models.densenet import Densenet
# Assemble the fine-tuning model: binary cross-entropy losses for training
# and validation, Adam optimizer, light elastic-deformation augmentation,
# an ImageNet-pretrained backbone, and 10% dropout.
model = Densenet(
    train_loss=BCEWithLogitsLoss(),
    validation_loss=BCEWithLogitsLoss(),
    optimizer_type=Adam,
    optimizer_arguments=dict(lr=0.0001),
    augmentation_transforms=[ElasticDeformation(p=0.2)],
    pretrained=True,
    dropout=0.1,
)
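For reference, the behaviour described above (pre-trained backbone with a freshly initialized output layer) can be sketched directly with TorchVision. The densenet121 variant and the single-output head below are illustrative assumptions, not mednet's exact implementation.

import torch
from torchvision import models

# Minimal sketch of what this configuration does conceptually: load an
# ImageNet-pretrained DenseNet and replace its classifier, so the output
# layer is always trained from scratch.
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Linear(
    backbone.classifier.in_features,  # feature size of the DenseNet trunk
    1,  # variable number of outputs; defaults to 1 in this configuration
)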