Usage

This package supports a fully reproducible research experimentation cycle for semantic binary segmentation, comprising the following activities:

  • Training: Images are fed to a Fully Convolutional Deep Neural Network (FCN), which is trained, via error back-propagation, to reconstruct the provided annotations (pre-segmented binary maps). The objective of this phase is to produce an FCN model.

  • Inference (prediction): The trained FCN is used to generate vessel map predictions.

  • Evaluation: Vessel map predictions are used to evaluate FCN performance against provided annotations, or to visualize prediction results overlaid on the original raw images.

  • Comparison: Evaluation results are used to compare the performance of different systems, or to assess the statistical significance of the difference between the results of two systems on the same dataset.

We provide command-line interfaces (CLI) that implement each of the phases above, as well as command aggregators that can run all of the phases at once. Both interfaces are configurable using Bob’s extensible configuration framework: each command-line option may be provided as a variable with the same name in a Python file, and each file may combine any number of variables that are pertinent to an application.

Tip

For reproducibility, we recommend you stick to configuration files when parameterizing our CLI. Note that some of the options in the CLI interface (e.g. --dataset) cannot be passed via the actual command line, as they may require complex Python types that cannot be synthesized from a single string parameter.
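To make the idea concrete, here is a minimal sketch of such a configuration file. All variable names and values below are illustrative assumptions, not the package's real option names: in practice, each variable must match a CLI option of the same name (check the command's --help output).

```python
# my_experiment.py -- hypothetical configuration file for the CLI.
# Each variable mirrors a command-line option of the same name; the
# specific names used here (output_folder, epochs, batch_size, dataset)
# are examples only.

# simple options map directly to scalar variables
output_folder = "results/my-experiment"
epochs = 1000
batch_size = 8

# complex options such as --dataset may hold arbitrary Python objects,
# which is why they cannot be expressed on the command line itself
dataset = {
    "train": ["img-001.png", "img-002.png"],  # placeholder sample list
    "test": ["img-003.png"],
}
```

The CLI would then be pointed at this file instead of (or in addition to) individual command-line flags, keeping the full parameterization of an experiment in one versionable place.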

The following flowchart represents the various experiment phases and the output results that can be produced by each of our CLI interfaces (rounded white rectangles). Processing subproducts (marked in blue) are stored on disk by the end of each step.

digraph framework {

    graph [
        rankdir=LR,
        ];
    edge [
        fontname=Helvetica,
        fontsize=12,
        fontcolor=blue,
        minlen=2,
        labeldistance=2.5,
        ];

    node [
        fontname=Helvetica,
        fontsize=12,
        fontcolor=black,
        shape=record,
        style="filled,rounded",
        fillcolor=grey92,
        ];

    dataset [
        label="<train>\nTraining\n\n|<test>\nTest\n\n",
        fillcolor=yellow,
        style="filled",
        ];

    {rank = min; dataset;}

    subgraph cluster_experiment {
        label=<<b>experiment</b>>;
        shape=record;
        style="filled,rounded";
        fillcolor=white;
        train;

        subgraph cluster_analyze {
            label=<<b>analyze</b>>;
            predict;
            evaluate;
            compare;
        }
    }

    figure, table [
        fillcolor=lightblue,
        style="filled",
    ];
    {rank = max; figure; table; }

    dataset:train -> train [headlabel="sample + label [+ mask]",labelangle=30];
    dataset:test -> predict [headlabel="sample",labelangle=30];
    train -> predict [headlabel="model"];
    dataset:test -> evaluate [headlabel="label"];
    predict -> evaluate [headlabel="probabilities    ",labelangle=-30];
    evaluate -> compare [headlabel="metrics"];
    compare -> figure;
    compare -> table;
}

Fig. 1 Framework actions and CLI

We provide a number of preset configuration files that can be used in one or more of the activities described in this section. Our command-line framework allows you to refer to these preset configuration files using special names (a.k.a. “resources”), which locate and load them for you automatically. Aside from preset configuration files, you may also create your own, extending existing baseline experiments by locally copying and modifying one of our configuration resources.
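A locally modified copy of a preset resource typically re-uses the baseline's values and overrides only what differs. The sketch below assumes a hypothetical baseline module path and option names; the import is shown commented out since the actual resource location depends on your installation:

```python
# baseline_bs4.py -- hypothetical local copy of a preset configuration
# resource, extended for a new experiment.  Module and variable names
# are illustrative assumptions.

# Start from the values of a baseline configuration module (assumed path):
# from mypackage.configs.baseline import *  # noqa: F401,F403

# ...then override only what differs for the new experiment
batch_size = 4  # e.g. smaller batches to fit limited GPU memory
output_folder = "results/baseline-bs4"
```

Because configuration files are plain Python, any mechanism for composing modules (imports, loops, helper functions) can be used to derive new experiments from existing ones.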