Command-Line Interface (CLI)

This package provides a single entry point for all of its applications using Bob’s unified CLI mechanism. A list of available applications can be retrieved using:

$ bob tb --help
Usage: bob tb [OPTIONS] COMMAND [ARGS]...

  Active Tuberculosis Detection On CXR commands.

Options:
  -?, -h, --help  Show this message and exit.

Commands:
  aggregpred      Aggregate multiple predictions csv files into one
  compare         Compares multiple systems together
  config          Commands for listing, describing and copying...
  dataset         Commands for listing and verifying datasets
  evaluate        Evaluates a CNN on a tuberculosis prediction task.
  predict         Predicts Tuberculosis presence (probabilities) on input...
  predtojson      Convert predictions to dataset
  train           Trains a CNN to perform tuberculosis detection
  train-analysis  Analyze the training logs for loss evolution and...

Setup

A CLI application to list and check installed (raw) datasets.

$ bob tb dataset --help
Usage: bob tb dataset [OPTIONS] COMMAND [ARGS]...

  Commands for listing and verifying datasets

Options:
  -?, -h, --help  Show this message and exit.

Commands:
  check  Checks file access on one or more datasets
  list   Lists all supported and configured datasets

List available datasets

Lists supported and configured raw datasets.

$ bob tb dataset list --help
Usage: bob tb dataset list [OPTIONS]

  Lists all supported and configured datasets

Options:
  -v, --verbose   Increase the verbosity level from 0 (only error messages) to
                  1 (warnings), 2 (log messages), 3 (debug information) by
                  adding the --verbose option as often as desired (e.g. '-vvv'
                  for debug).
  -h, -?, --help  Show this message and exit.

  Examples:

      1. To install a dataset, set up its data directory ("datadir").  For
         example, to set up access to Montgomery files you downloaded locally at
         the directory "/path/to/montgomery/files", do the following:
  
         $ bob config set "bob.med.tb.montgomery.datadir" "/path/to/montgomery/files"

         Notice this setting **is** case-sensitive.

      2. List all raw datasets supported (and configured):

         $ bob tb dataset list

Check available datasets

Checks if we can load all files listed for a given dataset (all subsets in all protocols).

$ bob tb dataset check --help
Usage: bob tb dataset check [OPTIONS] [DATASET]...

  Checks file access on one or more datasets

Options:
  -l, --limit INTEGER RANGE  Limit check to the first N samples in each
                             dataset, making the check noticeably faster.  Set
                             it to zero to check everything.  [x>=0; required]
  -v, --verbose              Increase the verbosity level from 0 (only error
                             messages) to 1 (warnings), 2 (log messages), 3
                             (debug information) by adding the --verbose
                             option as often as desired (e.g. '-vvv' for
                             debug).
  -?, -h, --help             Show this message and exit.

  Examples:

  1. Check if all files of the Montgomery dataset can be loaded:

     $ bob tb dataset check -vv montgomery

  2. Check if all files of multiple installed datasets can be loaded:

     $ bob tb dataset check -vv montgomery shenzhen

  3. Check if all files of all installed datasets can be loaded:

     $ bob tb dataset check

Preset Configuration Resources

A CLI application to list, inspect, and copy the configuration resources exported by this package.

$ bob tb config --help
Usage: bob tb config [OPTIONS] COMMAND [ARGS]...

  Commands for listing, describing and copying configuration resources

Options:
  -?, -h, --help  Show this message and exit.

Commands:
  describe  Describes a specific configuration file
  list      Lists configuration files installed

Listing Resources

$ bob tb config list --help
Usage: bob tb config list [OPTIONS]

  Lists configuration files installed

Options:
  -v, --verbose   Increase the verbosity level from 0 (only error messages) to
                  1 (warnings), 2 (log messages), 3 (debug information) by
                  adding the --verbose option as often as desired (e.g. '-vvv'
                  for debug).
  -h, -?, --help  Show this message and exit.

  Examples:

    1. Lists all configuration resources (type: bob.med.tb.config) installed:

       $ bob tb config list

    2. Lists all configuration resources and their descriptions (notice this may
       be slow as it needs to load all modules once):

       $ bob tb config list -v

Available Resources

Here is a list of all resources currently exported.

$ bob tb config list -v
module: bob.med.tb.configs.datasets.hivtb
  hivtb_f0       HIV-TB dataset for TB detection (cross validation fold 0)
  hivtb_f0_rgb   HIV-TB dataset for TB detection (cross validation fold 0)
  hivtb_f1       HIV-TB dataset for TB detection (cross validation fold 1)
  hivtb_f1_rgb   HIV-TB dataset for TB detection (cross validation fold 1)
  hivtb_f2       HIV-TB dataset for TB detection (cross validation fold 2)
  hivtb_f2_rgb   HIV-TB dataset for TB detection (cross validation fold 2)
  hivtb_f3       HIV-TB dataset for TB detection (cross validation fold 3)
  hivtb_f3_rgb   HIV-TB dataset for TB detection (cross validation fold 3)
  hivtb_f4       HIV-TB dataset for TB detection (cross validation fold 4)
  hivtb_f4_rgb   HIV-TB dataset for TB detection (cross validation fold 4)
  hivtb_f5       HIV-TB dataset for TB detection (cross validation fold 5)
  hivtb_f5_rgb   HIV-TB dataset for TB detection (cross validation fold 5)
  hivtb_f6       HIV-TB dataset for TB detection (cross validation fold 6)
  hivtb_f6_rgb   HIV-TB dataset for TB detection (cross validation fold 6)
  hivtb_f7       HIV-TB dataset for TB detection (cross validation fold 7)
  hivtb_f7_rgb   HIV-TB dataset for TB detection (cross validation fold 7)
  hivtb_f8       HIV-TB dataset for TB detection (cross validation fold 8)
  hivtb_f8_rgb   HIV-TB dataset for TB detection (cross validation fold 8)
  hivtb_f9       HIV-TB dataset for TB detection (cross validation fold 9)
  hivtb_f9_rgb   HIV-TB dataset for TB detection (cross validation fold 9)
  hivtb_rs_f0    HIV-TB dataset for TB detection (cross validation fold 0)
  hivtb_rs_f1    HIV-TB dataset for TB detection (cross validation fold 1)
  hivtb_rs_f2    HIV-TB dataset for TB detection (cross validation fold 2)
  hivtb_rs_f3    HIV-TB dataset for TB detection (cross validation fold 3)
  hivtb_rs_f4    HIV-TB dataset for TB detection (cross validation fold 4)
  hivtb_rs_f5    HIV-TB dataset for TB detection (cross validation fold 5)
  hivtb_rs_f6    HIV-TB dataset for TB detection (cross validation fold 6)
  hivtb_rs_f7    HIV-TB dataset for TB detection (cross validation fold 7)
  hivtb_rs_f8    HIV-TB dataset for TB detection (cross validation fold 8)
  hivtb_rs_f9    HIV-TB dataset for TB detection (cross validation fold 9)
module: bob.med.tb.configs.datasets.indian
  indian          Indian dataset for TB detection (default protocol)
  indian_f0       Indian dataset for TB detection (cross validation fold 0)
  indian_f0_rgb   Indian dataset for TB detection (cross validation fold 0, R...
  indian_f1       Indian dataset for TB detection (cross validation fold 1)
  indian_f1_rgb   Indian dataset for TB detection (cross validation fold 1, R...
  indian_f2       Indian dataset for TB detection (cross validation fold 2)
  indian_f2_rgb   Indian dataset for TB detection (cross validation fold 2, R...
  indian_f3       Indian dataset for TB detection (cross validation fold 3)
  indian_f3_rgb   Indian dataset for TB detection (cross validation fold 3, R...
  indian_f4       Indian dataset for TB detection (cross validation fold 4)
  indian_f4_rgb   Indian dataset for TB detection (cross validation fold 4, R...
  indian_f5       Indian dataset for TB detection (cross validation fold 5)
  indian_f5_rgb   Indian dataset for TB detection (cross validation fold 5, R...
  indian_f6       Indian dataset for TB detection (cross validation fold 6)
  indian_f6_rgb   Indian dataset for TB detection (cross validation fold 6, R...
  indian_f7       Indian dataset for TB detection (cross validation fold 7)
  indian_f7_rgb   Indian dataset for TB detection (cross validation fold 7, R...
  indian_f8       Indian dataset for TB detection (cross validation fold 8)
  indian_f8_rgb   Indian dataset for TB detection (cross validation fold 8, R...
  indian_f9       Indian dataset for TB detection (cross validation fold 9)
  indian_f9_rgb   Indian dataset for TB detection (cross validation fold 9, R...
  indian_rgb      Indian dataset for TB detection (default protocol, converte...
  indian_rs       Indian dataset for TB detection (default protocol) (extende...
  indian_rs_f0    Indian dataset for TB detection (cross validation fold 0)
  indian_rs_f1    Indian dataset for TB detection (cross validation fold 1)
  indian_rs_f2    Indian dataset for TB detection (cross validation fold 2)
  indian_rs_f3    Indian dataset for TB detection (cross validation fold 3)
  indian_rs_f4    Indian dataset for TB detection (cross validation fold 4)
  indian_rs_f5    Indian dataset for TB detection (cross validation fold 5)
  indian_rs_f6    Indian dataset for TB detection (cross validation fold 6)
  indian_rs_f7    Indian dataset for TB detection (cross validation fold 7)
  indian_rs_f8    Indian dataset for TB detection (cross validation fold 8)
  indian_rs_f9    Indian dataset for TB detection (cross validation fold 9)
module: bob.med.tb.configs.datasets.mc_ch
  mc_ch             Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f0          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f0_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f1          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f1_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f2          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f2_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f3          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f3_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f4          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f4_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f5          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f5_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f6          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f6_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f7          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f7_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f8          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f8_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f9          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_f9_rgb      Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_in          Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f0       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f0_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f1       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f1_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f2       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f2_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f3       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f3_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f4       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f4_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f5       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f5_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f6       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f6_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f7       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f7_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f8       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f8_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f9       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_f9_rgb   Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_pc       Aggregated dataset composed of Montgomery, Shenzhen, Indi...
  mc_ch_in_pc_rgb   Aggregated dataset composed of Montgomery, Shenzhen, Indi...
  mc_ch_in_pc_rs    Aggregated dataset composed of Montgomery, Shenzhen, Indi...
  mc_ch_in_rgb      Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs       Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f0    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f1    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f2    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f3    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f4    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f5    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f6    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f7    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f8    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_in_rs_f9    Aggregated dataset composed of Montgomery, Shenzhen and I...
  mc_ch_rgb         Aggregated dataset composed of Montgomery and Shenzhen (R...
  mc_ch_rs          Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f0       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f1       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f2       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f3       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f4       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f5       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f6       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f7       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f8       Aggregated dataset composed of Montgomery and Shenzhen da...
  mc_ch_rs_f9       Aggregated dataset composed of Montgomery and Shenzhen da...
module: bob.med.tb.configs.datasets.montgomery
  montgomery          Montgomery dataset for TB detection (default protocol)
  montgomery_f0       Montgomery dataset for TB detection (cross validation f...
  montgomery_f0_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f1       Montgomery dataset for TB detection (cross validation f...
  montgomery_f1_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f2       Montgomery dataset for TB detection (cross validation f...
  montgomery_f2_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f3       Montgomery dataset for TB detection (cross validation f...
  montgomery_f3_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f4       Montgomery dataset for TB detection (cross validation f...
  montgomery_f4_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f5       Montgomery dataset for TB detection (cross validation f...
  montgomery_f5_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f6       Montgomery dataset for TB detection (cross validation f...
  montgomery_f6_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f7       Montgomery dataset for TB detection (cross validation f...
  montgomery_f7_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f8       Montgomery dataset for TB detection (cross validation f...
  montgomery_f8_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_f9       Montgomery dataset for TB detection (cross validation f...
  montgomery_f9_rgb   Montgomery dataset for TB detection (cross validation f...
  montgomery_rgb      Montgomery dataset for TB detection (default protocol, ...
  montgomery_rs       Montgomery dataset for TB detection (default protocol) ...
  montgomery_rs_f0    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f1    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f2    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f3    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f4    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f5    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f6    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f7    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f8    Montgomery dataset for TB detection (cross validation f...
  montgomery_rs_f9    Montgomery dataset for TB detection (cross validation f...
module: bob.med.tb.configs.datasets.nih_cxr14_re
  nih_cxr14            NIH CXR14 (relabeled) dataset for computer-aided diagn...
  nih_cxr14_cm_idiap   NIH CXR14 (relabeled, idiap protocol) dataset for comp...
  nih_cxr14_idiap      NIH CXR14 (relabeled, idiap protocol) dataset for comp...
  nih_cxr14_pc_idiap   Aggregated dataset composed of NIH CXR14 relabeled and ...
module: bob.med.tb.configs.datasets.padchest
  padchest_cm_idiap       Padchest cardiomegaly (idiap protocol) dataset for ...
  padchest_idiap          Padchest (idiap protocol) dataset for computer-aide...
  padchest_no_tb_idiap    Padchest tuberculosis (no TB idiap protocol) datase...
  padchest_tb_idiap       Padchest tuberculosis (idiap protocol) dataset for ...
  padchest_tb_idiap_rgb   Padchest tuberculosis (idiap protocol, rgb) dataset...
  padchest_tb_idiap_rs    Extended Padchest TB dataset for TB detection (defa...
module: bob.med.tb.configs.datasets.shenzhen
  shenzhen          Shenzhen dataset for TB detection (default protocol)
  shenzhen_f0       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f0_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f1       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f1_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f2       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f2_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f3       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f3_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f4       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f4_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f5       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f5_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f6       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f6_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f7       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f7_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f8       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f8_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f9       Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_f9_rgb   Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rgb      Shenzhen dataset for TB detection (default protocol, conv...
  shenzhen_rs       Shenzhen dataset for TB detection (default protocol) (ext...
  shenzhen_rs_f0    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f1    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f2    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f3    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f4    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f5    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f6    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f7    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f8    Shenzhen dataset for TB detection (cross validation fold ...
  shenzhen_rs_f9    Shenzhen dataset for TB detection (cross validation fold ...
module: bob.med.tb.configs.datasets.tbpoc
  tbpoc_f0       TB-POC dataset for TB detection (cross validation fold 0)
  tbpoc_f0_rgb   TB-POC dataset for TB detection (cross validation fold 0)
  tbpoc_f1       TB-POC dataset for TB detection (cross validation fold 1)
  tbpoc_f1_rgb   TB-POC dataset for TB detection (cross validation fold 1)
  tbpoc_f2       TB-POC dataset for TB detection (cross validation fold 2)
  tbpoc_f2_rgb   TB-POC dataset for TB detection (cross validation fold 2)
  tbpoc_f3       TB-POC dataset for TB detection (cross validation fold 3)
  tbpoc_f3_rgb   TB-POC dataset for TB detection (cross validation fold 3)
  tbpoc_f4       TB-POC dataset for TB detection (cross validation fold 4)
  tbpoc_f4_rgb   TB-POC dataset for TB detection (cross validation fold 4)
  tbpoc_f5       TB-POC dataset for TB detection (cross validation fold 5)
  tbpoc_f5_rgb   TB-POC dataset for TB detection (cross validation fold 5)
  tbpoc_f6       TB-POC dataset for TB detection (cross validation fold 6)
  tbpoc_f6_rgb   TB-POC dataset for TB detection (cross validation fold 6)
  tbpoc_f7       TB-POC dataset for TB detection (cross validation fold 7)
  tbpoc_f7_rgb   TB-POC dataset for TB detection (cross validation fold 7)
  tbpoc_f8       TB-POC dataset for TB detection (cross validation fold 8)
  tbpoc_f8_rgb   TB-POC dataset for TB detection (cross validation fold 8)
  tbpoc_f9       TB-POC dataset for TB detection (cross validation fold 9)
  tbpoc_f9_rgb   TB-POC dataset for TB detection (cross validation fold 9)
  tbpoc_rs_f0    TB-POC dataset for TB detection (cross validation fold 0)
  tbpoc_rs_f1    TB-POC dataset for TB detection (cross validation fold 1)
  tbpoc_rs_f2    TB-POC dataset for TB detection (cross validation fold 2)
  tbpoc_rs_f3    TB-POC dataset for TB detection (cross validation fold 3)
  tbpoc_rs_f4    TB-POC dataset for TB detection (cross validation fold 4)
  tbpoc_rs_f5    TB-POC dataset for TB detection (cross validation fold 5)
  tbpoc_rs_f6    TB-POC dataset for TB detection (cross validation fold 6)
  tbpoc_rs_f7    TB-POC dataset for TB detection (cross validation fold 7)
  tbpoc_rs_f8    TB-POC dataset for TB detection (cross validation fold 8)
  tbpoc_rs_f9    TB-POC dataset for TB detection (cross validation fold 9)
module: bob.med.tb.configs.models
  alexnet               AlexNet
  alexnet_pre           AlexNet
  densenet              DenseNet
  densenet_pre          DenseNet
  densenet_rs           CNN for radiological findings detection
  logistic_regression   Feedforward network for Tuberculosis Detection
  pasa                  CNN for Tuberculosis Detection
  signs_to_tb           Feedforward network for Tuberculosis Detection

Describing a Resource

$ bob tb config describe --help
Usage: bob tb config describe [OPTIONS] NAME...

  Describes a specific configuration file

Options:
  -v, --verbose   Increase the verbosity level from 0 (only error messages) to
                  1 (warnings), 2 (log messages), 3 (debug information) by
                  adding the --verbose option as often as desired (e.g. '-vvv'
                  for debug).
  -h, -?, --help  Show this message and exit.

  Examples:

    1. Describes the Montgomery dataset configuration:

       $ bob tb config describe montgomery

    2. Describes the Montgomery dataset configuration and lists its
       contents:

       $ bob tb config describe montgomery -v

Applications for experiments

These applications let you run every step of the experiment cycle. They also work well with our preset configuration resources.
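As a rough sketch of that cycle, a run might chain the `train`, `predict` and `evaluate` commands. Note this is illustrative only: the option names for `predict` and `evaluate` below (`-w`, `-p`) are assumptions, not taken from the help outputs reproduced in this guide; consult each command's `--help` for the actual interface.

```shell
# 1. Train a model; configs name a network ("pasa") and a dataset
#    ("montgomery"); -o is where checkpoints and final_model.pth land.
$ bob tb train pasa montgomery -o results/pasa-mc

# 2. Run inference with the trained weights (option names assumed;
#    see "bob tb predict --help").
$ bob tb predict pasa montgomery -w results/pasa-mc/final_model.pth \
      -o results/pasa-mc/predictions

# 3. Evaluate the resulting predictions (option names assumed;
#    see "bob tb evaluate --help").
$ bob tb evaluate montgomery -p results/pasa-mc/predictions \
      -o results/pasa-mc/evaluation
```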

Training CNNs or shallow networks

Training creates a new PyTorch model, which can then be used for inference.

$ bob tb train --help
Usage: bob tb train [OPTIONS] [CONFIG]...

  Trains a CNN to perform tuberculosis detection

      Training is performed for a configurable number of epochs, and
      generates at least a final_model.pth.  It may also generate a number
      of intermediate checkpoints.  Checkpoints are model files (.pth
      files) that are stored during training and are useful to resume the
      procedure in case it stops abruptly.

  It is possible to pass one or several Python files (or names of
  ``bob.med.tb.config`` entry points or module names i.e. import paths) as
  CONFIG arguments to this command line which contain the parameters listed
  below as Python variables. Available entry points are:

  **bob.med.tb** entry points are: alexnet, alexnet_pre, densenet,
  densenet_pre, densenet_rs, hivtb_f0, hivtb_f0_rgb, hivtb_f1, hivtb_f1_rgb,
  hivtb_f2, hivtb_f2_rgb, hivtb_f3, hivtb_f3_rgb, hivtb_f4, hivtb_f4_rgb,
  hivtb_f5, hivtb_f5_rgb, hivtb_f6, hivtb_f6_rgb, hivtb_f7, hivtb_f7_rgb,
  hivtb_f8, hivtb_f8_rgb, hivtb_f9, hivtb_f9_rgb, hivtb_rs_f0, hivtb_rs_f1,
  hivtb_rs_f2, hivtb_rs_f3, hivtb_rs_f4, hivtb_rs_f5, hivtb_rs_f6,
  hivtb_rs_f7, hivtb_rs_f8, hivtb_rs_f9, indian, indian_f0, indian_f0_rgb,
  indian_f1, indian_f1_rgb, indian_f2, indian_f2_rgb, indian_f3,
  indian_f3_rgb, indian_f4, indian_f4_rgb, indian_f5, indian_f5_rgb,
  indian_f6, indian_f6_rgb, indian_f7, indian_f7_rgb, indian_f8,
  indian_f8_rgb, indian_f9, indian_f9_rgb, indian_rgb, indian_rs,
  indian_rs_f0, indian_rs_f1, indian_rs_f2, indian_rs_f3, indian_rs_f4,
  indian_rs_f5, indian_rs_f6, indian_rs_f7, indian_rs_f8, indian_rs_f9,
  logistic_regression, mc_ch, mc_ch_f0, mc_ch_f0_rgb, mc_ch_f1, mc_ch_f1_rgb,
  mc_ch_f2, mc_ch_f2_rgb, mc_ch_f3, mc_ch_f3_rgb, mc_ch_f4, mc_ch_f4_rgb,
  mc_ch_f5, mc_ch_f5_rgb, mc_ch_f6, mc_ch_f6_rgb, mc_ch_f7, mc_ch_f7_rgb,
  mc_ch_f8, mc_ch_f8_rgb, mc_ch_f9, mc_ch_f9_rgb, mc_ch_in, mc_ch_in_f0,
  mc_ch_in_f0_rgb, mc_ch_in_f1, mc_ch_in_f1_rgb, mc_ch_in_f2, mc_ch_in_f2_rgb,
  mc_ch_in_f3, mc_ch_in_f3_rgb, mc_ch_in_f4, mc_ch_in_f4_rgb, mc_ch_in_f5,
  mc_ch_in_f5_rgb, mc_ch_in_f6, mc_ch_in_f6_rgb, mc_ch_in_f7, mc_ch_in_f7_rgb,
  mc_ch_in_f8, mc_ch_in_f8_rgb, mc_ch_in_f9, mc_ch_in_f9_rgb, mc_ch_in_pc,
  mc_ch_in_pc_rgb, mc_ch_in_pc_rs, mc_ch_in_rgb, mc_ch_in_rs, mc_ch_in_rs_f0,
  mc_ch_in_rs_f1, mc_ch_in_rs_f2, mc_ch_in_rs_f3, mc_ch_in_rs_f4,
  mc_ch_in_rs_f5, mc_ch_in_rs_f6, mc_ch_in_rs_f7, mc_ch_in_rs_f8,
  mc_ch_in_rs_f9, mc_ch_rgb, mc_ch_rs, mc_ch_rs_f0, mc_ch_rs_f1, mc_ch_rs_f2,
  mc_ch_rs_f3, mc_ch_rs_f4, mc_ch_rs_f5, mc_ch_rs_f6, mc_ch_rs_f7,
  mc_ch_rs_f8, mc_ch_rs_f9, montgomery, montgomery_f0, montgomery_f0_rgb,
  montgomery_f1, montgomery_f1_rgb, montgomery_f2, montgomery_f2_rgb,
  montgomery_f3, montgomery_f3_rgb, montgomery_f4, montgomery_f4_rgb,
  montgomery_f5, montgomery_f5_rgb, montgomery_f6, montgomery_f6_rgb,
  montgomery_f7, montgomery_f7_rgb, montgomery_f8, montgomery_f8_rgb,
  montgomery_f9, montgomery_f9_rgb, montgomery_rgb, montgomery_rs,
  montgomery_rs_f0, montgomery_rs_f1, montgomery_rs_f2, montgomery_rs_f3,
  montgomery_rs_f4, montgomery_rs_f5, montgomery_rs_f6, montgomery_rs_f7,
  montgomery_rs_f8, montgomery_rs_f9, nih_cxr14, nih_cxr14_cm_idiap,
  nih_cxr14_idiap, nih_cxr14_pc_idiap, padchest_cm_idiap, padchest_idiap,
  padchest_no_tb_idiap, padchest_tb_idiap, padchest_tb_idiap_rgb,
  padchest_tb_idiap_rs, pasa, shenzhen, shenzhen_f0, shenzhen_f0_rgb,
  shenzhen_f1, shenzhen_f1_rgb, shenzhen_f2, shenzhen_f2_rgb, shenzhen_f3,
  shenzhen_f3_rgb, shenzhen_f4, shenzhen_f4_rgb, shenzhen_f5, shenzhen_f5_rgb,
  shenzhen_f6, shenzhen_f6_rgb, shenzhen_f7, shenzhen_f7_rgb, shenzhen_f8,
  shenzhen_f8_rgb, shenzhen_f9, shenzhen_f9_rgb, shenzhen_rgb, shenzhen_rs,
  shenzhen_rs_f0, shenzhen_rs_f1, shenzhen_rs_f2, shenzhen_rs_f3,
  shenzhen_rs_f4, shenzhen_rs_f5, shenzhen_rs_f6, shenzhen_rs_f7,
  shenzhen_rs_f8, shenzhen_rs_f9, signs_to_tb, tbpoc_f0, tbpoc_f0_rgb,
  tbpoc_f1, tbpoc_f1_rgb, tbpoc_f2, tbpoc_f2_rgb, tbpoc_f3, tbpoc_f3_rgb,
  tbpoc_f4, tbpoc_f4_rgb, tbpoc_f5, tbpoc_f5_rgb, tbpoc_f6, tbpoc_f6_rgb,
  tbpoc_f7, tbpoc_f7_rgb, tbpoc_f8, tbpoc_f8_rgb, tbpoc_f9, tbpoc_f9_rgb,
  tbpoc_rs_f0, tbpoc_rs_f1, tbpoc_rs_f2, tbpoc_rs_f3, tbpoc_rs_f4,
  tbpoc_rs_f5, tbpoc_rs_f6, tbpoc_rs_f7, tbpoc_rs_f8, tbpoc_rs_f9

  Options passed through the command line (see below) will override the
  values set in the provided configuration files. You can run this command
  with ``<COMMAND> -H example_config.py`` to create a template config file.

Options:
  -o, --output-folder PATH        Path where to store the generated model
                                  (created if does not exist)  [required]
  -m, --model CUSTOM              A torch.nn.Module instance implementing the
                                  network to be trained  [required]
  -d, --dataset CUSTOM            A dictionary mapping string keys to
                                  torch.utils.data.dataset.Dataset instances
                                  implementing datasets to be used for
                                  training and validating the model, possibly
                                  including all pre-processing pipelines
                                  required or, optionally, a dictionary
                                  mapping string keys to
                                  torch.utils.data.dataset.Dataset instances.
                                  At least one key named ``train`` must be
                                  available.  This dataset will be used for
                                  training the network model.  The dataset
                                  description must include all required pre-
                                  processing, including eventual data
                                  augmentation.  If a dataset named
                                  ``__train__`` is available, it is used
                                  prioritarily for training instead of
                                  ``train``.  If a dataset named ``__valid__``
                                  is available, it is used for model
                                  validation (and automatic check-pointing) at
                                  each epoch.  If a dataset list named
                                  ``__extra_valid__`` is available, then it
                                  will be tracked during the validation
                                  process and its loss output at the training
                                  log as well, in the format of an array
                                  occupying a single column.  All other keys
                                  are considered test datasets and are ignored
                                  during training  [required]
  --optimizer CUSTOM              A torch.optim.Optimizer that will be used to
                                  train the network  [required]
  --criterion CUSTOM              A loss function to compute the CNN error for
                                  every sample respecting the PyTorch API for
                                  loss functions (see torch.nn.modules.loss)
                                  [required]
  --criterion-valid CUSTOM        A specific loss function for the validation
                                   set to compute the CNN error for every
                                   sample respecting the PyTorch API for loss
                                   functions (see torch.nn.modules.loss)
  -b, --batch-size INTEGER RANGE  Number of samples in every batch (this
                                  parameter affects memory requirements for
                                  the network).  If the number of samples in
                                  the batch is larger than the total number of
                                  samples available for training, this value
                                  is truncated.  If this number is smaller,
                                  then batches of the specified size are
                                  created and fed to the network until there
                                  are no more new samples to feed (epoch is
                                  finished).  If the total number of training
                                  samples is not a multiple of the batch-size,
                                  the last batch will be smaller than the
                                  first, unless --drop-incomplete-batch is
                                  set, in which case this batch is not used.
                                  [default: 1; x>=1; required]
  -c, --batch-chunk-count INTEGER RANGE
                                  Number of chunks in every batch (this
                                  parameter affects memory requirements for
                                  the network). The number of samples loaded
                                  for every iteration will be batch-
                                  size/batch-chunk-count. batch-size needs to
                                  be divisible by batch-chunk-count, otherwise
                                  an error will be raised. This parameter is
                                  used to reduce number of samples loaded in
                                  each iteration, in order to reduce the
                                  memory usage in exchange for processing time
                                   (more iterations).  This is especially
                                   useful when running on GPUs with limited
                                   RAM. The default of 1 forces
                                  the whole batch to be processed at once.
                                  Otherwise the batch is broken into batch-
                                  chunk-count pieces, and gradients are
                                  accumulated to complete each batch.
                                  [default: 1; x>=1; required]
  -D, --drop-incomplete-batch / --no-drop-incomplete-batch
                                   If set, the last batch in an epoch may be
                                   dropped when it is incomplete.  If you set
                                  this option, you should also consider
                                  increasing the total number of epochs of
                                  training, as the total number of training
                                  steps may be reduced  [default: no-drop-
                                  incomplete-batch; required]
  -e, --epochs INTEGER RANGE      Number of epochs (complete training set
                                  passes) to train for. If continuing from a
                                   saved checkpoint, be sure to provide a
                                  greater number of epochs than that saved on
                                  the checkpoint to be loaded.   [default:
                                  1000; x>=1; required]
  -p, --checkpoint-period INTEGER RANGE
                                  Number of epochs after which a checkpoint is
                                  saved. A value of zero will disable check-
                                  pointing. If checkpointing is enabled and
                                  training stops, it is automatically resumed
                                  from the last saved checkpoint if training
                                  is restarted with the same configuration.
                                  [default: 0; x>=0; required]
  -d, --device TEXT               A string indicating the device to use (e.g.
                                  "cpu" or "cuda:0")  [default: cpu; required]
  -s, --seed INTEGER RANGE        Seed to use for the random number generator
                                  [default: 42; x>=0]
  -P, --parallel INTEGER RANGE    Use multiprocessing for data loading: if set
                                  to -1 (default), disables multiprocessing
                                  data loading.  Set to 0 to enable as many
                                   data loading instances as processing cores
                                   available in the system.  Set to >= 1 to
                                  enable that many multiprocessing instances
                                  for data loading.  [default: -1; x>=-1;
                                  required]
  -w, --weight CUSTOM             Path or URL to pretrained model file (.pth
                                  extension)
  -n, --normalization TEXT        Z-Normalization of input images: 'imagenet'
                                  for ImageNet parameters, 'current' for
                                  parameters of the current trainset, 'none'
                                  for no normalization.
  -I, --monitoring-interval FLOAT RANGE
                                  Time between checks for the use of resources
                                  during each training epoch.  An interval of
                                  5 seconds, for example, will lead to CPU and
                                  GPU resources being probed every 5 seconds
                                  during each training epoch. Values
                                  registered in the training logs correspond
                                  to averages (or maxima) observed through
                                  possibly many probes in each epoch.  Notice
                                  that setting a very small value may cause
                                  the probing process to become extremely
                                  busy, potentially biasing the overall
                                  perception of resource usage.  [default:
                                  5.0; x>=0.1; required]
  -v, --verbose                   Increase the verbosity level from 0 (only
                                  error messages) to 1 (warnings), 2 (log
                                  messages), 3 (debug information) by adding
                                  the --verbose option as often as desired
                                  (e.g. '-vvv' for debug).
  -H, --dump-config FILENAME      Name of the config file to be generated
  -?, -h, --help                  Show this message and exit.

  Examples:

      1. Trains PASA model with Montgomery dataset,
         on a GPU (``cuda:0``):

         $ bob tb train -vv pasa montgomery --batch-size=4 --device="cuda:0"
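
The ``--batch-chunk-count`` option above implements gradient accumulation: each batch is split into equal chunks, so only batch-size/batch-chunk-count samples are held in memory per iteration, and per-chunk results are combined back into a full-batch result. The arithmetic can be sketched as follows (made-up loss values; this is not the package's actual training loop):

```python
# A batch split into equal chunks: averaging the per-chunk mean losses
# reproduces the full-batch mean loss exactly, which is why batch-size
# must be divisible by batch-chunk-count.
batch_size = 16
batch_chunk_count = 4
assert batch_size % batch_chunk_count == 0  # enforced by the CLI
samples_per_iteration = batch_size // batch_chunk_count  # samples loaded at once

losses = [float(i) for i in range(batch_size)]  # one dummy loss per sample
full_batch_loss = sum(losses) / batch_size

chunk_means = [
    sum(losses[i:i + samples_per_iteration]) / samples_per_iteration
    for i in range(0, batch_size, samples_per_iteration)
]
accumulated_loss = sum(chunk_means) / batch_chunk_count
```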

Prediction with CNNs or shallow networks

Inference takes a trained PyTorch model as input and generates output probabilities. The generated CSV file reports, for each chest X-ray, a floating-point probability of TB presence between 0 and 1, from least probable (0.0) to most probable (1.0).

$ bob tb predict --help
Usage: bob tb predict [OPTIONS] [CONFIG]...

  Predicts Tuberculosis presence (probabilities) on input images

  It is possible to pass one or several Python files (or names of
  ``bob.med.tb.config`` entry points or module names i.e. import paths) as
  CONFIG arguments to this command line which contain the parameters listed
  below as Python variables. Available entry points are:

  **bob.med.tb** entry points are: alexnet, alexnet_pre, densenet,
  densenet_pre, densenet_rs, hivtb_f0, hivtb_f0_rgb, hivtb_f1, hivtb_f1_rgb,
  hivtb_f2, hivtb_f2_rgb, hivtb_f3, hivtb_f3_rgb, hivtb_f4, hivtb_f4_rgb,
  hivtb_f5, hivtb_f5_rgb, hivtb_f6, hivtb_f6_rgb, hivtb_f7, hivtb_f7_rgb,
  hivtb_f8, hivtb_f8_rgb, hivtb_f9, hivtb_f9_rgb, hivtb_rs_f0, hivtb_rs_f1,
  hivtb_rs_f2, hivtb_rs_f3, hivtb_rs_f4, hivtb_rs_f5, hivtb_rs_f6,
  hivtb_rs_f7, hivtb_rs_f8, hivtb_rs_f9, indian, indian_f0, indian_f0_rgb,
  indian_f1, indian_f1_rgb, indian_f2, indian_f2_rgb, indian_f3,
  indian_f3_rgb, indian_f4, indian_f4_rgb, indian_f5, indian_f5_rgb,
  indian_f6, indian_f6_rgb, indian_f7, indian_f7_rgb, indian_f8,
  indian_f8_rgb, indian_f9, indian_f9_rgb, indian_rgb, indian_rs,
  indian_rs_f0, indian_rs_f1, indian_rs_f2, indian_rs_f3, indian_rs_f4,
  indian_rs_f5, indian_rs_f6, indian_rs_f7, indian_rs_f8, indian_rs_f9,
  logistic_regression, mc_ch, mc_ch_f0, mc_ch_f0_rgb, mc_ch_f1, mc_ch_f1_rgb,
  mc_ch_f2, mc_ch_f2_rgb, mc_ch_f3, mc_ch_f3_rgb, mc_ch_f4, mc_ch_f4_rgb,
  mc_ch_f5, mc_ch_f5_rgb, mc_ch_f6, mc_ch_f6_rgb, mc_ch_f7, mc_ch_f7_rgb,
  mc_ch_f8, mc_ch_f8_rgb, mc_ch_f9, mc_ch_f9_rgb, mc_ch_in, mc_ch_in_f0,
  mc_ch_in_f0_rgb, mc_ch_in_f1, mc_ch_in_f1_rgb, mc_ch_in_f2, mc_ch_in_f2_rgb,
  mc_ch_in_f3, mc_ch_in_f3_rgb, mc_ch_in_f4, mc_ch_in_f4_rgb, mc_ch_in_f5,
  mc_ch_in_f5_rgb, mc_ch_in_f6, mc_ch_in_f6_rgb, mc_ch_in_f7, mc_ch_in_f7_rgb,
  mc_ch_in_f8, mc_ch_in_f8_rgb, mc_ch_in_f9, mc_ch_in_f9_rgb, mc_ch_in_pc,
  mc_ch_in_pc_rgb, mc_ch_in_pc_rs, mc_ch_in_rgb, mc_ch_in_rs, mc_ch_in_rs_f0,
  mc_ch_in_rs_f1, mc_ch_in_rs_f2, mc_ch_in_rs_f3, mc_ch_in_rs_f4,
  mc_ch_in_rs_f5, mc_ch_in_rs_f6, mc_ch_in_rs_f7, mc_ch_in_rs_f8,
  mc_ch_in_rs_f9, mc_ch_rgb, mc_ch_rs, mc_ch_rs_f0, mc_ch_rs_f1, mc_ch_rs_f2,
  mc_ch_rs_f3, mc_ch_rs_f4, mc_ch_rs_f5, mc_ch_rs_f6, mc_ch_rs_f7,
  mc_ch_rs_f8, mc_ch_rs_f9, montgomery, montgomery_f0, montgomery_f0_rgb,
  montgomery_f1, montgomery_f1_rgb, montgomery_f2, montgomery_f2_rgb,
  montgomery_f3, montgomery_f3_rgb, montgomery_f4, montgomery_f4_rgb,
  montgomery_f5, montgomery_f5_rgb, montgomery_f6, montgomery_f6_rgb,
  montgomery_f7, montgomery_f7_rgb, montgomery_f8, montgomery_f8_rgb,
  montgomery_f9, montgomery_f9_rgb, montgomery_rgb, montgomery_rs,
  montgomery_rs_f0, montgomery_rs_f1, montgomery_rs_f2, montgomery_rs_f3,
  montgomery_rs_f4, montgomery_rs_f5, montgomery_rs_f6, montgomery_rs_f7,
  montgomery_rs_f8, montgomery_rs_f9, nih_cxr14, nih_cxr14_cm_idiap,
  nih_cxr14_idiap, nih_cxr14_pc_idiap, padchest_cm_idiap, padchest_idiap,
  padchest_no_tb_idiap, padchest_tb_idiap, padchest_tb_idiap_rgb,
  padchest_tb_idiap_rs, pasa, shenzhen, shenzhen_f0, shenzhen_f0_rgb,
  shenzhen_f1, shenzhen_f1_rgb, shenzhen_f2, shenzhen_f2_rgb, shenzhen_f3,
  shenzhen_f3_rgb, shenzhen_f4, shenzhen_f4_rgb, shenzhen_f5, shenzhen_f5_rgb,
  shenzhen_f6, shenzhen_f6_rgb, shenzhen_f7, shenzhen_f7_rgb, shenzhen_f8,
  shenzhen_f8_rgb, shenzhen_f9, shenzhen_f9_rgb, shenzhen_rgb, shenzhen_rs,
  shenzhen_rs_f0, shenzhen_rs_f1, shenzhen_rs_f2, shenzhen_rs_f3,
  shenzhen_rs_f4, shenzhen_rs_f5, shenzhen_rs_f6, shenzhen_rs_f7,
  shenzhen_rs_f8, shenzhen_rs_f9, signs_to_tb, tbpoc_f0, tbpoc_f0_rgb,
  tbpoc_f1, tbpoc_f1_rgb, tbpoc_f2, tbpoc_f2_rgb, tbpoc_f3, tbpoc_f3_rgb,
  tbpoc_f4, tbpoc_f4_rgb, tbpoc_f5, tbpoc_f5_rgb, tbpoc_f6, tbpoc_f6_rgb,
  tbpoc_f7, tbpoc_f7_rgb, tbpoc_f8, tbpoc_f8_rgb, tbpoc_f9, tbpoc_f9_rgb,
  tbpoc_rs_f0, tbpoc_rs_f1, tbpoc_rs_f2, tbpoc_rs_f3, tbpoc_rs_f4,
  tbpoc_rs_f5, tbpoc_rs_f6, tbpoc_rs_f7, tbpoc_rs_f8, tbpoc_rs_f9

  The options passed through the command-line (see below) will override the
  values provided in configuration files. You can run this command with
  ``<COMMAND> -H example_config.py`` to create a template config file.

Options:
  -o, --output-folder PATH        Path where to store the predictions (created
                                  if does not exist)  [required]
  -m, --model CUSTOM              A torch.nn.Module instance implementing the
                                  network to be evaluated  [required]
  -d, --dataset CUSTOM            A torch.utils.data.dataset.Dataset instance
                                  implementing a dataset to be used for
                                  running prediction, possibly including all
                                  pre-processing pipelines required or,
                                  optionally, a dictionary mapping string keys
                                  to torch.utils.data.dataset.Dataset
                                  instances.  All keys that do not start with
                                  an underscore (_) will be processed.
                                  [required]
  -b, --batch-size INTEGER RANGE  Number of samples in every batch (this
                                  parameter affects memory requirements for
                                  the network)  [default: 1; x>=1; required]
  -d, --device TEXT               A string indicating the device to use (e.g.
                                  "cpu" or "cuda:0")  [default: cpu; required]
  -w, --weight CUSTOM             Path or URL to pretrained model file (.pth
                                  extension)  [required]
  -r, --relevance-analysis        If set, generate relevance analysis pdfs to
                                  indicate the relative importance of each
                                  feature
  -g, --grad-cams                 If set, generate grad cams for each
                                  prediction (must use batch of 1)
  -v, --verbose                   Increase the verbosity level from 0 (only
                                  error messages) to 1 (warnings), 2 (log
                                  messages), 3 (debug information) by adding
                                  the --verbose option as often as desired
                                  (e.g. '-vvv' for debug).
  -H, --dump-config FILENAME      Name of the config file to be generated
  -h, -?, --help                  Show this message and exit.

  Examples:

      1. Runs prediction on an existing dataset configuration:
  
         $ bob tb predict -vv pasa montgomery --weight=path/to/model_final.pth --output-folder=path/to/predictions
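
The predictions file can then be post-processed with standard tools. The sketch below assumes a hypothetical ``filename, ground-truth label, probability`` row layout (the actual columns produced by the command may differ) and selects the probable-TB cases:

```python
import csv
import io

# Hypothetical predictions content -- the exact column layout is an
# assumption: filename, ground-truth label, probability of TB presence.
sample = "patient-001.png,1,0.87\npatient-002.png,0,0.12\n"
rows = list(csv.reader(io.StringIO(sample)))
probabilities = {name: float(prob) for name, _, prob in rows}

# flag probable-TB cases with a (here arbitrary) 0.5 cut-off
positives = [name for name, prob in probabilities.items() if prob >= 0.5]
```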

CNN Performance Evaluation

Evaluation takes inference results and compares them to the ground truth, generating measure files and score tables that are useful for understanding model performance.

$ bob tb evaluate --help
Usage: bob tb evaluate [OPTIONS] [CONFIG]...

  Evaluates a CNN on a tuberculosis prediction task.

      Note: batch size of 1 is required on the predictions.

  It is possible to pass one or several Python files (or names of
  ``bob.med.tb.config`` entry points or module names i.e. import paths) as
  CONFIG arguments to this command line which contain the parameters listed
  below as Python variables. Available entry points are:

  **bob.med.tb** entry points are: alexnet, alexnet_pre, densenet,
  densenet_pre, densenet_rs, hivtb_f0, hivtb_f0_rgb, hivtb_f1, hivtb_f1_rgb,
  hivtb_f2, hivtb_f2_rgb, hivtb_f3, hivtb_f3_rgb, hivtb_f4, hivtb_f4_rgb,
  hivtb_f5, hivtb_f5_rgb, hivtb_f6, hivtb_f6_rgb, hivtb_f7, hivtb_f7_rgb,
  hivtb_f8, hivtb_f8_rgb, hivtb_f9, hivtb_f9_rgb, hivtb_rs_f0, hivtb_rs_f1,
  hivtb_rs_f2, hivtb_rs_f3, hivtb_rs_f4, hivtb_rs_f5, hivtb_rs_f6,
  hivtb_rs_f7, hivtb_rs_f8, hivtb_rs_f9, indian, indian_f0, indian_f0_rgb,
  indian_f1, indian_f1_rgb, indian_f2, indian_f2_rgb, indian_f3,
  indian_f3_rgb, indian_f4, indian_f4_rgb, indian_f5, indian_f5_rgb,
  indian_f6, indian_f6_rgb, indian_f7, indian_f7_rgb, indian_f8,
  indian_f8_rgb, indian_f9, indian_f9_rgb, indian_rgb, indian_rs,
  indian_rs_f0, indian_rs_f1, indian_rs_f2, indian_rs_f3, indian_rs_f4,
  indian_rs_f5, indian_rs_f6, indian_rs_f7, indian_rs_f8, indian_rs_f9,
  logistic_regression, mc_ch, mc_ch_f0, mc_ch_f0_rgb, mc_ch_f1, mc_ch_f1_rgb,
  mc_ch_f2, mc_ch_f2_rgb, mc_ch_f3, mc_ch_f3_rgb, mc_ch_f4, mc_ch_f4_rgb,
  mc_ch_f5, mc_ch_f5_rgb, mc_ch_f6, mc_ch_f6_rgb, mc_ch_f7, mc_ch_f7_rgb,
  mc_ch_f8, mc_ch_f8_rgb, mc_ch_f9, mc_ch_f9_rgb, mc_ch_in, mc_ch_in_f0,
  mc_ch_in_f0_rgb, mc_ch_in_f1, mc_ch_in_f1_rgb, mc_ch_in_f2, mc_ch_in_f2_rgb,
  mc_ch_in_f3, mc_ch_in_f3_rgb, mc_ch_in_f4, mc_ch_in_f4_rgb, mc_ch_in_f5,
  mc_ch_in_f5_rgb, mc_ch_in_f6, mc_ch_in_f6_rgb, mc_ch_in_f7, mc_ch_in_f7_rgb,
  mc_ch_in_f8, mc_ch_in_f8_rgb, mc_ch_in_f9, mc_ch_in_f9_rgb, mc_ch_in_pc,
  mc_ch_in_pc_rgb, mc_ch_in_pc_rs, mc_ch_in_rgb, mc_ch_in_rs, mc_ch_in_rs_f0,
  mc_ch_in_rs_f1, mc_ch_in_rs_f2, mc_ch_in_rs_f3, mc_ch_in_rs_f4,
  mc_ch_in_rs_f5, mc_ch_in_rs_f6, mc_ch_in_rs_f7, mc_ch_in_rs_f8,
  mc_ch_in_rs_f9, mc_ch_rgb, mc_ch_rs, mc_ch_rs_f0, mc_ch_rs_f1, mc_ch_rs_f2,
  mc_ch_rs_f3, mc_ch_rs_f4, mc_ch_rs_f5, mc_ch_rs_f6, mc_ch_rs_f7,
  mc_ch_rs_f8, mc_ch_rs_f9, montgomery, montgomery_f0, montgomery_f0_rgb,
  montgomery_f1, montgomery_f1_rgb, montgomery_f2, montgomery_f2_rgb,
  montgomery_f3, montgomery_f3_rgb, montgomery_f4, montgomery_f4_rgb,
  montgomery_f5, montgomery_f5_rgb, montgomery_f6, montgomery_f6_rgb,
  montgomery_f7, montgomery_f7_rgb, montgomery_f8, montgomery_f8_rgb,
  montgomery_f9, montgomery_f9_rgb, montgomery_rgb, montgomery_rs,
  montgomery_rs_f0, montgomery_rs_f1, montgomery_rs_f2, montgomery_rs_f3,
  montgomery_rs_f4, montgomery_rs_f5, montgomery_rs_f6, montgomery_rs_f7,
  montgomery_rs_f8, montgomery_rs_f9, nih_cxr14, nih_cxr14_cm_idiap,
  nih_cxr14_idiap, nih_cxr14_pc_idiap, padchest_cm_idiap, padchest_idiap,
  padchest_no_tb_idiap, padchest_tb_idiap, padchest_tb_idiap_rgb,
  padchest_tb_idiap_rs, pasa, shenzhen, shenzhen_f0, shenzhen_f0_rgb,
  shenzhen_f1, shenzhen_f1_rgb, shenzhen_f2, shenzhen_f2_rgb, shenzhen_f3,
  shenzhen_f3_rgb, shenzhen_f4, shenzhen_f4_rgb, shenzhen_f5, shenzhen_f5_rgb,
  shenzhen_f6, shenzhen_f6_rgb, shenzhen_f7, shenzhen_f7_rgb, shenzhen_f8,
  shenzhen_f8_rgb, shenzhen_f9, shenzhen_f9_rgb, shenzhen_rgb, shenzhen_rs,
  shenzhen_rs_f0, shenzhen_rs_f1, shenzhen_rs_f2, shenzhen_rs_f3,
  shenzhen_rs_f4, shenzhen_rs_f5, shenzhen_rs_f6, shenzhen_rs_f7,
  shenzhen_rs_f8, shenzhen_rs_f9, signs_to_tb, tbpoc_f0, tbpoc_f0_rgb,
  tbpoc_f1, tbpoc_f1_rgb, tbpoc_f2, tbpoc_f2_rgb, tbpoc_f3, tbpoc_f3_rgb,
  tbpoc_f4, tbpoc_f4_rgb, tbpoc_f5, tbpoc_f5_rgb, tbpoc_f6, tbpoc_f6_rgb,
  tbpoc_f7, tbpoc_f7_rgb, tbpoc_f8, tbpoc_f8_rgb, tbpoc_f9, tbpoc_f9_rgb,
  tbpoc_rs_f0, tbpoc_rs_f1, tbpoc_rs_f2, tbpoc_rs_f3, tbpoc_rs_f4,
  tbpoc_rs_f5, tbpoc_rs_f6, tbpoc_rs_f7, tbpoc_rs_f8, tbpoc_rs_f9

  The options passed through the command-line (see below) will override the
  values provided in configuration files. You can run this command with
  ``<COMMAND> -H example_config.py`` to create a template config file.

Options:
  -o, --output-folder PATH        Path where to store the analysis result
                                  (created if does not exist)  [required]
  -p, --predictions-folder DIRECTORY
                                  Path where predictions are currently stored
                                  [required]
  -d, --dataset CUSTOM            A torch.utils.data.dataset.Dataset instance
                                  implementing a dataset to be used for
                                  evaluation purposes, possibly including all
                                  pre-processing pipelines required or,
                                  optionally, a dictionary mapping string keys
                                  to torch.utils.data.dataset.Dataset
                                  instances.  All keys that do not start with
                                  an underscore (_) will be processed.
                                  [required]
  -t, --threshold CUSTOM          This number is used to define positives and
                                  negatives from probability maps, and report
                                  F1-scores (a priori). It should either come
                                  from the training set or a separate
                                  validation set to avoid biasing the
                                  analysis.  Optionally, if you provide a
                                  multi-set dataset as input, this may also be
                                  the name of an existing set from which the
                                  threshold will be estimated (highest
                                  F1-score) and then applied to the subsequent
                                  sets.  This number is also used to print the
                                  test set F1-score a priori performance
  -S, --steps INTEGER             This number is used to define the number of
                                  threshold steps to consider when evaluating
                                  the highest possible F1-score on test data.
                                  [default: 1000; required]
  -v, --verbose                   Increase the verbosity level from 0 (only
                                  error messages) to 1 (warnings), 2 (log
                                  messages), 3 (debug information) by adding
                                  the --verbose option as often as desired
                                  (e.g. '-vvv' for debug).
  -H, --dump-config FILENAME      Name of the config file to be generated
  -h, -?, --help                  Show this message and exit.

  Examples:

      1. Runs evaluation on an existing dataset configuration:
  
         $ bob tb evaluate -vv montgomery --predictions-folder=path/to/predictions --output-folder=path/to/results
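
The ``--threshold``/``--steps`` pair described above picks, on a held-out set, the threshold with the highest F1-score among evenly spaced candidates. A self-contained sketch of that sweep, with made-up scores and labels:

```python
# Sweep evenly spaced thresholds (--steps of them) on a held-out set and
# keep the one with the highest F1-score; that value would then be passed
# as --threshold when evaluating the test set.
scores = [0.10, 0.40, 0.20, 0.55, 0.80, 0.65]
labels = [0, 0, 0, 1, 1, 1]  # made-up ground truth for each score
steps = 1000


def f1(threshold):
    """F1-score obtained when thresholding ``scores`` at ``threshold``."""
    preds = [int(s >= threshold) for s in scores]
    tp = sum(1 for p, l in zip(preds, labels) if p and l)
    fp = sum(1 for p, l in zip(preds, labels) if p and not l)
    fn = sum(1 for p, l in zip(preds, labels) if not p and l)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0


best = max((i / steps for i in range(steps + 1)), key=f1)
```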

Performance Comparison

Performance comparison takes prediction results and generates combined figures and tables that compare the results of multiple systems.

$ bob tb compare --help
Usage: bob tb compare [OPTIONS] [LABEL_PATH]...

  Compares multiple systems together

Options:
  -f, --output-figure FILE        Path where to write the output figure (any
                                  extension supported by matplotlib is
                                  possible).  If not provided, does not
                                  produce a figure.
  -T, --table-format [asciidoc|double_grid|double_outline|fancy_grid|fancy_outline|github|grid|heavy_grid|heavy_outline|html|jira|latex|latex_booktabs|latex_longtable|latex_raw|mediawiki|mixed_grid|mixed_outline|moinmoin|orgtbl|outline|pipe|plain|presto|pretty|psql|rounded_grid|rounded_outline|rst|simple|simple_grid|simple_outline|textile|tsv|unsafehtml|youtrack]
                                  The format to use for the comparison table
                                  [default: rst; required]
  -u, --output-table FILE         Path where to write the output table. If
                                  not provided, the table is only written to
                                  stdout.
  -t, --threshold TEXT            This number is used to separate positive and
                                  negative cases by thresholding their score.
  -v, --verbose                   Increase the verbosity level from 0 (only
                                  error messages) to 1 (warnings), 2 (log
                                  messages), 3 (debug information) by adding
                                  the --verbose option as often as desired
                                  (e.g. '-vvv' for debug).
  -?, -h, --help                  Show this message and exit.

  Examples:

      1. Compares system A and B, with their own predictions files:
  
         $ bob tb compare -vv A path/to/A/predictions.csv B path/to/B/predictions.csv

Converting predictions to JSON dataset

This script takes radiological signs predicted on a TB dataset and generates a new JSON dataset from them.

$ bob tb predtojson --help
Usage: bob tb predtojson [OPTIONS] [LABEL_PATH]...

  Convert predictions to dataset

Options:
  -f, --output-folder DIRECTORY  Path where to store the json file (created if
                                 does not exist)
  -v, --verbose                  Increase the verbosity level from 0 (only
                                 error messages) to 1 (warnings), 2 (log
                                 messages), 3 (debug information) by adding
                                 the --verbose option as often as desired
                                 (e.g. '-vvv' for debug).
  -h, -?, --help                 Show this message and exit.

  Examples:

      1. Convert predictions of radiological signs to a JSON dataset file:
  
         $ bob tb predtojson -vv train path/to/train/predictions.csv test path/to/test/predictions.csv
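
The shape of the generated file can be pictured roughly as follows; the actual field names and layout written by ``bob tb predtojson`` are an assumption here, shown only to illustrate the idea of one JSON split per ``name path`` pair:

```python
import json

# Hedged sketch of the conversion idea only: each "name path" pair on the
# command line becomes one split in the output JSON dataset.  The field
# layout (filename plus predicted probability) is an assumption.
splits = {
    "train": [["patient-001.png", 0.87], ["patient-002.png", 0.12]],
    "test": [["patient-010.png", 0.65]],
}
text = json.dumps(splits, indent=2)  # what would be written to the JSON file
```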

Aggregate multiple prediction files together

This script takes a list of prediction files and aggregates them into a single file. This is particularly useful for cross-validation.

$ bob tb aggregpred --help
Usage: bob tb aggregpred [OPTIONS] [LABEL_PATH]...

  Aggregate multiple predictions csv files into one

Options:
  -f, --output-folder DIRECTORY  Path where to store the aggregated csv file
                                 (created if necessary)
  -v, --verbose                  Increase the verbosity level from 0 (only
                                 error messages) to 1 (warnings), 2 (log
                                 messages), 3 (debug information) by adding
                                 the --verbose option as often as desired
                                 (e.g. '-vvv' for debug).
  -h, -?, --help                 Show this message and exit.

  Examples:

      1. Aggregate multiple predictions csv files into one
  
         $ bob tb aggregpred -vv path/to/train/predictions.csv path/to/test/predictions.csv
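
Conceptually, aggregation amounts to concatenating the rows of several prediction files (e.g. one per cross-validation fold) into a single file. A minimal sketch with made-up file contents:

```python
import csv
import io

# "Aggregating" predictions: concatenate the rows of several CSV files
# into one list, ready to be written out as a single file.  The file
# contents below are made up for illustration.
fold_files = [
    "fold0-sample1.png,0.91\n",
    "fold1-sample1.png,0.08\nfold1-sample2.png,0.55\n",
]
aggregated = []
for contents in fold_files:
    aggregated.extend(csv.reader(io.StringIO(contents)))
```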