.. -*- coding: utf-8 -*-

.. _bob.ip.binseg.results.baselines.vessel:

==============================================
 Retinal Vessel Segmentation for Retinography
==============================================

.. list-table::
   :header-rows: 2

   * -
     -
     - :py:mod:`driu`
     - :py:mod:`hed`
     - :py:mod:`m2unet`
     - :py:mod:`unet`
     - :py:mod:`lwnet`
   * - Dataset
     - 2nd. Annot.
     - 15M
     - 14.7M
     - 550k
     - 25.8M
     - 68k
   * - :py:mod:`drive`
     - 0.788 (0.021)
     - 0.821 (0.014)
     - 0.813 (0.016)
     - 0.802 (0.014)
     - 0.825 (0.015)
     - 0.828
   * - :py:mod:`stare`
     - 0.759 (0.028)
     - 0.828 (0.039)
     - 0.815 (0.047)
     - 0.818 (0.035)
     - 0.828 (0.050)
     - 0.839
   * - :py:mod:`chasedb1`
     - 0.768 (0.023)
     - 0.812 (0.018)
     - 0.806 (0.020)
     - 0.798 (0.018)
     - 0.807 (0.017)
     - 0.820
   * - :py:mod:`hrf` (1168x1648)
     -
     - 0.808 (0.038)
     - 0.803 (0.040)
     - 0.796 (0.048)
     - 0.811 (0.039)
     - 0.814
   * - :py:mod:`hrf` (2336x3296)
     -
     - 0.722 (0.073)
     - 0.703 (0.090)
     - 0.713 (0.143)
     - 0.756 (0.051)
     - 0.744
   * - :py:mod:`iostar-vessel`
     -
     - 0.825 (0.020)
     - 0.827 (0.020)
     - 0.820 (0.018)
     - 0.818 (0.020)
     - 0.832

Notes
-----

* HRF models were trained using half of the full resolution (1168x1648).
* The following table describes recommended batch sizes for a GPU card with
  24Gb of RAM (a training-command sketch follows the table):

.. list-table::
   :header-rows: 1

   * -
     - :py:mod:`driu`
     - :py:mod:`hed`
     - :py:mod:`m2unet`
     - :py:mod:`unet`
     - :py:mod:`lwnet`
   * - :py:mod:`drive`
     - 8
     - 8
     - 16
     - 4
     - 4
   * - :py:mod:`stare`
     - 5
     - 4
     - 6
     - 2
     - 4
   * - :py:mod:`chasedb1`
     - 4
     - 4
     - 6
     - 2
     - 4
   * - :py:mod:`hrf`
     - 1
     - 1
     - 1
     - 1
     - 4
   * - :py:mod:`iostar-vessel`
     - 4
     - 4
     - 6
     - 2
     - 4
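To apply one of the recommended batch sizes, pass it to the training step.
The snippet below is a minimal sketch only: it assumes the package exposes a
``bob binseg train`` command accepting ``--batch-size`` and ``--device``
options, and that the model and dataset configuration names match the row and
column labels above (verify with ``bob binseg train --help``):

.. code-block:: sh

   # Hypothetical invocation (command name and options are assumptions --
   # check ``bob binseg train --help``): train M2U-Net on DRIVE with the
   # batch size recommended above for a 24Gb GPU card, on the first CUDA
   # device.
   bob binseg train -vv m2unet drive --batch-size=16 --device="cuda:0"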
Results for datasets at 768x768 resolution:

.. list-table::
   :header-rows: 2

   * -
     -
     - :py:mod:`driu`
     - :py:mod:`hed`
     - :py:mod:`m2unet`
     - :py:mod:`unet`
     - :py:mod:`lwnet`
   * - Dataset
     - 2nd. Annot.
     - 15M
     - 14.7M
     - 550k
     - 25.8M
     - 68k
   * - :py:mod:`drive`
     -
     - 0.812
     - 0.806
     - 0.800
     - 0.814
     - 0.807
   * - :py:mod:`stare`
     -
     - 0.819
     - 0.812
     - 0.793
     - 0.829
     - 0.817
   * - :py:mod:`chasedb1`
     -
     - 0.809
     - 0.790
     - 0.793
     - 0.803
     - 0.797
   * - :py:mod:`hrf`
     -
     - 0.799
     - 0.774
     - 0.773
     - 0.804
     - 0.800
   * - :py:mod:`iostar-vessel`
     -
     - 0.825
     - 0.818
     - 0.813
     - 0.820
     - 0.820
   * - Combined datasets
     -
     - 0.811
     - 0.798
     - 0.798
     - 0.813
     - 0.804

Notes
-----

* The following table describes recommended batch sizes for a GPU card with
  24Gb of RAM:

.. list-table::
   :header-rows: 1

   * -
     - :py:mod:`driu`
     - :py:mod:`hed`
     - :py:mod:`m2unet`
     - :py:mod:`unet`
     - :py:mod:`lwnet`
   * - :py:mod:`drive`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`stare`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`chasedb1`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`hrf`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`iostar-vessel`
     - 8
     - 8
     - 8
     - 4
     - 8

Results for datasets at 1024x1024 resolution:

.. list-table::
   :header-rows: 2

   * -
     -
     - :py:mod:`driu`
     - :py:mod:`hed`
     - :py:mod:`m2unet`
     - :py:mod:`unet`
     - :py:mod:`lwnet`
   * - Dataset
     - 2nd. Annot.
     - 15M
     - 14.7M
     - 550k
     - 25.8M
     - 68k
   * - :py:mod:`drive`
     -
     - 0.813
     - 0.806
     - 0.804
     - 0.815
     - 0.809
   * - :py:mod:`stare`
     -
     - 0.821
     - 0.812
     - 0.816
     - 0.820
     - 0.814
   * - :py:mod:`chasedb1`
     -
     - 0.806
     - 0.806
     - 0.790
     - 0.806
     - 0.793
   * - :py:mod:`hrf`
     -
     - 0.805
     - 0.793
     - 0.786
     - 0.807
     - 0.805
   * - :py:mod:`iostar-vessel`
     -
     - 0.829
     - 0.825
     - 0.817
     - 0.825
     - 0.824

Notes
-----

* The following table describes recommended batch sizes for a GPU card with
  24Gb of RAM:

.. list-table::
   :header-rows: 1

   * -
     - :py:mod:`driu`
     - :py:mod:`hed`
     - :py:mod:`m2unet`
     - :py:mod:`unet`
     - :py:mod:`lwnet`
   * - :py:mod:`drive`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`stare`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`chasedb1`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`hrf`
     - 8
     - 8
     - 8
     - 4
     - 8
   * - :py:mod:`iostar-vessel`
     - 8
     - 8
     - 8
     - 4
     - 8

.. include:: ../../links.rst