"Deep neural networks remain for the most part black boxes"

Artificial deep neural networks are a powerful tool: they can extract information from large datasets and use this acquired knowledge to make accurate predictions on previously unseen data. But the very large number of parameters they require also makes them particularly difficult to understand.

Applied in a wide variety of domains ranging from genomics to autonomous driving, from speech recognition to gaming, neural network-based solutions require validation, or at least some explanation, of how the system makes its decisions. This is especially true in the medical domain, where such decisions can contribute to the survival or death of a patient. “Unfortunately, the very large number of parameters required by deep neural networks is extremely challenging to cope with for explanation methods, and these networks remain for the most part black boxes. This demonstrates the real need for accurate explanation methods able to scale with this large quantity of parameters and to provide useful information to a potential user,” explains Prof. Pena Carlos Andrés from HEIG-VD.
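To make the idea of an “explanation” concrete: one widely used family of methods, saliency maps (the topic of one of the talks listed below), asks which input features a network’s decision depends on. The following is a minimal, illustrative sketch in PyTorch, not a method presented at the workshop; the tiny untrained model and random input are placeholders for a real trained classifier and a real image.

```python
import torch
import torch.nn as nn

# Placeholder model: any trained image classifier can be probed the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# A single random 28x28 "image"; requires_grad lets us measure how the
# prediction changes with each pixel.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

scores = model(x)                  # class scores for this input
top = scores.argmax(dim=1).item()  # predicted class
scores[0, top].backward()          # gradient of that score w.r.t. the input

# Saliency map: pixels with a large absolute gradient are those the
# prediction depends on most.
saliency = x.grad.abs().squeeze()  # shape: (28, 28)
```

Even this simple gradient heatmap hints at the scaling problem raised in the quote: computing one explanation requires backpropagating through all of the network’s parameters, so explanation methods must remain tractable as models grow.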
The professor was the keynote speaker of the 5th Valais/Wallis AI Workshop held at Idiap. If you missed it, you can watch his talk “Rule and knowledge extraction from deep neural networks” below:

WEBCAST

Watch the talk by keynote speaker Prof. Pena Carlos Andrés from HEIG-VD below.

To watch all the workshop's talks, please click on the links below.

  1. Keynote speech: Prof. Pena Carlos Andrés (HEIG-VD), “Methods for Rule and Knowledge Extraction from Deep Neural Networks” - Q&A
  2. Hannah Muckenhirn (Idiap Research Institute), “Visualizing and understanding raw speech modeling with convolutional neural networks” - Q&A
  3. Mara Graziani (HES-SO Valais-Wallis), “Concept Measures to Explain Deep Learning Predictions in Medical Imaging”
  4. Suraj Srinivas (Idiap Research Institute), “What do neural network saliency maps encode?”
  5. Dr Vincent Andrearczyk (HES-SO Valais-Wallis), “Transparency of rotation-equivariant CNNs via local geometric priors” - Q&A
  6. Dr Sylvain Calinon (Idiap Research Institute), “Interpretable models of robot motion learned from few demonstrations” - Q&A
  7. Xavier Ouvrard (University of Geneva / CERN), “The HyperBagGraph DataEdron: An Enriched Browsing Experience of Scientific Publication Databases”
  8. Seyed Moosavi (Signal Processing Laboratory 4 (LTS4), EPFL), “Improving robustness to build more interpretable classifiers” - Q&A
  9. Sooho Kim (UniGe), “Interpretation of End-to-end One-Dimensional Convolutional Neural Network for Fault Diagnosis on a Planetary Gearbox”