Valais/Wallis AI Workshop 3rd Edition

Real and artificial neural processing. Where: Idiap Research Institute. When: April 19, 2018

The aim of the Valais/Wallis Workshops on Artificial Intelligence is to bring together engineers and researchers from the Idiap Research Institute, HES-SO Valais-Wallis, EPFL Valais and other institutions who are active in pattern analysis, machine intelligence and related applications in security, health and energy.

The objective is to stimulate collaborations between research institutions in Valais.

Every year, several Valais/Wallis AI Workshops are organised on different topics. The workshops are open to the public upon prior registration. The Valais/Wallis AI Workshops are a joint initiative of the Idiap Research Institute and HES-SO Valais-Wallis.


Webcast & Program

Click on the + sign next to a speaker's name to see the abstract.

08:30 – 09:00 – Coffee

The Neural Particle Filter
Abstract: The brain is able to perform remarkable computations, such as extracting the voice of a person talking in a noisy crowd or tracking the position of a pedestrian crossing the road. Even though we perform these computations every day in a seemingly effortless way, this ongoing feature-extraction task is far from trivial. It can be formalised as a filtering problem, where the aim is to infer the state of a dynamically changing hidden variable given some noisy observation. A well-known solution to this problem, for linear hidden dynamics, is the Kalman filter. It is unclear, however, how to reliably and efficiently perform inference for real-world tasks, which are highly nonlinear and high dimensional. It is even less clear how such nonlinear filtering might be implemented in neural tissue. We recently proposed a neural network model (the Neural Particle Filter) that performs this nonlinear filtering task [1,2] and derived an online learning rule which becomes Hebbian in the limit of small observation noise [1,3]. Since this filter is based on unweighted particles (unlike the bootstrap particle filter, which relies on weighted particles), we showed that it overcomes the known curse of dimensionality of particle filters [2].

[1] Kutschireiter, A., Surace, S. C., Sprekeler, H., & Pfister, J.-P. (2017). Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception. Scientific Reports, 7(1), 8722.

[2] Surace, S. C., Kutschireiter, A., & Pfister, J.-P. (2017). How to avoid the curse of dimensionality: scalability of particle filters with and without importance weights. SIAM Review, in press. arXiv:1703.07879.

[3] Surace, S. C., & Pfister, J.-P. (2016). Online maximum likelihood estimation of the parameters of partially observed diffusion processes. arXiv:1611.00170.
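For the linear-Gaussian special case mentioned in the abstract, the Kalman filter admits a very compact implementation. A minimal one-dimensional sketch follows; all model parameters here are illustrative, not taken from the talk:

```python
import numpy as np

def kalman_filter_1d(observations, a, c, q, r, m0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter for the linear-Gaussian model
    x_t = a * x_{t-1} + process noise (variance q),
    y_t = c * x_t     + observation noise (variance r)."""
    m, p = m0, p0
    means = []
    for y in observations:
        # Predict: propagate mean and variance through the dynamics.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update: correct the prediction with the new observation.
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
    return np.array(means)

# Track a slowly drifting hidden state from noisy observations.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, 0.1, 200))    # hidden random walk
y = x + rng.normal(0.0, 0.5, 200)           # noisy observations
est = kalman_filter_1d(y, a=1.0, c=1.0, q=0.01, r=0.25)
```

The filtered estimate tracks the hidden state more closely than the raw observations do; the nonlinear, high-dimensional setting the talk addresses is precisely where this closed-form recursion no longer applies.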

About the speaker:

Trained as a physicist, Jean-Pascal Pfister completed his PhD in 2006 at EPFL with Wulfram Gerstner, where he developed several biological learning models. During his post-doc in Cambridge (UK) with Máté Lengyel and Peter Dayan (UCL), he focused on a Bayesian perspective of short-term plasticity. Then, as a group leader at the University of Bern, as well as during his sabbatical at Harvard with Haim Sompolinsky, Jean-Pascal worked on statistical learning. Now, as an SNF Professor jointly affiliated with the Institute of Neuroinformatics (University of Zurich / ETH Zurich) and with the Department of Physiology (University of Bern), he investigates how neural networks can implement nonlinear Bayesian filtering.

10:00 – 10:15 – Coffee

Learning Gleason Patterns using GANs
Abstract: Histopathology image analysis is the gold standard for diagnosis in many diseases. High-quality whole-slide images are now available to researchers, but in many cases annotated data for training powerful discriminative deep learning models is lacking. Pathological analysis of prostate cancer in whole-slide images follows a morphological pattern system for glands and cells known as the Gleason grading system. In this talk, we will show our current work on modeling, in an unsupervised manner, the morphological changes from a healthy gland to a high cancer grade using generative adversarial networks, and compare their trade-offs with more standard unsupervised features such as autoencoders.

10:25 – 10:35 + Tatjana Chavdarova (Idiap)

SGAN: An Alternative Training of Generative Adversarial Networks
Abstract: Generative Adversarial Networks (GANs) are an impressively powerful generative model based on deep learning. The quality of the samples they produce has led to their application in a wide range of computer vision problems. Despite this success, GANs have gained a reputation for being notoriously difficult to train.

We consider an alternative training procedure, named SGAN, in which the final pair of networks is pitted against an ensemble of adversarial networks whose statistical independence is carefully maintained. This approach aims to increase the chances of successful unsupervised training and to improve the performance of the resulting generator, in terms of how well the modeled distribution covers the target distribution. The experimental evaluation also indicates improved stability throughout training and a faster convergence rate.
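The ensemble-feedback idea can be illustrated on a toy one-dimensional GAN, where the generator is a learned shift of Gaussian noise and each discriminator is a logistic classifier trained on its own independent mini-batches. This is only a simplified sketch of aggregating an ensemble of adversaries, not the SGAN training procedure itself; all parameters and the toy data distribution are assumptions:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
theta = 0.0                  # generator: G(z) = theta + z, z ~ N(0, 1)
ws = np.full(3, 0.1)         # ensemble of 3 logistic discriminators
bs = np.zeros(3)
lr_d, lr_g = 0.1, 0.05

for _ in range(2000):
    for i in range(3):
        # Each discriminator sees its own independent real/fake batches,
        # a crude stand-in for the statistical independence SGAN maintains.
        real = rng.normal(3.0, 1.0, 32)          # toy data: N(3, 1)
        fake = theta + rng.normal(0.0, 1.0, 32)
        s_r = sigmoid(ws[i] * real + bs[i])
        s_f = sigmoid(ws[i] * fake + bs[i])
        # Gradient ascent on log D(real) + log(1 - D(fake)).
        ws[i] += lr_d * (np.mean((1 - s_r) * real) - np.mean(s_f * fake))
        bs[i] += lr_d * (np.mean(1 - s_r) - np.mean(s_f))
    # Generator ascends the ensemble-averaged non-saturating objective.
    fake = theta + rng.normal(0.0, 1.0, 32)
    grad = np.mean([(1 - sigmoid(ws[i] * fake + bs[i])) * ws[i]
                    for i in range(3)])
    theta += lr_g * grad
```

After training, the generator's shift should sit near the data mean of 3, with the usual GAN-style oscillation around the equilibrium.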

10:35 – 11:00 + Dr. Vincent Andrearczyk (HES-SO)

Dynamic texture analysis with deep learning on three orthogonal planes
Abstract: Dynamic Textures (DTs) are sequences of images of moving scenes, such as smoke, vegetation and fire, that exhibit certain stationarity properties in time. DT analysis is important for recognition, segmentation, synthesis and retrieval in a range of applications, including surveillance, medical imaging and remote sensing. Convolutional Neural Networks (CNNs) have recently proven well suited to texture analysis, with a design similar to dense filter banks. The repetitiveness of DTs in space and time allows us to treat them as volumes and to analyze regularly sampled spatial and temporal slices. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their predictions in a late-fusion approach to obtain a competitive DT classifier trained end-to-end.
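The slicing and late-fusion steps described above can be sketched with plain array operations. The toy classifier below is only a placeholder for the per-plane CNNs; the slice step size and volume shape are illustrative assumptions:

```python
import numpy as np

def orthogonal_slices(volume, step=4):
    """Regularly sampled xy (spatial), xt and yt (temporal) slices of a
    dynamic-texture volume with shape (T, H, W)."""
    t, h, w = volume.shape
    xy = [volume[i, :, :] for i in range(0, t, step)]   # spatial frames
    xt = [volume[:, j, :] for j in range(0, h, step)]   # one row over time
    yt = [volume[:, :, k] for k in range(0, w, step)]   # one column over time
    return xy, xt, yt

def late_fusion(per_slice_probs):
    """Average per-slice class probabilities over all slices and planes."""
    return np.stack(per_slice_probs).mean(axis=0)

# Toy usage: a real system would run one CNN per plane and fuse
# their softmax outputs in the same way.
rng = np.random.default_rng(0)
vol = rng.random((16, 32, 32))
xy, xt, yt = orthogonal_slices(vol)

def toy_classifier(s):                       # stand-in for a CNN
    logits = np.array([s.mean(), s.std()])
    e = np.exp(logits - logits.max())
    return e / e.sum()                       # softmax over 2 classes

fused = late_fusion([toy_classifier(s) for s in xy + xt + yt])
```

Averaging probability vectors keeps the fused output a valid distribution, which is what makes this simple late-fusion rule convenient.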

11:00 – 11:10 + Subhadeep Dey (Idiap)

End-to-end approach for recognizing speakers from audio
Abstract: We will present novel ideas for building successful end-to-end speaker recognition based on deep learning. The analysed approach aims to model both the speaker and the phonetic information of a speech utterance through specific hidden representations of a deep neural network. The performance of this new approach will be measured on a standard task (RSR2015) and compared to conventional speaker recognition systems. A large relative improvement of about 50% in equal error rate has been observed for the fixed-phrase condition.
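The equal error rate quoted above is the operating point where the false-acceptance and false-rejection rates coincide; a 50% relative improvement halves it. A minimal sketch of computing it from raw trial scores (the score distributions below are synthetic, not from the talk):

```python
import numpy as np

def equal_error_rate(target_scores, impostor_scores):
    """Equal error rate: the threshold where the false-acceptance rate
    (impostors accepted) and false-rejection rate (targets rejected)
    are approximately equal, searched over the pooled score values."""
    best_gap, eer = np.inf, 0.5
    for th in np.sort(np.concatenate([target_scores, impostor_scores])):
        far = np.mean(impostor_scores >= th)    # impostors accepted
        frr = np.mean(target_scores < th)       # targets rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Well-separated scores give a low EER; overlapping scores give ~50%.
rng = np.random.default_rng(0)
good = equal_error_rate(rng.normal(2.0, 1.0, 2000),
                        rng.normal(-2.0, 1.0, 2000))
chance = equal_error_rate(rng.normal(0.0, 1.0, 2000),
                          rng.normal(0.0, 1.0, 2000))
```

Production toolkits interpolate the DET curve instead of scanning pooled scores, but the metric's meaning is the same.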

11:10 – 11:20 + Mara Graziani (HES)

New challenges of large-scale Deep Learning: High Performance Computing for distributing computations
Abstract: Deep Learning (DL) frameworks report excellent performance on several tasks, although the demand for more computational resources frequently prevents the use of more complex models and larger datasets. Learning from massive datasets in feasible time is one of the challenges of the European-funded project PROCESS, which proposes user-friendly access to High Performance Computing (HPC) centres in order to extend HPC from task-specific to general-purpose applications. In this context, we investigate the challenges of distributing computations among thousands of cores and hundreds of GPUs, highlighting future prospects and current limitations.
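The basic pattern behind distributing such training is synchronous data parallelism: each worker computes a gradient on its own data shard, the gradients are averaged (an all-reduce in a real cluster), and all workers apply the same update. A toy least-squares sketch with simulated workers follows; it illustrates the pattern only and implies nothing about the PROCESS infrastructure:

```python
import numpy as np

def worker_gradient(w, x_shard, y_shard):
    """Least-squares gradient computed locally on one worker's shard."""
    return 2.0 * x_shard.T @ (x_shard @ w - y_shard) / len(y_shard)

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = x @ w_true

shards = np.array_split(np.arange(1000), 8)   # 8 simulated workers
w = np.zeros(3)
for _ in range(200):
    # Each "worker" computes its local gradient; averaging them here
    # plays the role of the all-reduce step on a real cluster.
    grads = [worker_gradient(w, x[s], y[s]) for s in shards]
    w -= 0.1 * np.mean(grads, axis=0)         # shared synchronous update
```

With equal-sized shards the averaged gradient equals the full-batch gradient, so the distributed run follows the same trajectory as a single-node one; communication cost and stragglers are what make the real problem hard.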

11:20 – 11:45 + Dr. Mateusz Kozinski (EPFL)

Learning to Segment 3D Linear Structures Using Only 2D Annotations
Abstract: We propose a loss function for training a Deep Neural Network (DNN) to segment volumetric data that accommodates ground-truth annotations of 2D projections of the training volumes instead of annotations of the 3D volumes themselves. As a consequence, we significantly decrease the amount of annotation needed for a given training set. We apply the proposed loss to train DNNs for segmentation of vascular and neural networks in microscopy images and demonstrate only a marginal accuracy loss associated with the significant reduction in annotation effort. The lower labor cost of deploying DNNs brought by our method can contribute to wide adoption of these techniques for the analysis of 3D images of linear structures.
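The idea of supervising a 3D prediction through 2D views can be sketched by comparing maximum-intensity projections of the predicted volume with the 2D annotations. The paper's actual loss differs in its details; the max-projection choice, shapes, and toy "vessel" below are illustrative assumptions:

```python
import numpy as np

def projection_loss(pred_volume, annotations_2d):
    """Binary cross-entropy between maximum-intensity projections of a
    predicted occupancy volume (shape (D, H, W), values in [0, 1]) and
    binary 2D annotations of those projections, keyed by projection axis."""
    eps = 1e-7
    total = 0.0
    for axis, ann in annotations_2d.items():
        # Soft max projection: a voxel "on" anywhere along the ray
        # should make the projected pixel "on".
        proj = np.clip(pred_volume.max(axis=axis), eps, 1.0 - eps)
        total += -np.mean(ann * np.log(proj) + (1 - ann) * np.log(1 - proj))
    return total / len(annotations_2d)

# Toy volume containing one axis-aligned "vessel" (a line of voxels),
# annotated only through two of its 2D projections.
gt = np.zeros((8, 16, 16))
gt[:, 8, 4] = 1.0                                      # line along depth
annotations = {0: gt.max(axis=0), 1: gt.max(axis=1)}   # two annotated views

good = projection_loss(np.clip(gt, 0.05, 0.95), annotations)
bad = projection_loss(np.full_like(gt, 0.5), annotations)
```

A near-correct volume scores a much lower loss than an uninformative one, even though no voxel of the 3D volume was ever annotated directly, which is the labor saving the abstract describes.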

11:45 – 12:30 – Lunch

12:30 – 14:00 + Business Ideas
- Business Ideas introduction and presentation of Agrofly, with Frédéric Hemmeler, CEO and co-founder
- ecoRobotix, with Steve Tanner

Abstract: Self-employment as a career option. Get tips and tricks from successful startup founders. More information: Business Ideas.



Partners

Idiap Research Institute

HES-SO Valais-Wallis

EPFL Valais

Venturelab



How to find us:

 

Map: Idiap on Google Maps

 


Address

Centre du Parc
Rue Marconi 19
PO Box 592
CH - 1920 Martigny
Switzerland

Phone

Tel. +41 27 721 77 11
Fax +41 27 721 77 12

Geographical coordinates

Lat: 46.109362° / 46°06'33.7"
Lon: 7.084465° / 7°05'04.1"