Research

I lead the Perception and Activity Understanding group at the Idiap Research Institute. My main research interests are in the analysis of human activities from multi-modal data. This entails the investigation of fundamental tasks such as the detection and tracking of people, the estimation of their pose, and the detection of non-verbal behaviors, as well as the temporal interpretation of this information in the form of gestures, activities, behaviors, or social relationships. These tasks are addressed through the design of principled algorithms that extend models from computer vision, multimodal signal processing, and machine learning, in particular probabilistic graphical models and deep learning techniques. Applications include surveillance, traffic, and human behavior analysis.

Some recent publications (full list)

Improving Few-Shot User-Specific Gaze Adaptation via Gaze Redirection Synthesis
Y. Yu, G. Liu and J.-M. Odobez
IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Long Beach, June 2019.

A Deep Learning Approach for Robust Head Pose Independent Eye Movements Recognition from Videos
R. Siegfried, Y. Yu and J.-M. Odobez
ACM Symposium on Eye Tracking Research & Applications (ETRA), Denver, June 2019.

Adaptation of Multiple Sound Source Localization Neural Networks with Weak Supervision and Domain-Adversarial Training
W. He, P. Motlicek and J.-M. Odobez
Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), 2019.

HeadFusion: 360 degree Head Pose tracking combining 3D Morphable Model and 3D Reconstruction
Y. Yu, K. Funes and J.-M. Odobez
IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 40(11), pp. 2653-2667, Nov. 2018.

Joint Localization and Classification of Multiple Sound Sources Using a Multi-task Neural Network
W. He, P. Motlicek and J.-M. Odobez
Interspeech, Hyderabad, 2018.

Robust and Discriminative Speaker Embedding via Intra-Class Distance Variance Regularization
N. Le and J.-M. Odobez
Interspeech, Hyderabad, 2018.

Real-time Convolutional Networks for Depth-based Human Pose Estimation
A. Martinez, M. Villamizar, O. Canévet and J.-M. Odobez
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018.

How to Tell Ancient Signs Apart? Recognizing and Visualizing Maya Glyphs with CNNs
G. Can, J.-M. Odobez and D. Gatica
ACM Journal on Computing and Cultural Heritage (JOCCH), Vol 11(4), 2018.

Towards the Use of Social Interaction Conventions as Prior for Gaze Model Adaptation
R. Siegfried, Y. Yu and J.-M. Odobez
in 19th ACM International Conference on Multimodal Interaction (ICMI), Glasgow, Nov. 2017.

Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition
D. Wu, L. Pigou, P.-J. Kindermans, N. Le, L. Shao, J. Dambre and J.-M. Odobez
IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 38(8), pp. 1583-1597, 2016.

Combining dynamic head pose–gaze mapping with the robot conversational state for attention recognition in human–robot interactions
S. Sheikhi and J.-M. Odobez
Pattern Recognition Letters, Vol. 66, pp 81-90, Nov. 2015.

Exploiting Long-Term Connectivity and Visual Motion in CRF-based Multi-Person Tracking
A. Heili, A. López-Méndez and J.-M. Odobez
IEEE Transactions on Image Processing, Vol. 23(7), pp. 3040-3056, 2014.

Temporal Analysis of Motif Mixtures using Dirichlet Processes
R. Emonet, J. Varadarajan and J.-M. Odobez
IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 36(1), pp. 140-156, Jan. 2014.