2018 Idiap Awards

Idiap PhD Student Research Award

Homepage: Tiago de Freitas Pereira

Google Scholar: Tiago de Freitas Pereira

Supervisor: Sébastien Marcel

Ph.D. start/end: 2014-2019 (first trimester)

Arguments: Tiago is a 4th-year PhD student and will graduate soon. He is a member of the Biometrics Security and Privacy group.

In his work he investigates the problem of heterogeneous face recognition (HFR): matching faces across different image domains. Use cases include matching visible-light (VIS) images with near-infrared (NIR) images, thermograms, or depth maps. The match can even occur when no real face image exists, as in matching against forensic sketches. The key difficulty in comparing faces under heterogeneous conditions is that images of the same subject may differ in appearance because of the change of image domain.

Tiago's contributions are four-fold. First, he analysed the applicability of hand-crafted features used in face recognition to the HFR task. Second, still working with hand-crafted features, he proposed that the variability between two image domains can be suppressed with a linear shift in the Gaussian Mixture Model (GMM) mean subspace; this encompasses inter-session variability (ISV) modeling, joint factor analysis (JFA) and total variability (TV) modeling. Third, he proposed that high-level features of deep convolutional neural networks trained on visible-light images are potentially domain-independent and can be used to encode faces sensed in different image domains. Fourth, he conducted large-scale experiments on several HFR databases covering various image domains, showing competitive performance. Moreover, the implementations of all the proposed techniques are integrated into the collaborative open-source software library Bob, which enforces fair evaluations and encourages reproducible research. It is anticipated that the thesis will also be fully reproducible and provided as a single, documented open-source repository.

Publications: This year Tiago published in the CORE A journal IEEE Transactions on Information Forensics and Security (TIFS), which has an impact factor of 5.8.
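The "linear shift in the GMM mean subspace" idea can be illustrated with a toy sketch (this is a hypothetical simplification for intuition, not Tiago's actual implementation; all names and dimensions below are made up): domain variability is modelled as an additive offset U·x in the stacked-means (supervector) space, and suppressing it amounts to projecting that offset out.

```python
import numpy as np

# Toy illustration (hypothetical, for intuition only) of modelling
# domain variability as a linear shift in the GMM mean supervector
# space, in the spirit of ISV/JFA-style models:
#   mu_domain = m + U @ x
# where m is a domain-independent mean supervector, U spans a
# low-rank domain-variability subspace, and x is a latent factor
# estimated per sample.
rng = np.random.default_rng(0)

C, D = 4, 8                          # GMM components, feature dimension
m = rng.normal(size=C * D)           # UBM mean supervector (stacked means)
U = rng.normal(size=(C * D, 2))      # domain-variability subspace (rank 2)

x_nir = rng.normal(size=2)           # latent domain factor of a NIR sample
mu_nir = m + U @ x_nir               # domain-shifted supervector

# Suppressing the domain shift: estimate the latent factor by least
# squares, then remove the corresponding offset.
x_hat, *_ = np.linalg.lstsq(U, mu_nir - m, rcond=None)
mu_clean = mu_nir - U @ x_hat        # recovered domain-independent part

print(np.allclose(mu_clean, m))      # exact recovery in this noiseless toy
```

In real data the shift is estimated jointly with the GMM statistics and the recovery is only approximate; the toy case is noiseless, so the projection recovers m exactly.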

Impact: He published in major IEEE and IAPR biometrics conferences as well as in ICML and CVPR workshops. Tiago has an h-index of 10; his two most cited papers, both as first author, have 115 and 107 citations respectively. Tiago is an outstanding team player. First, he is an active contributor to Bob: he helps improve the quality of the code base as well as its documentation, and provides support to users at Idiap and outside. He participates in many group-level discussions on deep learning and shares his experience with TensorFlow. Tiago also has very good communication and pedagogical skills, and helped with teaching activities at EPFL and UNIL by preparing lab material. Tiago was mainly involved in (and funded by) the SNF HFACE project, but his work is used in many other projects in mobile face recognition (SWAN and FARGO) and anti-spoofing (IARPA BATL).

Reproducible Research: His papers are fully reproducible, and he maintains an online face recognition leaderboard to benchmark various algorithms. Some of his papers are even reproducible on the Idiap BEAT platform. He also implemented in Bob wrappers for TensorFlow and Caffe that are used by the whole team.

Internships: Tiago did an internship at Samsung in the US in 2017. I received very positive feedback about his stay: "Tiago was an amazing co-worker, and helped us immensely in moving a critical project towards completion. It was truly a pleasure to have him here, and he's a credit to your lab and to the work that you do in every way."

Participation in group and Idiap life: Tiago was part of the EPFL EDEE Student Committee for IDIAP/LIDIAP. He is also a well-known and friendly colleague.


Idiap PhD Student Paper Award

Homepage: Yu Yu

Google Scholar: Yu Yu

Paper: HeadFusion: 360° Head Pose Tracking Combining 3D Morphable Model and 3D Reconstruction, by Y. Yu, K. Funes and J.-M. Odobez, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 40(11), Nov. 2018.

Arguments: This paper, published in PAMI, focuses on 3D head pose tracking from vision and depth sensors (such as the Kinect). Very robust tracking from any viewpoint is achieved thanks to a clever design that combines: 3D Morphable Models (3DMM) automatically fitted to individual faces (faces only, since building statistical models of full heads is much harder than for faces); online full-head shape reconstruction; visual tracking to address natural head dynamics with fast accelerations; and symmetry regularization to handle the common situation where the face is seen predominantly from one side. The robustness and accuracy have been demonstrated on several benchmarks and compare favorably to the state of the art. The paper also involved substantial, tedious manual annotation of the Ubimpressed dataset, leading to the creation and public release of the UBIpose dataset.
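The symmetry-regularization idea can be sketched in a toy form (a hypothetical simplification, not the paper's actual formulation; the function, mesh layout and weights below are invented for illustration): when one side of the head is barely observed, each reconstructed vertex is blended with the mirror image of its counterpart across the sagittal plane, weighted by how confidently each side was seen.

```python
import numpy as np

# Toy sketch of symmetry regularization for head-shape reconstruction
# (illustrative names and details, not the paper's implementation).
def symmetry_regularize(verts, weights, mirror_idx):
    """verts: (N, 3) vertex positions; weights: (N,) observation
    confidence in [0, 1]; mirror_idx[i]: index of vertex i's symmetric
    counterpart in a topologically symmetric template mesh."""
    # Reflect each counterpart vertex across the sagittal plane (x = 0).
    mirrored = verts[mirror_idx] * np.array([-1.0, 1.0, 1.0])
    w = weights[:, None]
    wm = weights[mirror_idx][:, None]
    # Blend vertex and mirrored counterpart, trusting whichever side
    # of the head was actually observed.
    return (w * verts + wm * mirrored) / np.maximum(w + wm, 1e-8)

# Two mirror-counterpart vertices; only the right one (index 0) was seen.
verts = np.array([[1.0, 0.2, 0.0], [-0.4, 0.2, 0.0]])
weights = np.array([1.0, 0.0])       # vertex 1 was never observed
mirror_idx = np.array([1, 0])
reg = symmetry_regularize(verts, weights, mirror_idx)
print(reg)  # the unseen left vertex snaps to the mirror of the right one
```

The observed vertex stays put, while the unobserved one is replaced by the reflection of its observed counterpart, which is the intuition behind keeping the reconstructed head plausible under one-sided views.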

From a practical viewpoint, the method removes constraints on sensor placement for facial behavior analysis and allows a more systematic analysis of large amounts of recordings. As one example, in the Ubimpressed project it was systematically applied to the 360 eight-minute recordings of the Ubimpressed dataset (involving Vatel students behaving without any constraint in a reception-desk scenario) and produced only around 10 failure cases, something far from achievable with traditional 3DMM tracking alone. In brief, accurate 3D head tracking becomes a commodity.