Conventional means of identification such as passwords, secret codes and personal identification numbers (PINs) can easily be compromised, shared, observed, stolen or forgotten. A possible alternative for determining the identity of users is biometrics.
Biometric person recognition refers to the process of automatically recognizing a person using distinguishing behavioral patterns (gait, signature, keyboard typing, lip movement, hand-grip) or physiological traits (face, voice, iris, fingerprint, hand geometry, electroencephalogram -- EEG, electrocardiogram -- ECG, ear shape, body odor, body salinity, vascular). Over the last decades, several of these biometric modalities have been investigated (fingerprint, iris, voice, face) and are still under consideration. More recently, novel biometric modalities have emerged (gait, EEG, vascular) mainly due to the development of sensor technologies.
Biometric person recognition offers a wide range of challenging fundamental and applied problems in image processing, computer vision, pattern recognition and machine learning. It is thus a truly interdisciplinary research field.
Face detection and recognition
Face processing (detection and recognition) is a challenging problem because faces vary widely in size, shape, color, texture and location. Their overall appearance can also be influenced by lighting conditions, facial expression, occlusion, or facial features such as beards, mustaches and glasses. Further challenges come from the orientation (upright, rotated) and the pose (frontal to profile) of the face.
The goal of face detection is to determine whether there are any faces in the image and, if so, where they are located. It is the crucial first step of any application involving face processing, so accurate and fast face detection is key to successful operation.
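As an illustration, the sliding-window strategy that underlies many detectors can be sketched as follows. The `looks_like_face` scoring function here is a hypothetical stand-in: real systems use a trained classifier (e.g. a cascade of boosted features or a neural network), whereas this sketch simply scores patches by mean intensity.

```python
def looks_like_face(patch):
    # Hypothetical score: a real detector applies a learned classifier here.
    # For this sketch, score a patch by its mean pixel intensity.
    values = [v for row in patch for v in row]
    return sum(values) / len(values)

def detect_faces(image, window=2, stride=1, threshold=0.5):
    """Scan `image` (a 2D list of floats in [0, 1]) with a fixed-size
    window and return (row, col) corners whose score passes the threshold."""
    detections = []
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - window + 1, stride):
        for c in range(0, cols - window + 1, stride):
            patch = [row[c:c + window] for row in image[r:r + window]]
            if looks_like_face(patch) >= threshold:
                detections.append((r, c))
    return detections

image = [
    [0.9, 0.8, 0.1, 0.0],
    [0.8, 0.9, 0.1, 0.0],
    [0.1, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
print(detect_faces(image))  # → [(0, 0)]: only the bright 2x2 region scores high
```

Real detectors additionally scan at multiple scales and merge overlapping detections; both are omitted here for brevity.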
Face recognition has been an active research area for more than 30 years, and different systems are now capable of correctly recognizing people's faces under specific conditions (near-frontal faces and controlled imaging conditions). However, many applications need to handle varying head poses and adverse imaging conditions, since most faces in the real world are not frontal and are captured in uncontrolled environments.
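At its core, face recognition compares a probe face against enrolled templates in some feature space. The sketch below shows verification by cosine similarity between embeddings; the 4-D vectors and the threshold are made-up illustrative values, since real systems use high-dimensional features from a trained model.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.9):
    """Accept the claimed identity if the probe embedding is close
    enough to the enrolled template."""
    return cosine_similarity(probe, enrolled) >= threshold

enrolled = [0.2, 0.8, 0.1, 0.5]       # template stored at enrollment
probe_same = [0.25, 0.75, 0.1, 0.55]  # same person, slightly different capture
probe_other = [0.9, 0.1, 0.8, 0.0]    # a different person
print(verify(probe_same, enrolled))   # → True
print(verify(probe_other, enrolled))  # → False
```

The threshold trades off false acceptances against false rejections, which is exactly where uncontrolled imaging conditions hurt: they push genuine probes further from their enrolled templates.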
Contact: Sébastien Marcel
Speaker recognition
A speaker recognition system uses a speech utterance to determine whether it was pronounced by a known person. This is also a difficult task, whose difficulty depends on the quality of the capture device, the recording conditions and the cooperation of the subject. Generally, the first task is to extract the relevant information (speech frames) and to filter out irrelevant information (silence, ambient noise, music or background speech) before the actual speaker recognition is triggered. Two scenarios can take place, namely text-independent and text-dependent recognition.
In text-independent speaker recognition, the identity models are assumed to be independent of the precise sentence pronounced by the person.

In text-dependent speaker recognition, the lexical content of the sentence is more important and enables better robustness against replay attacks. However, text-dependent systems generally need more resources than text-independent ones to efficiently process this lexical information.
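The silence-filtering step mentioned above can be sketched as a simple energy-based voice activity detector. The frame length and threshold below are illustrative values, not parameters of any particular system; real front-ends typically also use spectral features.

```python
def frame_energies(samples, frame_len=4):
    """Split the signal into non-overlapping frames and return the
    mean energy of each frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def speech_frames(samples, frame_len=4, threshold=0.01):
    """Keep only the indices of frames whose energy exceeds the
    threshold, i.e. drop silence before speaker modeling."""
    energies = frame_energies(samples, frame_len)
    return [i for i, e in enumerate(energies) if e > threshold]

signal = [0.0, 0.0, 0.0, 0.0,      # silence
          0.3, -0.4, 0.5, -0.2,    # speech-like activity
          0.0, 0.01, 0.0, 0.0]     # near-silence
print(speech_frames(signal))  # → [1]: only the middle frame is kept
```

Only the retained frames would then be passed to the speaker model, whether text-independent or text-dependent.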
Multi-modal person recognition
In the past ten years, it has been shown that combining biometric systems achieves better performance than techniques using only one biometric modality, and this holds across various fusion algorithms. Fusion algorithms are methods whose goal is to merge the predictions of several algorithms (multiple biometric modules) in the hope of a better average performance than any of the individual methods. This fusion can be simple (maximum-score, product or sum rules), but it is often better to train a fusion system using machine learning algorithms. Most of the proposed fusion techniques, often called late integration techniques, operate at the score or decision level. Other techniques, called early integration techniques, aim to exploit the correlation between biometric modalities when it exists, for instance between the video and audio streams of a talking face while the person pronounces a sentence.
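The simple score-level (late integration) rules mentioned above can be sketched as follows. The scores and weights are illustrative values, and each modality is assumed to output a match score already normalized to [0, 1]; a trained fusion system would learn the weights from data instead.

```python
def sum_rule(scores):
    # Average of the per-modality scores.
    return sum(scores) / len(scores)

def product_rule(scores):
    p = 1.0
    for s in scores:
        p *= s
    return p

def max_rule(scores):
    return max(scores)

def weighted_sum(scores, weights):
    # A trained fusion system would learn these weights from data.
    return sum(w * s for w, s in zip(weights, scores))

face_score, voice_score = 0.9, 0.6   # illustrative per-modality match scores
scores = [face_score, voice_score]
print(sum_rule(scores))                  # ≈ 0.75
print(max_rule(scores))                  # ≈ 0.9
print(weighted_sum(scores, [0.7, 0.3]))  # ≈ 0.81
```

A final accept/reject decision is then taken by thresholding the fused score, just as with a single-modality system.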
Emerging biometrics
Over the last decades, biometric modalities such as fingerprint, iris, voice or face have been investigated extensively. More recently, novel modalities such as gait or EEG, so-called emerging biometrics, have also come under consideration.
Recently, Idiap has investigated the use of EEG signals for biometric person recognition. Previous studies have shown that the brain-wave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric recognition. EEG-based biometry is an emerging research topic that may open new research directions and applications in the future.