Idiap has a new opening for a PhD position in Visual Sensing for Human-Robot Interaction (HRI).
The research will be conducted in the context of MuMMER (MultiModal Mall Entertainment Robot, www.mummer-project.eu). The project will develop a humanoid robot (based on Aldebaran's Pepper platform) able to engage and interact autonomously and naturally with individuals or groups of people. To support this behaviour, the project consortium will develop and integrate new methods from audiovisual scene processing, social-signal processing, high-level action selection, and human-aware robot navigation.
PhD position and requirements: The research conducted at Idiap will focus on the sensing part (person detection, tracking, identification, extraction of non-verbal behavior such as gaze, attention, and head gestures, and speaker turn detection), but with an HRI flavor: better accounting for robot gestures, active sensing to reduce perception uncertainties, generating behaviors to convey perception uncertainties, exploiting soft priors coming from communication and dialog models, etc. Within a team of two PhD students and one postdoc, the PhD student is expected to advance the state of the art in this field through the design of principled algorithms from machine learning, computer vision, and HRI to address the above tasks. The work for this position will be oriented towards vision/depth processing and the modeling of face representations for the design of head detectors/trackers and non-verbal behavior extraction methods appropriate at different depth ranges.
The research will rely on previous experience and software developed in the context of previous projects.
The ideal PhD student should hold a master's degree in computer science, engineering, or applied mathematics. S/he should have a good background in mathematics, statistics, and programming (C/C++, Python, scripting languages). Prior experience or background in statistical learning theory, computer vision, or robotics will be a plus.
To apply for this position, follow this link: Visual Sensing for Human-Robot Interaction.