ManiGaze

A dataset created to evaluate gaze estimation from remote RGB and RGB-D (standard vision and depth) sensors in Human-Robot Interaction (HRI) settings, and more specifically during object manipulation tasks.

Description

Current systems for gaze estimation are usually trained and evaluated on datasets with relatively ideal conditions: near-frontal head poses and visual targets located in front of the user. While this makes them useful for model design and method comparison, their performance in other, more realistic sensing conditions and setups is largely unknown. Datasets for such situations are thus needed, both to evaluate the robustness of existing methods, measure their performance, and understand their limitations, and to trigger new research that pushes the state of the art of gaze tracking further.

The ManiGaze dataset was designed with these goals in mind. More specifically, it was created to evaluate gaze estimation from remote RGB and RGB-D (standard vision and depth) sensors in Human-Robot Interaction (HRI) settings, in particular during object manipulation tasks. The recording methodology was designed to let the user behave freely and to encourage a natural interaction with the robot, as well as to collect gaze targets automatically, since a posteriori annotation of gaze is almost impossible. The dataset involves 17 participants, each of whom performed four different tasks in four sessions:
•    Marker on the table Targets (MT) session. The robot asks the user to look at or point at markers located on a table placed between the robot and the user.
•    End-effector Targets (ET) session. The robot asks the user to look at its end-effector while moving it in the space between them.
•    Object Manipulation (OM) session. The robot asks the user to perform a sequence of pick-and-place actions using different objects.
•    Set the Table (ST) session. The user is asked to show and explain to the robot how to set a table, with a plate, knife, fork, spoon, and glass.

The gaze ground truth was automatically recorded for the first two sessions, providing a convenient benchmark to evaluate gaze estimation methods. The last two sessions provide additional material for further research (e.g. eye-hand coordination, movement analysis, …).
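As a rough illustration of how the MT and ET ground truth can serve as a benchmark, the sketch below computes the 3D angular error between a predicted gaze direction and the direction from the eye to the known target position. The function and array names are hypothetical; they do not reflect the dataset's actual file layout or any official evaluation code.

```python
import numpy as np

def angular_error_deg(eye_pos, target_pos, pred_gaze_dir):
    """Angular error (degrees) between predicted gaze directions and
    the eye-to-target directions.

    eye_pos:       (N, 3) 3D eye positions in the camera frame
    target_pos:    (N, 3) 3D ground-truth target positions (e.g. table
                   markers in MT or the end-effector in ET)
    pred_gaze_dir: (N, 3) predicted 3D gaze direction vectors
    """
    gt_dir = target_pos - eye_pos
    gt_dir = gt_dir / np.linalg.norm(gt_dir, axis=1, keepdims=True)
    pred = pred_gaze_dir / np.linalg.norm(pred_gaze_dir, axis=1, keepdims=True)
    cos = np.clip(np.sum(gt_dir * pred, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Hypothetical usage, assuming the arrays were produced by your own
# dataset parsing and gaze estimation code:
# errors = angular_error_deg(eye_pos, marker_pos, predicted_gaze)
# print("Mean angular error: %.2f deg" % errors.mean())
```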

Reference

If you use this dataset, please cite the following publication:

R. Siegfried, B. Aminian and J.-M. Odobez.
ManiGaze: a Dataset for Evaluating Remote Gaze Estimator in Object Manipulation Situations.
In ACM Symposium on Eye Tracking Research and Applications (ETRA), June 2020.


Licence: non-commercial

Related consent form: Rosalis Gaze