Situated Vision to Perceive Object Shape and Affordances

The objective of this project is to provide models and methods to detect, recognize, and categorize the 3D shape of everyday objects and their affordances in homes. To tackle these challenges, we propose the Situated Vision paradigm and develop 3D visual perception capabilities from the viewpoint of a robot. The Situated Vision approach is inspired by recent work in cognitive science and neuroscience: it fuses qualitative and quantitative cues to extract and group 3D shape elements and relate them to affordance categories. Cognitive mechanisms such as situation-based visual attention and task-oriented visual search focus processing on relevant parts of the scene. Perception integrates quantitative and qualitative shape information from multiple 2D and 3D measurements. The analysis of these shapes is used to find instances of semantic 3D concepts, such as "puttable surface", which can be used to detect semantic entities and to learn affordance categories. To demonstrate the generality of the proposed approach, the system will be tested in three typical home scenarios of varying complexity. Four renowned research teams combine their experience to show that the combination of attention (Uni Bonn), categorization (RWTH Aachen), shape perception (TU Wien), and learning (IDIAP) will bring about a significant step forward in cognitive robotics.
Application Area - Home automation, Perceptive and Cognitive Systems
University of Bonn
Idiap Research Institute, RWTH Aachen, Technische Universitaet Wien
Swiss National Science Foundation
Nov 01, 2011 – Nov 30, 2015