NOVICOM

Automatic Analysis of Group Conversations via VIsual Cues in NOnverbal COMmunication

EU FP7 Marie Curie - Intra-European Fellowship (IEF) project

Marie Curie fellow: Oya Aran

Scientist in Charge: Daniel Gatica-Perez

Start date: 1 June 2009

Duration: 2 years

Introduction

Social interaction is a fundamental aspect of human life and a key research area in psychology and cognitive science. Social psychologists have studied the dimensions of social interaction for decades and have found that social signals strongly shape human behavior. Most of these signals are consciously produced, in the form of spoken language. However, beyond the spoken words, human interaction also involves nonverbal elements, which are used extensively and mainly unconsciously. Nonverbal communication is conveyed as wordless messages, in parallel with speech, through aural cues (voice quality, speaking style, rhythm, intonation) and visual cues (gestures, body language and posture, facial expression, and gaze). These nonverbal signals can be used to predict human behavior, mood, personality, and social relations in a wide range of situations. It has been shown that, in many social situations, humans correctly interpret nonverbal signals and can predict behavior with high accuracy.

Computational analysis of social interaction, in particular of face-to-face group conversations, is an emerging field of research in several communities, including human-computer interaction, machine learning, speech and language processing, and computer vision [Pentland 05a, Gatica-Perez 06]. Close connections with other disciplines, including psychology and linguistics, also exist, in order to understand which verbal and nonverbal signals are used in diverse social situations to predict human behavior. The ultimate aim is to develop computational systems that automatically predict human behavior by observing a group conversation through sensing devices such as cameras and microphones. The automatic analysis of group conversations will enable tools that improve collective decision making, help keep remote users in the loop in teleconferencing systems, and support self-assessment, training, and education.

We aim to develop new and principled computational methods to detect and analyze visual nonverbal cues for the automatic analysis of social interaction in small-group face-to-face conversations. Specifically, we concentrate on hand gestures, head gestures, and body posture. Since nonverbal communication in social interaction includes not only visual cues but also aural ones, automatic analysis requires the use of both modalities in modeling and recognition.
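As a concrete illustration of this multimodal requirement, the sketch below shows one common fusion strategy, feature-level (early) fusion, in which per-sample audio and visual feature vectors are concatenated and a single classifier is trained on the joint representation. This is a generic example with synthetic data, not the project's actual method; the feature dimensions, the choice of classifier, and the binary prediction task are all illustrative placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative only: synthetic stand-ins for per-participant nonverbal
# feature vectors. In a real system, audio features might encode
# speaking turns and prosody, and visual features might encode
# head/hand activity and posture.
rng = np.random.default_rng(0)
n_samples = 200
audio_features = rng.normal(size=(n_samples, 8))    # aural cues (placeholder dimension)
visual_features = rng.normal(size=(n_samples, 12))  # visual cues (placeholder dimension)
labels = rng.integers(0, 2, size=n_samples)         # placeholder binary social label

# Early (feature-level) fusion: concatenate the two modalities and
# train a single classifier on the joint representation.
fused = np.concatenate([audio_features, visual_features], axis=1)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

A common alternative is late fusion, in which a separate model is trained per modality and their outputs are combined; this is often preferred when the audio and visual streams are sampled at different rates or are not always available together.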