Multimodal Computational Modeling of Nonverbal Social Behavior in Face-to-Face Interaction

This project proposes to build computational models of the social constructs that define the behavior of individuals and groups in face-to-face conversations, perceived via audio, visual, or mobile sensors. The aim is to automatically analyze the social behavior of individuals during interaction through their nonverbal signals and to build models that estimate several social concepts using machine learning techniques. The novelty of the proposed approach is that it investigates computational methods that exploit the close relation between related social constructs, such as dominance and leadership, or personality and dominance, during the learning process. The assumption is that because these constructs are related, automatic inference of one concept can take advantage of the other. The project follows a joint learning approach that combines the individual characteristics of participants in a group, such as personality and mood, with their social position in the group, such as dominance or roles, which results from intra-group interaction and relations as well as the overall group structure. A further novelty of the proposal is the use of social media content to learn the social behavior of individuals. In contrast to the limited amounts of data typically used to build computational models of social behavior, social media sites provide a vast source of data on natural human behavior. The project aims to transfer knowledge extracted from audio-visual behavioral content in social media (e.g., video blogging sites, video discussion sites, video lecture sites) to small-group settings.
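To make the joint learning idea concrete, the following is a minimal, hypothetical sketch (not the project's actual method): two related social constructs, labeled here as "dominance" and "leadership" scores, are regressed from the same nonverbal feature vector, with a coupling penalty that ties the two task weight vectors together so that inference for one construct benefits from the other. All variable names, data, and hyperparameters are illustrative.

```python
# Illustrative multi-task regression with a coupling penalty between
# two related targets. Data is synthetic; feature names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for nonverbal features (e.g., speaking time, gaze).
n, d = 200, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_dom = X @ w_true + 0.1 * rng.normal(size=n)                 # "dominance"
y_lead = X @ (w_true + 0.2 * rng.normal(size=d)) \
         + 0.1 * rng.normal(size=n)                           # related "leadership"

def joint_loss(W, lam=0.1, gamma=1.0):
    """Squared error on both tasks + ridge + coupling between task weights."""
    r1 = X @ W[:, 0] - y_dom
    r2 = X @ W[:, 1] - y_lead
    return (r1 @ r1 + r2 @ r2) / n + lam * np.sum(W**2) \
           + gamma * np.sum((W[:, 0] - W[:, 1]) ** 2)

def grad(W, lam=0.1, gamma=1.0):
    """Gradient of joint_loss with respect to the (d, 2) weight matrix."""
    g = np.empty_like(W)
    g[:, 0] = 2 * X.T @ (X @ W[:, 0] - y_dom) / n
    g[:, 1] = 2 * X.T @ (X @ W[:, 1] - y_lead) / n
    g += 2 * lam * W
    diff = W[:, 0] - W[:, 1]
    g[:, 0] += 2 * gamma * diff
    g[:, 1] -= 2 * gamma * diff
    return g

# Plain gradient descent; the coupling term pulls the two task
# weight vectors toward each other while each fits its own target.
W = np.zeros((d, 2))
for _ in range(500):
    W -= 0.05 * grad(W)

# After training, the joint loss is lower than at the zero initialization.
```

The coupling weight `gamma` controls how strongly the two constructs are assumed to be related: `gamma = 0` reduces to two independent ridge regressions, while large `gamma` forces a single shared predictor for both.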
Application Area: Human-Machine Interaction, Social and Human Behavior
Host Institution: Idiap Research Institute
Funding Agency: Swiss National Science Foundation
Start Date: Nov 01, 2011
End Date: Feb 28, 2015