SBA 2011
IEEE International Workshop on Social Behavior Analysis 2011

Santa Barbara, CA, 21 March 2011
in conjunction with FG 2011
Workshop program is announced!


This workshop aims to bring together two related areas of research: social behavior analysis and face and gesture recognition. It gathers researchers from both fields and provides an environment in which to present, discuss, and bridge some of the existing gaps between them. Most current work on social behavior analysis largely ignores the possibilities opened up by recent advances in automatic visual information processing. At the same time, most work on face and gesture recognition focuses on clearly defined gestures or acts in mainly controlled settings, for applications such as biometrics or human-computer interaction; it is often not robust enough for natural environments, or lacks a basic understanding of the social domain and its relevant dimensions.

Workshop Program


08:50 - 09:00 Opening remarks
09:00 - 10:00 Invited talk: Javier R. Movellan, University of California San Diego
                  Optimal Control Approaches to the Analysis and Synthesis of Social Behavior
10:00 - 10:30 Coffee break
10:30 - 12:00 Paper presentations
10:30 - 10:55 Discovering Social Interactions in Real Work Environments
                  Chih-Wei Chen, Rodrigo Cilla Ugarte, Chen Wu, Hamid Aghajan
10:55 - 11:20 Visualisation and Prediction of Conversation Interest through Mined Social Signals
                  Dumebi Okwechime, Eng-Jon Ong, Andrew Gilbert, Richard Bowden
11:20 - 11:45 Predicting Dominance Judgments Automatically: A Machine Learning Approach
                  Mario Rojas, David Masip, Jordi Vitria
11:45 - 12:00 Discussion
12:00 - 14:00 Lunch break
14:00 - 15:00 Invited talk: James J. Blascovich, University of California Santa Barbara
                  Social influence in virtual reality
15:00 - 15:30 Coffee break
15:30 - 16:20 Paper presentations: Works in progress
15:30 - 15:55 Building up child-robot relationship for therapeutic purposes
                  Marta Díaz, Neus Nuño, Joan Sàez, Diego Pardo, Cecilio Angulo
15:55 - 16:20 Extraction of relations between behaviors by lecturer and students in lectures
                  Eiji Watanabe, Takashi Ozeki, Takeshi Kohama
16:20 - 17:00 Final Discussion
17:00 - 17:10 Conclusions and closing of the workshop

Invited Talks

  • Javier R. Movellan, University of California San Diego
  • 21 March 2011, 09:00 - 10:00
    Optimal Control Approaches to the Analysis and Synthesis of Social Behavior
    Abstract: It is remarkable how little the behavioral and cognitive sciences have contributed to the understanding and synthesis of intelligent behavior in robots. I propose that a key for progress is the rigorous computational analysis of the problems that organisms solve when operating in the world. I will illustrate how this research agenda may proceed using the tools of stochastic optimal control. These tools have traditionally been applied to engineering problems: maintaining a motor's velocity under variable loads, regulating a room's temperature, and making smart weapons. I will show how the same approach can be used to understand social development in infants and to develop sociable robots. The long-term goal of the talk is to illustrate how stochastic optimal control may provide a mathematical foundation for an emerging area of computer science and engineering that focuses on the computational understanding of human behavior, and on its synthesis in robots.

  • James J. Blascovich, University of California Santa Barbara
  • 21 March 2011, 14:00 - 15:00
    Social influence in virtual reality
    Abstract: Arguably, the notion of virtual reality is as old as humanity itself. Humans seem particularly predisposed to travel psychologically between physical and virtual reality, and have invented many exogenous, media-based ways to do so. The latest advance in media appears to be immersive VR technology. In terms of social interaction and social influence between avatars, as well as between avatars and agents, it is necessary to identify a viable structural model of social influence within virtual environments. Such a model is delineated and supporting research is described.

    Important Dates

    • Paper submission (extended)     Dec 20, 2010  (mon)
    • Notification to the authors     Jan 13, 2011  (thu)
    • Receipt of camera-ready copy    Jan 27, 2011  (thu)

    Call for papers

    There is a strong interest in fields like computer vision, audio processing, multimedia, HCI, and pervasive computing, in designing computational models of human interaction in realistic social settings. Such interest is boosted by the increasing capacity to acquire behavioral data with cameras, microphones and other fixed and mobile sensors. Unlike the traditional HCI view, which emphasizes communication between a person and a computer, the emphasis of an emerging body of research has been shifting towards communicative social behavior in natural situations, with examples such as informal conversational settings, general workplace environments, interviews, and meeting scenarios.

    The analysis of visual behavioral information in social contexts, including hand gestures, head gestures, body motion, and facial expression, is a relatively recent research area. Despite the progress in computer vision on analyzing structured gestures (e.g., hand gesture recognition, sign language recognition, gait recognition), more accurate models of visual nonverbal communication remain largely unexplored. The main challenges are the lack of clearly defined gestures for visual nonverbal cues, and the lack of robustness and scalability of existing systems with respect to the requirements of realistic scenarios, as opposed to controlled laboratory settings. Contrary to the natural conversational environments in which social interaction occurs, many existing algorithms in face and gesture analysis require controlled environments.

    The workshop will gather, discuss, and disseminate unpublished work on computational models and systems for the analysis of social behavior. Given the scope of the Automatic Face and Gesture Recognition conference, we would like to focus on automatic techniques for the visual analysis of human communication and on the applications built on top of them. We welcome contributions that present robust techniques for the analysis of gestures and facial expressions in natural conversational environments, in order to model social behavior in everyday life and reason about it. We also strongly encourage the participation of colleagues from the behavioral sciences: studies of nonverbal behavior and social interaction provide highly valuable information, concepts, and frameworks to guide automatic analysis, while efforts in automatic analysis of social behavior provide new tools, data, and insights to behavioral scientists interested in nonverbal behavior and social interaction.



    We invite contributions that address the following (non-exhaustive) list of topics:

    Social behavior analysis

    • Analysis and recognition of visual and other social cues:
      • Visual nonverbal cues (body postures, hand gestures, head gestures, actions ...)
      • Multimodal affect recognition
      • Nonverbal cues from other sensors
    • Multimodal computational models for the analysis, estimation, and prediction of social behavior aspects and dimensions (interest level, dominance, rapport, deception...) and of individual properties affecting it (e.g., personality traits, preferences...)
    • Analysis of conversational dynamics
    • Multimodal data corpora for social behavior analysis

    Systems and devices for capturing social behavior

    • Smart camera/microphone systems
    • Novel sensor technologies
    • Wearable devices
    • Cell phones

    Socially aware systems and applications

    • Computers and robots in the human interaction loop
    • Individual and group self-awareness
    • Educational applications
    • Workplace applications
    • Healthcare applications
    • Game applications
    • Art & creative applications


    Paper Submission

    We accept only previously unpublished work, in the six-page, two-column IEEE paper format. Papers will be evaluated in terms of originality, relevance to the workshop, technical correctness, and clarity of presentation.

    Manuscripts should follow the IEEE specification. Templates for Word and LaTeX, along with general information, can be found on the IEEE Manuscript Templates for Conference Proceedings page.

    The submission and review process is handled online via the Microsoft CMT system.

    Final Manuscript Submission Guidelines

  • All submissions must be verified using IEEE PDF eXpress to ensure compatibility. Step-by-step instructions on how to use IEEE PDF eXpress can be found here.
  • All submissions must be accompanied by an IEEE copyright transfer form, available on the SBA submission site. Please use the account name and password you used to submit your paper for review.

    Click to access SBA 2011 camera ready paper submission site.


    Workshop Co-Chairs


    Program Committee

    • Hamid Aghajan, Stanford University, US
    • Lale Akarun, Bogazici University, Turkey
    • Alice Caplier, INPG-GIPSA lab, France
    • Ginevra Castellano, Queen Mary University of London, UK
    • Tanzeem Choudhury, Dartmouth College, US
    • Marco Cristani, University of Verona, Italy
    • Wen Dong, MIT, US
    • Hazim Ekenel, Universität Karlsruhe, Germany
    • Emile Hendriks, Delft University of Technology, Netherlands
    • Dirk Heylen, University of Twente, Netherlands
    • Hayley Hung, University of Amsterdam, Netherlands
    • Irene Kotsia, Queen Mary University of London, UK
    • Bruno Lepri, University Of Trento, Italy
    • Stephane Marchand-Maillet, University of Geneva, Switzerland
    • Jean-Marc Odobez, Idiap Research Institute, Switzerland
    • Kazuhiro Otsuka, NTT, Japan
    • Konstantinos Moustakas, ITI/CERTH, Greece
    • Alex Pentland, MIT, US
    • Bogdan Raducanu, Computer Vision Center, Spain
    • Albert Ali Salah, University of Amsterdam, Netherlands
    • Björn Schuller, Technische Universität München, Germany
    • Nicu Sebe, University of Trento, Italy
    • Matthew Turk, University of California, Santa Barbara, US
    • Alessandro Vinciarelli, University of Glasgow, UK
    • Jordi Vitria, University of Barcelona, Spain