
List of abstracts

Abstract list in PDF format.

Keynote speakers

Daniel Olguín Olguín

Title TBA

Abstract TBA

Matthias Mehl

The sounds of social life: Toward a psychological construct validation of everyday human behavior

This talk presents a novel real-world, ambulatory assessment method called the Electronically Activated Recorder (EAR). The EAR is a portable audio recorder that periodically records snippets of ambient sounds from participants' momentary environments. By tracking moment-to-moment ambient sounds, it yields acoustic logs of people's days as they naturally unfold; by sampling only a fraction of the time, it protects participants' privacy. As a naturalistic observation method, it provides an observer's account of daily life and is optimized for the assessment of audible aspects of social environments, behaviors, and interactions. The talk discusses the EAR method conceptually and methodologically, summarizes the current state of EAR research on how everyday human behavior relates to personality, gender, culture, and well-being, and considers implications of using smartphones for social sensing.
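
To make the duty-cycled sampling idea concrete, here is a minimal sketch; the snippet length, sampling interval, and use of the Python sounddevice library are illustrative assumptions, not the EAR's actual design.

```python
# Minimal sketch of EAR-style duty-cycled ambient-audio sampling.
# Assumes the third-party `sounddevice` library and a working microphone;
# all timing parameters below are hypothetical.
import time
import sounddevice as sd
from scipy.io import wavfile

SAMPLE_RATE = 16000      # Hz
SNIPPET_SEC = 30         # record a 30-second ambient snippet...
INTERVAL_SEC = 12 * 60   # ...roughly every 12 minutes (invented values)

def record_snippet(path: str) -> None:
    """Record one ambient-sound snippet and save it as a WAV file."""
    audio = sd.rec(int(SNIPPET_SEC * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="int16")
    sd.wait()  # block until the snippet is fully recorded
    wavfile.write(path, SAMPLE_RATE, audio)

if __name__ == "__main__":
    for i in range(5):  # short demo run; the EAR runs for days
        record_snippet(f"snippet_{i:03d}.wav")
        time.sleep(INTERVAL_SEC)  # sampling only a fraction of the time
```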

Fabio Pianesi

Searching for Personality

It is customary for us to describe people as being more or less talkative, bold, or sociable; more or less angry or vulnerable to stress; more or less planful or behaviorally controlled; more or less self-determined or influenced by external situations. We all inconspicuously exploit these descriptors in everyday life to explain and predict people's behavior, attaching them to well-known as well as to new acquaintances. In general, the attribution of stable personality characteristics to others, and their use to predict and explain behavior, is a fundamental characteristic of our naïve psychology. As agents that participate in, and affect, the lives of humans in increasingly many and varied ways, computers too need to explain and predict their human partners' behavior, e.g., by deploying some kind of naïve folk psychology in which an understanding of people's personality can reasonably be expected to play a role. In this work, we address some of the issues raised by attempts to endow machines with the capability of predicting people's personality.

SONVB speakers

Denise Frauendorfer & Laurent Nguyen

Analysis of nonverbal behavior in the employment interview and job situation

In job interviews, recruiters try to select the best-performing applicant based on the face-to-face interaction. Research has shown that in such a zero-acquaintance situation, the nonverbal behavior of both protagonists has a remarkable impact on the interview outcome. However, little is known about which combinations of nonverbal cues predict the applicant's personality, hirability, and job performance, or about the validity of such cues. We collected a dataset of 60 participants in which we simulated the life cycle of an employee in an organization. First, we recorded applicants during the job interview with fixed sensors (video, audio, and Kinect). Second, we recorded rich cellphone data of the participants while they performed their job. Two approaches were used to investigate the research questions at hand. A computational method was developed to predict the hiring decision and applicant personality in job interviews. Moreover, a lens model approach was used to validate the applicants' nonverbal cues. An outlook on future research within the framework of this project will be provided.
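
As an illustration of the computational approach, the following sketch trains a standard classifier to predict a hiring decision from aggregated nonverbal cues; the features, labels, and data are synthetic stand-ins, not the study's actual variables.

```python
# Hypothetical sketch: predict a binary hiring decision from aggregated
# nonverbal cues with a standard linear classifier (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows: applicants; columns: invented stand-ins for cues such as
# speaking time, speaking turns, gaze at recruiter, head nods.
X = rng.normal(size=(60, 4))
y = rng.integers(0, 2, size=60)  # 1 = hired, 0 = not hired (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```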

Dairazalia Sanchez-Cortes

Analyzing the Emergence of Leaders in the ELEA Corpus

The study of organizational phenomena such as the emergence of leadership is becoming relevant in social computing, as gathering data from portable sensors in natural daily-life scenarios becomes more common. An emergent leader is a person who arises from a group and derives her/his power from followers rather than from a high-status position. My talk summarizes our experience in designing and collecting the ELEA corpus, whose aim is to enable multimodal analysis of emergent leadership in small groups. As of today, our results in predicting emergent leaders from automatically extracted audio and visual features seem promising.

Mashfiqui Rabbi

StressSense: Detecting Stress in Unconstrained Acoustic Environments using Smartphones

Stress can have long-term adverse effects on individuals' physical and mental well-being. Changes in the speech production process are one of many physiological changes that happen under stress. Microphones, embedded in mobile phones and carried ubiquitously by people, provide the opportunity to continuously and non-invasively monitor stress in real-life situations. We propose StressSense for unobtrusively recognizing stress from the human voice using smartphones. We investigate methods for adapting a one-size-fits-all stress model to individual speakers and scenarios. We demonstrate that the StressSense classifier can robustly identify stress across multiple individuals in diverse acoustic environments: using model adaptation, StressSense achieves 81% and 76% accuracy in indoor and outdoor environments, respectively. We show that StressSense can be implemented on commodity Android phones and run in real time. To the best of our knowledge, StressSense represents the first system to consider voice-based stress detection and model adaptation in diverse real-life conversational situations using smartphones.
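
A minimal sketch of the adaptation idea, assuming a generic incremental classifier (scikit-learn's SGDClassifier) rather than StressSense's actual models and acoustic features: a universal model is first trained on many speakers and then updated with a few labeled examples from the target speaker.

```python
# Sketch of "one-size-fits-all -> speaker-adapted" classification.
# Features and labels are synthetic; the real system uses acoustic
# features of speech produced under stress.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X_universal = rng.normal(size=(1000, 20))    # many speakers' voice features
y_universal = rng.integers(0, 2, size=1000)  # 1 = stressed, 0 = neutral
X_target = rng.normal(size=(30, 20))         # few labeled clips of one user
y_target = rng.integers(0, 2, size=30)

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X_universal, y_universal, classes=[0, 1])  # universal model
for _ in range(10):                        # adapt with the target user's data
    clf.partial_fit(X_target, y_target)
```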

Invited speakers

Alvaro Marcos

Unsupervised body communicative cue extraction for conversational analysis

Nonverbal communication plays an important role in many aspects of our lives, such as in job interviews, where face-to-face conversations take place. We are working on a method to automatically detect body communicative cues from video sequences of the upper body of individuals in a conversational context. We explicitly address the recognition of visual activity in a seated, conversational setting from monocular video. We first detect the person's hands in the sequence, then infer the approximate 3D upper-body pose to perform action recognition.

Dinesh Jayagopi

Mining Speaking and Looking Behavior Patterns and Linking Them with Group Composition, Perception, and Performance

This talk addresses the task of mining typical behavioral patterns from small-group face-to-face interactions and linking them to social-psychological group variables. Towards this goal, we define group speaking and looking cues by aggregating automatically extracted cues at the individual and dyadic levels. Then, we define a bag of nonverbal patterns (Bag-of-NVPs) to discretize the group cues. The topics learnt with the Latent Dirichlet Allocation (LDA) topic model are then interpreted by studying their correlations with group variables such as group composition, group interpersonal perception, and group performance. Our results show that both group behavior cues and topics have significant correlations with (and predictive information for) all the above variables. For our study, we use interactions among unacquainted members, i.e., newly formed groups.
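
A hedged sketch of the Bag-of-NVPs pipeline: discretized group cues are treated as word counts and topics are learned with LDA. The cue vocabulary and counts below are invented for illustration; only the overall scheme follows the abstract.

```python
# Sketch: treat each meeting as a "document" of counted nonverbal
# patterns and learn topics with LDA (synthetic counts).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(2)
# Each row is one meeting; each column counts one discretized pattern,
# e.g. "long overlapping speech" or "short turns + mutual gaze" (invented).
nvp_counts = rng.poisson(lam=3.0, size=(40, 12))

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(nvp_counts)  # per-meeting topic proportions
# theta can then be correlated with group composition, perception,
# and performance variables, as described in the talk.
print(theta[:5].round(2))
```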

Oya Aran

Domain adaptation approaches for personality prediction

In this presentation, I will discuss several approaches that enable transferring knowledge learned from social media data to small-group settings. Unlike the limited amount of small-group interaction data, which is mainly collected in controlled, experimental settings, social media sites provide a vast amount of data on natural human behavior. The aim is to build better computational models for small-group settings, where data is limited, by making use of this vast amount of social media data. In particular, I will present results of our experiments on personality prediction in two different domains. The source domain is the web video, or vlog, domain, in which people record themselves in a monologue-like fashion and post the videos on the internet. The target domain is a small-group setting in which 3-4 people interact in a meeting. Formulating the problem as a classification task and using visual nonverbal features extracted from video, the results show that even a small amount of annotated data from the target domain, used together with larger source-domain data, yields a significant increase in the accuracy of personality prediction.
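
One simple transfer strategy consistent with the abstract is to pool the abundant source-domain data with the few labeled target-domain examples, up-weighting the latter; the sketch below uses synthetic data and is an assumption about the setup, not the authors' exact method.

```python
# Sketch: pool vlog (source) and meeting (target) data, trusting the
# scarce target examples more via sample weights (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_src, y_src = rng.normal(size=(400, 10)), rng.integers(0, 2, size=400)
X_tgt, y_tgt = rng.normal(size=(20, 10)), rng.integers(0, 2, size=20)

X = np.vstack([X_src, X_tgt])
y = np.concatenate([y_src, y_tgt])
w = np.concatenate([np.full(400, 1.0), np.full(20, 10.0)])  # weight target up

clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
print(clf.score(X_tgt, y_tgt))  # evaluate on the target domain
```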

Thom de Vries

Investigating multiteam systems: The case of a rail-network control center in the Netherlands

Large-scale transport networks, military operations, crisis events, and new product developments all require the coordinated actions of teams of teams, or “multiteam systems”. Multiteam systems include two or more component teams, each with distinct areas of core functional expertise, that work interdependently towards collective goals. As such, multiteam systems bring together a complex variety of skills, knowledge, and functions in adaptive, team-based structures and are especially suited to accomplishing significant, highly complex tasks. At the same time, their dynamic and complex nature often inhibits extensive empirical research on multiteam systems in real-life crisis situations. We present a research program of the University of Groningen that aims to explore the functioning of multiteam systems in field settings using innovative research methods.

Guillaume Chanel

EATMI: Emotional Awareness Tools for Mediated Interaction

In collaborative face-to-face situations, people rely on a whole set of explicit and implicit mechanisms to adapt to their interaction partners and the communication situation. In computer-mediated collaboration (CMC), contextual nonverbal cues - such as facial expressions, voice intonation, head movements, and eye gaze - are missing or seriously limited. Awareness of others may therefore be impaired, which may lead to inefficient interactions. There is thus a need to develop tools able to improve emotional awareness. Automatically estimating the affective states of computer users is part of the "affective computing" discipline, whose main goal is to design computer interfaces able to express, detect, and react to users' emotions.

Such emotional adaptation strategies are thus of high interest for augmenting the computer-mediated communication loop and improving emotional awareness. In this talk, the current state of the project "Affective Computing and Emotion Awareness in Computer-Mediated Interaction" will be presented. This includes the development of two emotion awareness tools and a highly multimodal data collection with 30 participants interacting with or without these tools.

Kenneth Funes

Gaze estimation from RGB-D Cameras

In this work, we address the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in 3D space. In this context, our work makes the following contributions: (i) leveraging the Kinect device, we propose a multimodal method that relies on depth sensing to obtain robust and accurate head pose tracking even under large head poses, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme for the eye image that exploits the 3D mesh tracking, allowing eye-in-head gaze direction estimation that is independent of head pose; (iii) a simple way of collecting ground-truth data thanks to the Kinect device. Results on different users demonstrate the great potential of our approach.
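
The decomposition behind contributions (i) and (ii) can be made concrete with a little geometry: the world-frame gaze direction is the eye-in-head gaze vector rotated by the tracked head pose. A minimal sketch with illustrative values only:

```python
# Combine a tracked head pose with an eye-in-head gaze direction.
import numpy as np

def gaze_world(R_head: np.ndarray, g_eye_in_head: np.ndarray) -> np.ndarray:
    """Rotate an eye-in-head gaze direction into the world frame."""
    g = R_head @ g_eye_in_head
    return g / np.linalg.norm(g)

# Example: head yawed 30 degrees, eyes looking straight ahead in the head frame.
yaw = np.deg2rad(30.0)
R_head = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                   [0.0,         1.0, 0.0],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])
print(gaze_world(R_head, np.array([0.0, 0.0, 1.0])))
```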

Sebastian Feese

Towards monitoring firefighting teams with wearable sensors

We envision the use of wearable sensors to monitor firefighting teams in order to improve post-incident feedback and, eventually, team performance. As a first step, we evaluate how well physical and speech activity can be measured with a smartphone in the context of firefighting. We present results of a feasibility study in which two firefighting teams had to extinguish a real kitchen fire, and we explore whether their differing performance can be partially explained by the measured sensor data.
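
As a rough illustration of the kind of physical-activity measure a smartphone can provide, the sketch below computes the windowed standard deviation of accelerometer magnitude and thresholds it into active/idle; the sampling rate, window length, threshold, and data are all invented, not the study's actual processing.

```python
# Sketch: per-window variability of accelerometer magnitude as a crude
# physical-activity measure (synthetic data, invented threshold).
import numpy as np

def activity_level(acc: np.ndarray, fs: int = 50, win_sec: float = 2.0):
    """Per-window std of acceleration magnitude (m/s^2)."""
    mag = np.linalg.norm(acc, axis=1)
    win = int(fs * win_sec)
    n = len(mag) // win
    return mag[: n * win].reshape(n, win).std(axis=1)

rng = np.random.default_rng(4)
acc = rng.normal(0, 1, size=(50 * 60, 3)) + np.array([0, 0, 9.81])  # 1 minute
levels = activity_level(acc)
print((levels > 1.2).mean())  # fraction of "active" windows
```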

Bertolt Meyer

A social-signal processing view on leadership: Specific communication behaviors characterize considerate leaders

Combining leadership research with findings from small group research and clinical psychology, we propose that three specific observable interaction behaviors of leaders are partially responsible for the positive effects of considerate leadership on team performance: question asking, reciprocity, and behavioral mimicry, with the latter serving as a marker for rapport and empathy. In a laboratory experiment involving 55 three-person groups that worked on a simulated personnel-selection task, we manipulated the leader's leadership style as either considerate or inconsiderate. The number of questions asked by the leader in the subsequent team interaction was obtained through behavioral coding, and communication reciprocity and behavioral mimicry were measured through social signal processing with computer-based voice analysis and motion tracking. In partial support of the hypotheses, leaders' question asking and communicative reciprocity fully mediated the effect of the leadership manipulation on team performance. Leaders' behavioral mimicry predicted subordinates' ratings of the leader on the individualized-consideration subscale of the MLQ questionnaire, but not team performance. Results imply that leaders' verbal communication is linked to team performance, while their nonverbal communication is linked to their evaluation. As a practical implication, leaders might be able to increase team effectiveness by exhibiting certain specific interaction behaviors.
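
For readers unfamiliar with the mediation logic referenced above (manipulation -> question asking -> performance), a minimal sketch follows; the data and effect sizes are synthetic, and real analyses would typically use bootstrapped indirect effects rather than this bare two-regression check.

```python
# Sketch of a simple mediation check on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.integers(0, 2, size=55).astype(float)  # 1 = considerate condition
m = 2.0 * x + rng.normal(size=55)              # mediator: questions asked
y = 1.5 * m + rng.normal(size=55)              # outcome: team performance

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]  # path x -> m
b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # m -> y | x
print(f"indirect effect a*b ~ {a * b:.2f}")
```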
