
Idiap researchers involved in the organisation of the next International Conference on Multimodal Interaction (ICMI2011).


Five of our researchers are involved in the organisation of the next International Conference on Multimodal Interaction (ICMI 2011): Hervé Bourlard (General Chair, Advisory Board), Daniel Gatica-Perez (Program Chair), Jean-Marc Odobez (Area Chair), Alessandro Vinciarelli (Area Chair), and Andrei Popescu-Belis (Advisory Board). The conference will take place November 14-18, 2011, in Alicante, Spain: http://www.acm.org/icmi/2011.

This year, the International Conference on Multimodal Interfaces (ICMI) and the Workshop on Machine Learning for Multimodal Interaction (MLMI) are combined to form the new ICMI, which continues to be the premier international forum where multimodal signal processing and multimedia human-computer interaction are presented and discussed. The conference will focus on theoretical and empirical foundations, varied component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development. ICMI 2011 will feature a single-track main conference which includes:
* keynote speakers
* technical full and short papers (including oral and poster presentations)
* special sessions
* demonstrations
* exhibits
* doctoral spotlight papers
The main conference will be held November 14-16, 2011 and followed by a 2-day workshop.

TOPICS OF INTEREST include but are not limited to:
* Multimodal and multimedia interactive processing: multimodal fusion, multimodal output generation, multimodal interactive discourse and dialogue modeling, machine learning methods for multimodal interaction.
* Multimodal input and output interfaces: gaze and vision-based interfaces, speech and conversational interfaces, pen-based and haptic interfaces, virtual/augmented reality interfaces, biometric interfaces, adaptive multimodal interfaces, natural user interfaces, authoring techniques, architectures.
* Multimodal and interactive applications: mobile and ubiquitous interfaces, meeting analysis and meeting spaces, interfaces to media content and entertainment, human-robot interfaces and interaction, audio/speech and vision interfaces for gaming, multimodal interaction issues in telepresence, vehicular applications and navigational aids, interfaces for intelligent environments, universal access and assistive computing, multimodal indexing, structuring and summarization.
* Human interaction analysis and modeling: modeling and analysis of multimodal human-human communication, audio-visual perception of human interaction, analysis and modeling of verbal and nonverbal interaction, cognitive modeling.
* Multimodal and interactive data, evaluation, and standards: evaluation techniques and methodologies, annotation and browsing of multimodal and interactive data, standards for multimodal interactive interfaces.
* Core enabling technologies: pattern recognition, machine learning, computer vision, speech recognition, gesture recognition.

Sep 18, 2015