Automatic Detection of Leadership from Voice and Body

This proposal requests funding for a first year of research as a concrete step towards creating the Center for Leadership and New Technologies (Unil, Idiap/EPFL, IMD). The Center we aim to build in the long term will bring AI and virtual reality, among other technologies, to bear on leadership. It will develop tools for assessing and developing leadership, conduct research on new technologies related to leadership, and showcase our developments and empirical results to the corporate world (e.g., writing white papers, organizing symposia and conferences). Ideally, firms would turn to the Center for advice, training, and thought leadership on the topic of new technologies and leadership. IMD will be crucial in creating the link with companies and will be able to use the new technologies in its teaching and training.

A first concrete project, for which we request seed funding from the Trans4 consortium, concerns the development of a collection of software modules that automatically detect perceived leadership from videotaped speeches using voice and body-language information. We will train algorithms to infer perceived leadership (e.g., trustworthiness, competence as a strategic leader, competence as a transformational leader) from vocal cues and body language automatically extracted by the machine. The algorithms will be trained on ground-truth data that we will collect from a panel of evaluators (e.g., MTurk workers) rating either self-presentation videos (e.g., video CVs on YouTube) or public-speaking videos (e.g., TED Talks). Given that the quality of the algorithm depends on the quality of the training data (i.e., the ground truth), we will put extra care and effort into producing this training data.
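Conceptually, the inference step described above maps automatically extracted nonverbal cues to crowd-annotated leadership ratings. The following is a minimal sketch of that idea using a simple ridge-regression model on simulated data; the cue names, rating scale, and weights are illustrative placeholders and not the project's actual features or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cue features per video, e.g. [pitch variability,
# speaking rate, gesture frequency] -- placeholders, not real measurements.
n_videos, n_cues = 200, 3
X = rng.normal(size=(n_videos, n_cues))

# Simulated mean leadership rating per video from a panel of annotators,
# generated from assumed cue weights plus annotator noise.
true_w = np.array([0.8, 0.5, -0.3])
y = 4.0 + X @ true_w + rng.normal(scale=0.2, size=n_videos)


def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression with an unpenalized intercept."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    I = np.eye(Xb.shape[1])
    I[0, 0] = 0.0  # do not shrink the intercept
    return np.linalg.solve(Xb.T @ Xb + lam * I, Xb.T @ y)


w = fit_ridge(X, y)  # w[0] is the intercept, w[1:] the cue weights
preds = np.hstack([np.ones((len(X), 1)), X]) @ w
```

In the actual project, the synthetic `X` would be replaced by features extracted from audio and video, `y` by the panel's ratings, and the linear model by whatever learner the data support; the sketch only illustrates the supervised-learning structure of the task.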
The resulting software modules can then be used for leadership skill assessment as well as for leadership training and development. They constitute a stand-alone outcome, but at the same time they can be incorporated into the Charismometer algorithm that John and Philip have already developed, and added to Daniel and Marianne's past work on the automatic extraction of nonverbal behavior from video. Basing the new development on existing work ensures that we do not start from scratch and can achieve the goal within one year of funding. The seed-money project is thus both a continuation of existing work and an important extension of it.
University of Lausanne
École Polytechnique Fédérale de Lausanne, Idiap Research Institute, IMD Switzerland
University of Lausanne
Jun 01, 2020
May 31, 2021