Digital Phenotyping of Autism Spectrum Disorders in Children

Today, 1 in 59 children is diagnosed with autism spectrum disorders (ASD), making this condition one of the most prevalent neurodevelopmental disorders. This project is grounded in two observations. On the one hand, early diagnosis of autism at scale in young children requires digital phenotyping and automated screening tools based on computer vision and Internet of Things (IoT) sensing. On the other hand, current gold-standard approaches in autism are not designed to provide precise quantitative estimates of ASD symptoms in children. We therefore aim to examine the potential of digital sensing to provide automated measures of the extended autism phenotype, with the goal of stratifying autism subtypes in ways that enable precision medicine. Recent developments in digital sensing, big data, and machine learning have opened unprecedented opportunities for seamless sensing of body movement, capture of social scenes, and measurement of object manipulation. Together, these tools are key to modeling social interactions and offer avenues for both improved screening and fine-grained characterization of autistic symptoms in young children. Despite considerable efforts to automate behavioral analysis, most studies in ASD digital phenotyping have relied on modest sample sizes, used mono-modal approaches, focused on eliciting very specific behaviors through largely controlled prompts, and suffered from technical difficulties in behavior sensing (viewpoints, child populations, image resolution for gaze estimation). To address these limitations, we propose an interdisciplinary project that combines the expertise of clinical researchers, engineers, and computational social scientists to tackle these clinical, scientific, and technical challenges.
The project is grounded in the Geneva Autism Cohort, consisting of young children with ASD and their age-matched typically developing peers, extensively characterized with gold-standard clinical and cognitive assessments as well as neuroscience tools. Our preliminary results demonstrate that, given a substantial dataset, it is feasible to train a deep neural network directly on a global scene representation (people's poses) to predict ASD with above 80% accuracy.

This Sinergia proposal aims to take a major leap forward by investigating three key research directions. First, from a clinical research perspective, we will design digital tools for screening and automated profiling of the autism phenotype. We will test these tools both in a structured setting with a well-established clinical protocol and in a less structured environment (free play in day-care centers). Second, using Internet of Things (IoT) sensors, we will investigate the motor skills of very young children through the integration of inertial data and low-cost ultra-wideband (UWB) indoor localization data; in addition, we will develop a solution for the longitudinal monitoring of fine-grained motor skill development. Third, our project is rooted in modern computational perception and machine learning: we will investigate novel deep learning and computer vision techniques by leveraging the availability of large behavioral and clinical annotation data. At the core of this effort, we will develop multimodal machine-learning methods and models for analyzing the motor and gaze coordination patterns that are central to ASD, and for ASD diagnosis and profiling, with a focus on interpretable models.
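To make the pose-based prediction pipeline concrete, the sketch below trains a classifier on flattened body-pose sequences. Everything here is illustrative and assumed, not the project's actual setup: the data are synthetic, the labels are invented, and a plain logistic-regression model stands in for the deep neural network mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: N video clips, T frames each, J tracked body joints
# with (x, y) coordinates per joint. Shapes and counts are made up.
N, T, J = 200, 10, 17
X = rng.normal(size=(N, T, J, 2))       # synthetic pose coordinates
y = rng.integers(0, 2, size=N)          # synthetic binary labels (ASD / TD)

# Inject a simple class-dependent offset so the toy task is learnable.
X[y == 1] += 0.5

# Flatten each pose sequence into one feature vector (the "global scene
# representation" would be richer in practice).
feats = X.reshape(N, -1)

# Plain logistic regression trained with batch gradient descent.
w = np.zeros(feats.shape[1])
b = 0.0
for _ in range(300):
    z = np.clip(feats @ w + b, -30.0, 30.0)   # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))              # predicted probabilities
    grad = p - y                              # cross-entropy gradient
    w -= 0.1 * feats.T @ grad / N
    b -= 0.1 * grad.mean()

acc = float((((feats @ w + b) > 0).astype(int) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

On this synthetic, strongly separable data the linear model reaches high training accuracy; the real task is far harder, which is why the project relies on deep models and multimodal features.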
University of Geneva
Idiap Research Institute, University of Applied Sciences and Arts of Southern Switzerland (SUPSI)
Swiss National Science Foundation
Nov 01, 2021
Oct 31, 2025