FaceLLM: A Multimodal Large Language Model for Face Understanding

Idiap Research Institute

Summary

Multimodal large language models (MLLMs) have shown remarkable performance on vision-language tasks. However, existing MLLMs are primarily trained on generic datasets, limiting their ability to reason about domain-specific visual cues such as those in facial images. In particular, tasks that require a detailed understanding of facial structure, expression, emotion, and demographic features remain underexplored by MLLMs due to the lack of large-scale annotated face image-text datasets. In this work, we introduce FaceLLM, a multimodal large language model trained specifically for facial image understanding. To construct the training data, we propose a novel weakly supervised pipeline that uses ChatGPT with attribute-aware prompts to generate high-quality question-answer pairs based on images from the FairFace dataset. The resulting corpus, called FairFaceGPT, covers a diverse set of attributes including expression, pose, skin texture, and forensic information. Our experiments demonstrate that FaceLLM improves the performance of MLLMs on various face-centric tasks and achieves state-of-the-art performance. This work highlights the potential of synthetic supervision via language models for building domain-specialized MLLMs, and sets a precedent for trustworthy, human-centric multimodal AI systems. The FairFaceGPT dataset and pretrained FaceLLM models will be publicly available soon.
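For illustration, the following is a minimal sketch of the attribute-aware question-answer generation step, using the official openai Python client. The prompt wording, model choice, and FairFace-style attribute fields here are illustrative assumptions, not the exact pipeline from the paper.

  # Minimal sketch of attribute-aware QA generation (illustrative, not the
  # exact pipeline from the paper). Assumes the `openai` Python client and
  # FairFace-style weak labels (age/gender/race) for each image.
  import base64
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def generate_qa_pairs(image_path: str, attributes: dict, n_pairs: int = 5) -> str:
      """Ask a vision-capable chat model for QA pairs about a face image,
      conditioning the prompt on the known (weak) attribute labels."""
      with open(image_path, "rb") as f:
          image_b64 = base64.b64encode(f.read()).decode("utf-8")

      # Attribute-aware prompt: the known labels steer the model toward
      # face-specific questions whose answers stay consistent with the labels.
      prompt = (
          "You are annotating a face image for a visual question answering "
          f"dataset. Known attributes: {attributes}. Generate {n_pairs} diverse "
          "question-answer pairs about expression, pose, skin texture, and other "
          "visible facial properties, answering only from visual evidence."
      )

      response = client.chat.completions.create(
          model="gpt-4o",  # any vision-capable chat model
          messages=[{
              "role": "user",
              "content": [
                  {"type": "text", "text": prompt},
                  {"type": "image_url",
                   "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
              ],
          }],
      )
      return response.choices[0].message.content

  # Example with one FairFace image and its weak labels:
  # qa = generate_qa_pairs("fairface/val/1.jpg",
  #                        {"age": "20-29", "gender": "Female", "race": "East Asian"})

Running such a step over the FairFace images and parsing the returned pairs yields a FairFaceGPT-style corpus of face-centric instruction data.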

Evaluation

The following table compares FaceLLM with other MLLMs on the FaceXBench benchmark. The best-performing model in each category is shown in bold, and the best model among all MLLMs is highlighted in purple.

Comparison with MLLMs

The performance of the FaceLLM models (FaceLLM-1B, FaceLLM-8B, and FaceLLM-38B) is compared in the following figures across different sub-tasks, including age estimation, gender prediction, race estimation, high-resolution face recognition, low-resolution face recognition, celebrity identification, face anti-spoofing, deepfake detection, attribute prediction, facial expression recognition, head pose estimation, face localization, crowd counting, face parsing, and face tools retrieval. A sketch of how such per-task scores can be aggregated is given after the figure below.

Performance of different versions of FaceLLM
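FaceXBench poses its face-understanding questions in a multiple-choice format, so each sub-task score above is an accuracy over that task's questions. The following is a minimal sketch of such per-task aggregation; the record field names and letter-answer format are illustrative assumptions, not the official FaceXBench scoring script.

  # Minimal sketch of per-task accuracy aggregation for a FaceXBench-style
  # multiple-choice evaluation. Field names ("task", "prediction", "answer")
  # and the letter-answer format are assumptions, not the official script.
  from collections import defaultdict

  def per_task_accuracy(results: list[dict]) -> dict[str, float]:
      """results: [{"task": ..., "prediction": "A", "answer": "B"}, ...]"""
      correct, total = defaultdict(int), defaultdict(int)
      for r in results:
          total[r["task"]] += 1
          correct[r["task"]] += int(r["prediction"].strip().upper()
                                    == r["answer"].strip().upper())
      return {task: correct[task] / total[task] for task in total}

  # Example:
  # per_task_accuracy([
  #     {"task": "age estimation", "prediction": "B", "answer": "B"},
  #     {"task": "deepfake detection", "prediction": "A", "answer": "C"},
  # ])  # -> {"age estimation": 1.0, "deepfake detection": 0.0}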

Reproducibility: Source Code, Models, and Dataset

The source code of our experiments and the pretrained FaceLLM models are publicly available. We also release the FairFaceGPT dataset.
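As a starting point, the following is a minimal usage sketch assuming the released checkpoints follow the standard Hugging Face transformers loading convention; the repository id idiap/FaceLLM-8B is a hypothetical placeholder, not a confirmed name.

  # Minimal loading sketch; "idiap/FaceLLM-8B" is a hypothetical placeholder
  # repo id, and the checkpoints are assumed to follow the standard Hugging
  # Face `transformers` convention with custom multimodal code.
  import torch
  from transformers import AutoModel, AutoTokenizer

  repo_id = "idiap/FaceLLM-8B"  # hypothetical placeholder
  tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
  model = AutoModel.from_pretrained(
      repo_id,
      torch_dtype=torch.bfloat16,
      trust_remote_code=True,  # custom multimodal code ships with the checkpoint
  ).eval()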

BibTeX


  @article{facellm2025,
    author    = {Hatef Otroshi Shahreza and S{\'e}bastien Marcel},
    title     = {FaceLLM: A Multimodal Large Language Model for Face Understanding},
    journal   = {arXiv preprint arXiv:2507.10300},
    year      = {2025}
  }