FairFaceGPT
Description
Multimodal large language models (MLLMs) have shown remarkable performance in vision-language tasks. However, existing MLLMs are primarily trained on generic datasets, limiting their ability to reason about domain-specific visual cues such as those in facial images. In particular, tasks that require a detailed understanding of facial structure, expression, emotion, and demographic features remain underexplored by MLLMs due to the lack of large-scale annotated face image-text datasets. We propose a weakly supervised pipeline that uses ChatGPT with attribute-aware prompts to generate high-quality question-answer pairs from images in the FairFace dataset. The resulting corpus, called FairFaceGPT, covers a diverse set of attributes including expression, pose, skin texture, and forensic information. We use FairFaceGPT to train FaceLLM, a multimodal large language model specialized for facial image understanding. Our experiments demonstrate that FaceLLM improves the performance of MLLMs on various face-centric tasks and achieves state-of-the-art results. This work highlights the potential of synthetic supervision via language models for building domain-specialized MLLMs, and sets a precedent for trustworthy, human-centric multimodal AI systems.
Project page: https://www.idiap.ch/paper/facellm
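
For illustration, the following is a minimal Python sketch of how an attribute-aware prompting pipeline of this kind could be set up. It assumes the OpenAI chat completions API (openai Python SDK, v1.x) and a FairFace-style annotation CSV with columns such as "file", "age", "gender", and "race"; the prompt wording, model name, and output format are hypothetical and do not reflect the exact configuration used to build FairFaceGPT.

# Illustrative sketch of attribute-aware QA generation; not the authors' exact pipeline.
import csv
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You generate question-answer pairs about a single face image. "
    "Cover expression, pose, skin texture, and demographic attributes. "
    "Return a JSON list of objects with 'question' and 'answer' fields."
)

def attribute_prompt(row: dict) -> str:
    """Fold the FairFace annotations for one image into a text prompt."""
    return (
        f"Image annotations: age group = {row['age']}, "
        f"gender = {row['gender']}, race = {row['race']}. "
        "Generate 5 diverse question-answer pairs consistent with these annotations."
    )

def generate_qa_pairs(row: dict, model: str = "gpt-4o") -> list[dict]:
    """Query the chat model once per annotated image and parse the returned QA list."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": attribute_prompt(row)},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    with open("fairface_label_train.csv", newline="") as f:
        for row in csv.DictReader(f):
            qa_pairs = generate_qa_pairs(row)
            print(row["file"], qa_pairs)

In practice such a pipeline would also attach the image itself (e.g., via a vision-capable model) and validate or filter the generated pairs; the sketch above only shows how dataset attributes can condition the prompt.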
Reference
If you use this dataset, please cite the following publication:
@article{facellm2025,
  title   = {FaceLLM: A Multimodal Large Language Model for Face Understanding},
  author  = {Hatef Otroshi Shahreza and S{\'e}bastien Marcel},
  journal = {arXiv preprint arXiv:2507.10300},
  year    = {2025}
}