Demographic Fairness in Multimodal LLMs: A Benchmark of Gender and Ethnicity Bias in Face Verification

Idiap Research Institute, Switzerland
*Equal Contribution

Summary

Multimodal Large Language Models (MLLMs) have recently been explored as face verification systems that determine whether two face images are of the same person. Unlike dedicated face recognition systems, MLLMs approach this task through visual prompting and rely on general visual and reasoning abilities. However, the demographic fairness of these models remains largely unexplored. In this paper, we present a benchmarking study that evaluates nine open-source MLLMs from six model families, ranging from 2B to 8B parameters, on the IJB-C and RFW face verification protocols across four ethnicity groups and two gender groups. We measure verification accuracy with the Equal Error Rate and True Match Rate at multiple operating points per demographic group, and we quantify demographic disparity with four FMR-based fairness metrics. Our results show that FaceLLM-8B, the only face-specialised model in our study, substantially outperforms general-purpose MLLMs on both benchmarks. The bias patterns we observe differ from those commonly reported for traditional face recognition, with different groups being most affected depending on the benchmark and the model. We also note that the most accurate models are not necessarily the fairest and that models with poor overall accuracy can appear fair simply because they produce uniformly high error rates across all demographic groups.
Pipeline overview

Pipeline overview: Face pairs from each demographic group are independently prompted through an MLLM for pairwise verification. The per-pair similarity scores are aggregated into group-level error metrics (FMR/FNMR), which are then compared across demographics to assess fairness.

Face Verification with MLLMs

To evaluate MLLMs for face verification, we provide the MLLM with two face images and a text prompt. In the text prompt, we ask the model to compare the given images and return a similarity score:

prompt

We use the output of the MLLM, normalised to [0, 1], as the similarity score to evaluate the model for face verification.
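As a concrete illustration, the model's free-form textual reply must be mapped to a numeric score. The parsing rule below (extracting the first number and treating values above 1 as percentages) is a hypothetical sketch, not the exact post-processing used in our experiments:

```python
import re

def parse_similarity(reply: str) -> float:
    """Extract the first numeric value from an MLLM reply and
    normalise it to [0, 1]. Hypothetical rule: values above 1
    are assumed to be percentages (e.g. "85" -> 0.85)."""
    match = re.search(r"\d+(?:\.\d+)?", reply)
    if match is None:
        return 0.0  # no score found: treat as a non-match
    score = float(match.group())
    if score > 1.0:
        score /= 100.0
    return min(max(score, 0.0), 1.0)
```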

Experimental Results

We evaluate the demographic fairness of MLLMs on two face verification benchmarks: IJB-C and RFW. The following table reports global and per-group EER together with TMR at three fixed FMR thresholds for both benchmarks. The DET curves in Fig. 2 visualise the full operating characteristic for every model.

Verification performance on IJB-C and RFW
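These per-group error metrics can all be derived from raw genuine and impostor similarity scores. The following is a minimal numpy sketch of EER and TMR at fixed FMR targets; the exact thresholding and interpolation used by the evaluation protocol may differ:

```python
import numpy as np

def eer_and_tmr(genuine, impostor, fmr_targets=(0.10, 0.01, 0.001)):
    """Compute EER and TMR at fixed FMR operating points from
    per-pair similarity scores (illustrative sketch)."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # False match rate and false non-match rate at each threshold
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    # EER: operating point where FMR and FNMR are closest
    i = np.argmin(np.abs(fmr - fnmr))
    eer = (fmr[i] + fnmr[i]) / 2
    # TMR = 1 - FNMR at the threshold closest to each FMR target
    tmr = {f: 1.0 - fnmr[np.argmin(np.abs(fmr - f))] for f in fmr_targets}
    return eer, tmr
```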

The following table reports four FMR-based fairness metrics evaluated at the EER threshold and at three fixed operating points (FMR = 10%, 1%, 0.1%), together with the mean decidability index:

four FMR-based fairness metrics
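For illustration, one simple FMR-based disparity measure compares per-group false match rates at a shared decision threshold. The sketch below is a generic example of this family of metrics, not necessarily one of the four metrics reported in the table:

```python
import numpy as np

def fmr_disparity(impostor_by_group, threshold):
    """Gap between the highest and lowest per-group FMR at a
    shared threshold (one common FMR-based disparity measure).
    `impostor_by_group` maps group name -> impostor scores."""
    fmrs = {g: float((np.asarray(s) >= threshold).mean())
            for g, s in impostor_by_group.items()}
    return max(fmrs.values()) - min(fmrs.values()), fmrs
```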

The following figure shows the genuine and impostor score distributions for each demographic group. FaceLLM-8B shows the clearest separation between the two distributions on both benchmarks, which is consistent with its low EER. Ovis1.5 and Qwen2-VL-2B, on the other hand, have heavily overlapping genuine and impostor distributions, which explains their near-chance accuracy.

genuine and impostor score distributions for each demographic group
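The separation visible in these score distributions is what the mean decidability index summarises. A minimal sketch of the standard d' formula (difference of means over pooled standard deviation):

```python
import numpy as np

def decidability(genuine, impostor):
    """Decidability index d' between genuine and impostor score
    distributions; larger values mean better separation."""
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    return abs(g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2)
```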

Reproducibility: Source Code

[Source Code] The source code of our experiments is publicly available: https://github.com/idiap/mllm-fairness

BibTeX


@article{mllm_fairness_2026,
  author  = {{\"U}nsal {\"O}zt{\"u}rk and Hatef Otroshi Shahreza and S{\'e}bastien Marcel},
  title   = {Demographic Fairness in Multimodal LLMs: A Benchmark of Gender and Ethnicity Bias in Face Verification},
  journal = {arXiv preprint arXiv:2603.25613},
  year    = {2026}
}