Synthetic Face Datasets Generation via Latent Space Exploration from Brownian Identity Diffusion

1Idiap Research Institute, 2EPFL, 3UNIL

ICML 2025
sample reconstructed face image

Example of synthetic faces generated using StyleGAN2. The three rows are three different classes generated with the Langevin algorithm while the columns show intra-class variations generated using the Dispersion algorithm.

Summary

Face Recognition (FR) models are trained on large-scale datasets, which raise privacy and ethical concerns. Recently, the use of synthetic data to complement or replace genuine data for training FR models has been proposed. While promising results have been obtained, it remains unclear whether generative models can yield sufficiently diverse data for such tasks. In this work, we introduce several new methods, inspired by the physical motion of soft particles subjected to stochastic Brownian forces, that allow us to sample identity distributions in a latent space under various constraints. With these methods in hand, we generate several face datasets and benchmark them by training FR models, showing that data generated with our method exceeds the performance of previous GAN-based datasets and achieves competitive performance with state-of-the-art diffusion-based synthetic datasets. We also show that this method can be used to mitigate leakage from the generator's training set and to explore the ability of generative models to generate data beyond it.

Proposed method for inter-class sampling

To sample a distribution of synthetic identities, we propose a new method, called Langevin, that iteratively improves a set of latent vectors so that the resulting identities are optimally distributed. We first choose a Generative Adversarial Network (GAN) and a reference off-the-shelf Face Recognition (FR) network. After randomly sampling a collection of latent vectors, we generate their image representations and extract their face embeddings. We then introduce two quadratic loss functions: the first, inspired by granular mechanics, repulses embeddings up to a chosen threshold, while the second pulls latent vectors towards the generator's average latent vector. The effect of this algorithm is to iteratively increase the inter-class pairwise embedding distances while maintaining a compact distribution in the latent space, keeping the latent vectors that yield the best-quality pictures.
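The iterative update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, hyperparameter values (`d_thresh`, `k_rep`, `k_pull`, step size, noise scale), and the toy `embed_fn` interface are all assumptions; in practice `embed_fn` would compose the GAN generator with the reference FR network.

```python
import torch

def langevin_step(z, embed_fn, z_avg, d_thresh=1.2, k_rep=1.0,
                  k_pull=0.01, lr=0.05, noise=0.01):
    """One hypothetical update of the Langevin inter-class sampler.

    z        : (N, D) latent vectors, one per synthetic identity
    embed_fn : maps latents to (N, E) face embeddings
               (generator + FR network in the actual pipeline)
    z_avg    : the generator's average latent vector
    """
    e = embed_fn(z)
    # Pairwise distances between identity embeddings.
    dist = torch.cdist(e, e)                          # (N, N)
    # Granular-mechanics-style repulsion: quadratic penalty that is
    # active only when two embeddings are closer than d_thresh.
    overlap = torch.clamp(d_thresh - dist, min=0.0)
    mask = 1.0 - torch.eye(len(z))                    # ignore self-pairs
    loss_rep = 0.5 * k_rep * (mask * overlap ** 2).sum()
    # Quadratic pull towards the average latent keeps the latent
    # distribution compact, i.e. in the high-quality region.
    loss_pull = 0.5 * k_pull * ((z - z_avg) ** 2).sum()
    grad, = torch.autograd.grad(loss_rep + loss_pull, z)
    # Overdamped Langevin update: gradient step plus Brownian noise.
    return z - lr * grad + noise * torch.randn_like(z)
```

With the repulsion active, identities that start too close in embedding space are pushed apart at each step, while the pull term prevents latents from drifting into low-quality regions.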

Langevin algorithm

Proposed methods for intra-class sampling

To generate intra-class variations, i.e., several samples of a given synthetic identity, we propose a second algorithm, called Dispersion. It works similarly to Langevin, but the granular repulsion loss now acts in the latent space. An additional quadratic loss function in embedding space keeps the embeddings of the variations as close as possible to the embedding of the reference identity (created with Langevin). We further enhance this algorithm by changing the initialization procedure, adding a random linear combination of covariate vectors before the first iteration. These covariate vectors are obtained by fitting a linear Support Vector Machine (SVM) on latent-space projections of the MultiPIE dataset. We call the resulting algorithm DisCo and show that, in combination with Langevin, it yields synthetic datasets of excellent quality.
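The two ingredients above can be sketched in the same style: a Dispersion update whose repulsion acts on latents rather than embeddings, and a DisCo-style initialization that perturbs the reference latent along covariate directions. Again, this is an illustrative sketch under assumed names and hyperparameters, not the authors' code.

```python
import torch

def dispersion_step(w, embed_fn, e_ref, d_thresh=4.0, k_rep=1.0,
                    k_attr=1.0, lr=0.05, noise=0.01):
    """One hypothetical update of the Dispersion intra-class sampler.

    w        : (M, D) latent vectors, variations of one identity
    embed_fn : maps latents to (M, E) embeddings via the FR network
    e_ref    : embedding of the reference identity (from Langevin)
    """
    # Granular repulsion now acts in *latent* space to spread samples.
    dist = torch.cdist(w, w)
    overlap = torch.clamp(d_thresh - dist, min=0.0)
    mask = 1.0 - torch.eye(len(w))
    loss_rep = 0.5 * k_rep * (mask * overlap ** 2).sum()
    # Quadratic attraction in *embedding* space keeps every variation
    # recognizable as the same identity as the reference.
    loss_attr = 0.5 * k_attr * ((embed_fn(w) - e_ref) ** 2).sum()
    grad, = torch.autograd.grad(loss_rep + loss_attr, w)
    return w - lr * grad + noise * torch.randn_like(w)

def disco_init(w_ref, covariates, n_samples, scale=1.0):
    """DisCo-style initialization: perturb the reference latent by a
    random linear combination of SVM covariate direction vectors."""
    coeffs = scale * torch.randn(n_samples, covariates.shape[0])
    return w_ref + coeffs @ covariates                # (n_samples, D)
```

The attraction term in embedding space is what distinguishes intra-class sampling from Langevin: variations are pushed apart where it is cheap (latent space) but anchored where it matters (identity embedding).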

Dispersion algorithm

Evaluation

For evaluation, we train FR models from scratch on the synthetic datasets created with the above-mentioned algorithms. The figure below shows the ROC curve of our best-performing dataset compared to other existing synthetic datasets as well as common genuine datasets.
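For context, a verification ROC curve of the kind plotted below is built from genuine (same-identity) and impostor (different-identity) similarity scores. A minimal sketch, with hypothetical score arrays and a plain threshold sweep rather than any particular benchmark protocol:

```python
import numpy as np

def roc_points(genuine, impostor, thresholds):
    """True-match and false-match rates at each decision threshold.

    genuine   : similarity scores of same-identity pairs
    impostor  : similarity scores of different-identity pairs
    """
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    tmr = np.array([(genuine >= t).mean() for t in thresholds])
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    return fmr, tmr  # plot tmr against fmr for the ROC curve
```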

ROC curves

Reproducibility: Source Code and Data

The source code of our experiments, as well as part of the generated data, is available at the following links. For data-volume reasons, only part of the generated data is published; we can share additional data with interested researchers upon reasonable request.

  • [GitHub] Source code for generating synthetic datasets
  • [Idiap GitLab] Source code for generating synthetic datasets
  • [Datasets + Checkpoints] Selection of synthetic datasets, checkpoints of models trained on the latter and additional data
  • [Checkpoints] Pre-trained face recognition models trained with our synthetic datasets (available on HuggingFace 🤗)

BibTeX


  @article{geissbuhler2024synthetic,
    title={Synthetic Face Datasets Generation via Latent Space Exploration from Brownian Identity Diffusion},
    author={Geissb{\"u}hler, David and Shahreza, Hatef Otroshi and Marcel, S{\'e}bastien},
    journal={arXiv preprint arXiv:2405.00228},
    year={2024}
  }