Identity-Preserving Aging and De-Aging of Faces in the StyleGAN Latent Space

Idiap Research Institute

Accepted at IJCB 2025
Sample Face Images from AgeSynth

Sample face images generated by our identity-preserving aging and de-aging approach in the StyleGAN latent space.

Summary

Face aging and de-aging with generative AI are increasingly applied in forensics, security, and media, yet most existing methods still depend on conditional Generative Adversarial Networks (GANs), diffusion-based models, or Vision-Language Models (VLMs) that rely on predefined age labels, text prompts, or extensive fine-tuning. Such conditioning increases training complexity and data requirements, and often leaves identity preservation untested beyond a single face recognition system. We instead approach face aging and de-aging through latent edits in the StyleGAN2 latent space along an age synthesis direction found by simple support vector modeling, combined with feature selection strategies based on subspace modeling and reconstruction. In this extended work, we describe the feature selection procedure in more detail, complement our identity preservation results for synthetically aged and de-aged real subjects, evaluate identity preservation on our fully synthetic dataset to show the influence of different face recognition backbones, and add a discussion section with insights drawn from our results. Finally, we release an updated toolset (including the age direction, feature weights for editing, fitted age curves, and code for computing the latent step corresponding to a target age) together with our generated fully synthetic dataset and evaluation code for testing identity preservation across face recognition backbones.
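As a minimal sketch of how a target age could be mapped to a latent step, assuming the fitted age curve is available as polynomial coefficients relating step size to apparent age; the function name, the coefficient format, and the grid-search inversion are illustrative assumptions, not the released API:

import numpy as np

def step_for_target_age(age_curve_coeffs, current_age, target_age,
                        search_range=(-30.0, 30.0), num=2001):
    """Return the scalar step along the age direction whose predicted age is
    closest to target_age, anchoring step = 0 to the subject's current age."""
    steps = np.linspace(*search_range, num=num)
    predicted = np.polyval(age_curve_coeffs, steps)
    # Shift the curve so that a zero step corresponds to the current apparent age.
    predicted = predicted - np.polyval(age_curve_coeffs, 0.0) + current_age
    return float(steps[np.argmin(np.abs(predicted - target_age))])

For example, step_for_target_age(coeffs, current_age=25, target_age=55) would return the step to apply along the age direction when editing the latent vector.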

Method Overview

We propose a simple and data-efficient approach to face aging and de-aging by editing the StyleGAN2 latent space along an age synthesis direction. The age direction is determined using support vector regression (SVR) trained on a small set of age-labeled latent vectors, allowing us to model the relationship between the latent space and age in a continuous and controllable way. To further improve identity preservation, we employ feature selection strategies such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which help isolate the latent components most relevant to age and identity. Our method enables controlled age transformation with minimal data (1.3K images for the SVR) and does not require retraining the GAN. We evaluate our approach on both real and fully synthetic datasets, using multiple face recognition backbones to assess identity retention across age modifications.
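As a rough illustration of these building blocks, the sketch below fits a linear SVR on age-labeled latent vectors to obtain a unit age direction and uses PCA as a simple subspace model to derive per-dimension edit weights. The weighting heuristic, variable shapes, and use of W-space vectors are assumptions for illustration only and do not reproduce the exact procedure or released code:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVR

def fit_age_direction(latents, ages):
    """latents: (N, 512) age-labeled StyleGAN2 W-space vectors; ages: (N,) in years."""
    svr = LinearSVR(C=1.0, max_iter=10000).fit(latents, ages)
    return svr.coef_ / np.linalg.norm(svr.coef_)  # unit direction along which age varies

def pca_edit_weights(latents, n_components=50):
    """Toy per-dimension weights: damp edits along dimensions that dominate the
    retained PCA subspace (a stand-in for the subspace-based selection above)."""
    pca = PCA(n_components=n_components).fit(latents)
    energy = (pca.components_ ** 2).sum(axis=0)  # per-dimension energy in [0, 1]
    return 1.0 - energy

def age_edit(w, direction, step, weights=None):
    """Move a latent vector `step` units along the (optionally reweighted) age direction."""
    delta = step * direction
    if weights is not None:
        delta = delta * weights
    return w + delta

The PCA-energy weighting here is only a stand-in for the paper's subspace modeling and reconstruction criterion, which may differ in detail.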

Method Block Diagram

Diagram of our identity-preserving aging and de-aging method in the StyleGAN latent space.


Comparison with Previous Methods

We compare age synthesis using our method and SAM, reporting the face recognition (FR) score of each aged image against the original. Qualitatively, our method produces more organic changes during aging, such as in hair color, skin tone, and details around the eye region. Quantitatively, our method shows better and more consistent performance for larger age gaps (for example, an FR score of -0.66 at a +30-year gap, compared to 1.06 for SAM), while using only 1.3K training images for the SVR and a single GAN without retraining. In contrast, SAM requires 70K images to train two GAN pipelines.
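As an illustration of how such an FR score can be computed, the sketch below uses cosine similarity between embeddings from an arbitrary face recognition backbone; the actual scoring function and score scale used in our comparison may differ, and embed is a placeholder rather than part of the released code:

import numpy as np

def fr_score(embed, original_img, aged_img):
    """Cosine similarity between FR embeddings of the original and the aged image."""
    e1, e2 = np.asarray(embed(original_img)), np.asarray(embed(aged_img))
    e1 = e1 / np.linalg.norm(e1)
    e2 = e2 / np.linalg.norm(e2)
    return float(np.dot(e1, e2))  # higher means the two embeddings are more similar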

Comparison with previous methods

Comparison of recognition performance and age transformation quality with SAM and our method.



Reproducibility: Source Code and Data

The source code and synthetic dataset are available at the following links:

BibTeX


@inproceedings{luevano2025identity,
  title={Identity-Preserving Aging and De-Aging of Faces in the StyleGAN Latent Space},
  author={Luevano, Luis S. and Korshunov, Pavel and Marcel, S{\'e}bastien},
  booktitle={International Joint Conference on Biometrics (IJCB)},
  year={2025}
}