Recent advancements in deepfake detection have demonstrated the unexpected effectiveness of internal features from large vision models trained exclusively on real data. Because these features are learned without any exposure to attacks, they are attack-agnostic, and can be combined with simple downstream classifiers to perform detection. Notably, features extracted with pre-trained CLIP models, originally trained for image-caption alignment, have shown promise in previous studies.
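The pipeline described above (a frozen, attack-agnostic feature extractor followed by a simple classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic vectors stand in for embeddings that would come from a frozen CLIP image encoder, and the linear SVM is one plausible choice of simple downstream classifier.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
DIM = 512  # a typical CLIP image-embedding dimensionality

# Placeholder embeddings for bona fide images and morphing attacks.
# In the real pipeline these would be produced by a frozen CLIP encoder
# that never saw any attack during its training.
bona_fide = rng.normal(loc=0.0, scale=1.0, size=(200, DIM))
morphs = rng.normal(loc=0.5, scale=1.0, size=(200, DIM))

X = np.vstack([bona_fide, morphs])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = morphing attack

# Simple downstream classifier on top of the frozen features.
clf = LinearSVC(C=1.0).fit(X, y)
score = clf.score(X, y)
```

The key design point is that only the lightweight classifier is trained on attack data; the feature extractor itself stays fixed, which is what makes the representation attack-agnostic.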
This study evaluates the applicability of attack-agnostic features to morphing attack detection (MAD).
The following DET curves illustrate the performance of various models under different evaluation settings. The scenarios include:
- Baseline: FRGC dataset, all attacks seen at training, evaluated in the digital domain
- Baseline: FFHQ dataset, all attacks seen at training, evaluated in the digital domain
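A DET curve like the ones above plots the two error rates of a detector against each other as the decision threshold varies. A minimal way to compute the underlying points, assuming hypothetical detector scores where higher means "morph", is scikit-learn's `det_curve`; in MAD terminology, the false-negative rate corresponds to attacks accepted as bona fide (APCER) and the false-positive rate to bona fide samples rejected as attacks (BPCER).

```python
import numpy as np
from sklearn.metrics import det_curve

rng = np.random.default_rng(1)

# Hypothetical detector scores (higher = more likely a morph).
bona_fide_scores = rng.normal(-1.0, 1.0, size=500)
morph_scores = rng.normal(1.0, 1.0, size=500)

y_true = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = morph
y_score = np.concatenate([bona_fide_scores, morph_scores])

# fpr: bona fide flagged as morphs; fnr: morphs accepted as bona fide.
fpr, fnr, thresholds = det_curve(y_true, y_score)
```

Plotting `fpr` against `fnr` (typically on normal-deviate axes) yields the DET curve for this detector.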
New: The FFHQ-Morphs dataset is now available at the following link: https://www.idiap.ch/dataset/ffhq-morphs.
Morph generation: The code for regenerating morphs is now available at the following link: https://gitlab.idiap.ch/biometric/morphgen.
Reproducibility: The code for reproducing the MAD experiments is available at the following link: https://gitlab.idiap.ch/bob/bob.paper.ijcb2024_agnostic_features_mad.
@INPROCEEDINGS{colbois_agnostic_features_mad,
  author={Colbois, Laurent and Marcel, Sébastien},
  booktitle={2024 IEEE International Joint Conference on Biometrics (IJCB)},
  title={Evaluating the Effectiveness of Attack-Agnostic Features for Morphing Attack Detection},
  year={2024},
  pages={1-9},
  keywords={Training;Support vector machines;Systematics;Detectors;Feature extraction;Solids;Robustness;Data models;Data mining;Faces},
  doi={10.1109/IJCB62174.2024.10744532}
}