Ridge Scale Aligned Diffusion for Identity Preserving and Style Controllable Fingerprint Synthesis

Published: 09 Apr 2026 · Last Modified: 09 Apr 2026 · CVPR 2026 Biometrics Workshop Oral · License: CC BY 4.0
Keywords: Fingerprint recognition, Fingerprint synthesis, Data augmentation, Privacy-preserving biometrics, Diffusion models, Stable Diffusion, ControlNet, IP-Adapter, Ridge structure preservation
TL;DR: Privacy-motivated synthesis using virtual identities sampled from a DDPM prior to reduce direct dependence on real identity data.
Abstract: Training robust fingerprint recognizers is often constrained by privacy regulations and the limited intra-class diversity of public datasets. This paper presents a controllable fingerprint synthesis framework based on Stable Diffusion that integrates ControlNet and a Multi-IP-Adapter (derived from IP-Adapter) to generate structure-consistent yet style-diverse fingerprints conditioned on a content image. The framework supports two generation modes: (i) augmenting a real identity with diverse sensor styles, and (ii) privacy-motivated synthesis using virtual identities sampled from a DDPM prior, reducing direct dependence on real identity data. For finer controllability, the framework incorporates ridge-scale normalization and dual-mask spatial injection to better separate ridge regions from the background during generation. Experimental results show high visual fidelity, strong retention of content-guided structural cues under style transfer, and improved downstream recognition performance. In particular, joint training on real data plus the proposed synthetic data improves ViT TAR at FAR = 0.1% from 80.08% to 91.28%, outperforming joint-training results obtained with FPGAN-Control and PrintsGAN.
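The abstract mentions ridge-scale normalization but the page does not include the method's details or code. As a rough illustration only, a common way to normalize ridge scale is to estimate a patch's dominant ridge period from its 2-D Fourier power spectrum and derive a resampling factor toward a target period; the NumPy sketch below implements that generic idea (the function names, the frequency band, and the target period of 9 px are all assumptions, not the paper's actual procedure).

```python
import numpy as np

def dominant_ridge_period(img: np.ndarray) -> float:
    """Estimate the dominant ridge period (pixels per ridge cycle) of a
    square grayscale fingerprint patch from its 2-D power spectrum.
    Assumed heuristic, not the paper's method."""
    h, w = img.shape
    # Remove the DC component, then compute the centered power spectrum.
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)  # radial frequency in cycles/image
    # Restrict the search to a plausible ridge band (periods of roughly 3-25 px).
    band = (radius >= h / 25) & (radius <= h / 3)
    peak = np.unravel_index(np.argmax(np.where(band, power, 0.0)), power.shape)
    return h / radius[peak]  # convert peak frequency back to a spatial period

def ridge_scale_factor(img: np.ndarray, target_period: float = 9.0) -> float:
    """Resampling factor that would bring the patch's estimated ridge
    period to target_period (a hypothetical reference value)."""
    return target_period / dominant_ridge_period(img)
```

In a full pipeline, the returned factor would drive an image resize before the patch is fed to the diffusion model's conditioning branch, so that all content images present ridges at a comparable scale.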
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 18