Feature-Guided Perturbation for Facial Attribute Classification

Published: 01 Jan 2023, Last Modified: 02 Jul 2025. IEEE Trans. Artif. Intell. 2023. License: CC BY-SA 4.0
Abstract: Pretrained deep models are widely used in computer vision tasks ranging from face recognition to object classification and attribute prediction. The performance of these models depends heavily on the pretraining dataset, and a shift in the input data adversely affects model performance. Existing techniques address this problem by updating the pretrained model on the new dataset, most commonly via fine-tuning. However, fine-tuning requires updating millions of parameters and is computationally expensive. Therefore, this research proposes an algorithm that addresses data shift via perturbation learning, without updating the pretrained model's parameters. The proposed algorithm shifts the data in the input space to obtain feature-guided perturbed (FGP) data such that, when the FGP data are given as input to the pretrained model, the resulting feature space is optimized. The perturbation is learned by minimizing the intraclass distance and maximizing the interclass separation among the classes in the feature space. The proposed algorithm is evaluated on three publicly available datasets: LFW, CelebA, and MUCT. Experiments and comparisons with existing algorithms, including model fine-tuning, show the efficacy of the proposed approach. Most importantly, FGP requires fewer learned parameters than traditional approaches.
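The core idea in the abstract, learning an input-space perturbation against a frozen feature extractor by minimizing intraclass distance and maximizing interclass separation, can be sketched as follows. This is a toy illustration, not the paper's method: the linear-plus-tanh "pretrained" extractor, the two-class synthetic data, the per-class perturbation parameterization, and the finite-difference optimizer are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for a pretrained feature extractor (its weights are never updated).
# A real FGP setup would use a pretrained deep network; this tanh layer is a toy assumption.
W = rng.standard_normal((8, 4))
def features(x):                      # (n, 4) inputs -> (n, 8) features
    return np.tanh(x @ W.T)

# Toy two-class data standing in for a facial-attribute dataset.
X = {0: rng.standard_normal((20, 4)) + 1.0,
     1: rng.standard_normal((20, 4)) - 1.0}

# The only learnable parameters: one input-space perturbation vector per class.
delta = {c: np.zeros(4) for c in X}

def fgp_loss(delta):
    Z = {c: features(X[c] + delta[c]) for c in X}
    mu = {c: Z[c].mean(axis=0) for c in Z}
    intra = sum(((Z[c] - mu[c]) ** 2).sum() for c in Z)  # pull samples toward their class mean
    inter = ((mu[0] - mu[1]) ** 2).sum()                 # push class means apart
    return intra - inter

loss_before = fgp_loss(delta)

# Gradient descent on the perturbations only, via forward differences
# (a real implementation would backpropagate through the frozen model instead).
lr, eps = 1e-3, 1e-5
for _ in range(200):
    for c in delta:
        base = fgp_loss(delta)
        grad = np.zeros_like(delta[c])
        for i in range(delta[c].size):
            trial = {k: v.copy() for k, v in delta.items()}
            trial[c][i] += eps
            grad[i] = (fgp_loss(trial) - base) / eps
        delta[c] = delta[c] - lr * grad

loss_after = fgp_loss(delta)
print(loss_before, loss_after)   # loss decreases while W stays frozen
```

Note how the sketch mirrors the abstract's parameter-efficiency claim: each learned perturbation has only as many entries as the input (4 here), versus the 32 weights of even this toy extractor, and versus millions of parameters for a real pretrained network.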