Beyond ID Bias: PCA-Guided Dropout for Robust Fine-tuning

Published: 06 Mar 2025, Last Modified: 29 Mar 2025 · SCSL Workshop @ ICLR 2025 · License: CC BY 4.0
Track: regular paper (up to 6 pages)
Keywords: Robust Fine-tuning, Out-of-distribution Generalization, Vision-Language Models
Abstract: Fine-tuning large-scale pre-trained models often improves in-distribution (ID) performance at the cost of out-of-distribution (OOD) generalization due to overfitting to ID-specific features. To mitigate this, we propose **PCA Dropout**, a novel fine-tuning strategy that suppresses ID-specific feature dependencies by leveraging Principal Component Analysis (PCA). Our method identifies dominant feature components that contribute the most to ID variance and applies structured dropout to reduce their influence, encouraging the model to learn more generalizable representations. We evaluate PCA Dropout on DomainNet and iWildCam using CLIP-based models, demonstrating consistent improvements in OOD robustness over state-of-the-art fine-tuning methods while maintaining strong ID accuracy. Ablation studies further confirm that structured dropout at the feature level outperforms unstructured feature suppression and random dropout strategies.
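The paper itself is not reproduced on this page, but the abstract's core idea — find the principal components that dominate ID feature variance and stochastically suppress them during fine-tuning — can be illustrated with a minimal sketch. All names and parameters below (`pca_dropout`, `n_drop`, `drop_prob`) are hypothetical and not taken from the paper; this shows the general technique, not the authors' implementation.

```python
import numpy as np

def pca_dropout(features, n_drop=2, drop_prob=0.5, rng=None):
    """Structured dropout on the dominant principal components of a feature batch.

    features: (N, D) array of ID feature vectors (e.g., CLIP embeddings).
    n_drop:   number of top-variance components eligible for dropout.
    drop_prob: probability of zeroing each eligible component.
    """
    rng = rng or np.random.default_rng()
    mean = features.mean(axis=0)
    centered = features - mean
    # Principal directions via SVD of the centered feature matrix;
    # rows of vt are components ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt.T  # coordinates in the PCA basis
    # Structured dropout: stochastically zero the top-n_drop coordinates,
    # suppressing the directions that carry the most ID-specific variance.
    mask = np.ones(coords.shape[1])
    mask[:n_drop] = (rng.random(n_drop) >= drop_prob).astype(float)
    return (coords * mask) @ vt + mean

# Toy usage: features whose variance is dominated by one direction.
X = np.random.default_rng(0).normal(size=(64, 8))
X[:, 0] *= 10.0  # inflate variance along one axis
out = pca_dropout(X, n_drop=1, drop_prob=1.0)
```

With `drop_prob=1.0` the dominant component is always removed, collapsing the feature variance along that direction while leaving the rest of the representation intact; in practice the dropout would be applied stochastically per batch during fine-tuning.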
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Presenter: ~Bo_Fei1
Format: Maybe: the presenting author will attend in person, contingent on other factors that still need to be determined (e.g., visa, funding).
Funding: Yes, the presenting author of this submission falls under ICLR’s funding aims, and funding would significantly impact their ability to attend the workshop in person.
Submission Number: 26