Keywords: Corruption, Convolution, Transformer, Robustness, Explainability
Abstract: Neural networks are highly susceptible to natural image corruptions such as noise, blur, and weather distortions, limiting their reliability in real-world deployment. Robustness to such corruptions is critical because they are a primary source of distribution shift, whether introduced intentionally (e.g., compression) or unintentionally (e.g., blur or weather artifacts). In this work, we observe for the first time that such corruptions often collapse the network's internal feature space into a high-entropy state, causing predictions to rely on a small subset of fragile features. Motivated by this observation, we propose a simple yet effective entropy-guided fine-tuning framework, Dem-HEC, that strengthens corruption robustness while maintaining clean accuracy. Our method generates high-entropy samples within a bounded perturbation region to simulate corruption-induced uncertainty and aligns them with clean embeddings using a contrastive loss. In parallel, cross-entropy on both clean and high-entropy samples, combined with knowledge distillation from a teacher snapshot, ensures stable predictions. Dem-HEC is evaluated with numerous neural networks trained on multiple benchmark datasets, demonstrating consistent gains across diverse corruption types and severity levels, with strong transferability across backbones, including CNNs and Transformers. Our approach highlights entropy regularisation as a scalable pathway to bridging the gap between clean accuracy and real-world robustness.
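The training objective described in the abstract (cross-entropy on clean and high-entropy samples, contrastive alignment of embeddings, and distillation from a teacher snapshot) can be sketched as follows. This is a minimal NumPy toy, not the authors' implementation: the linear student/teacher, the random-search generation of high-entropy samples, and all names and weights (`dem_hec_loss`, `alpha`, `beta`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy of each predictive distribution
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Hypothetical stand-ins for the network: a linear classifier,
# a frozen teacher snapshot of it, and a linear embedding head.
W = rng.normal(size=(8, 3))
W_teacher = W + 0.01 * rng.normal(size=W.shape)
E = rng.normal(size=(8, 4))

logits = lambda x, w=W: x @ w
embed = lambda x: x @ E

def high_entropy_sample(x, eps=0.3, trials=20):
    """Search the eps-bounded region around x for the perturbation that
    maximizes predictive entropy (a crude stand-in for the paper's
    high-entropy sample generation)."""
    best, best_h = x, entropy(softmax(logits(x))).mean()
    for _ in range(trials):
        cand = x + rng.uniform(-eps, eps, size=x.shape)
        h = entropy(softmax(logits(cand))).mean()
        if h > best_h:
            best, best_h = cand, h
    return best

def cross_entropy(p, y):
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def dem_hec_loss(x, y, eps=0.3, alpha=1.0, beta=0.5):
    x_he = high_entropy_sample(x, eps)
    p_clean, p_he = softmax(logits(x)), softmax(logits(x_he))
    # (1) cross-entropy on both clean and high-entropy samples
    ce = cross_entropy(p_clean, y) + cross_entropy(p_he, y)
    # (2) alignment term: pull high-entropy embeddings toward their
    # clean counterparts (1 - cosine similarity of positive pairs)
    z_c, z_h = embed(x), embed(x_he)
    cos = (z_c * z_h).sum(-1) / (
        np.linalg.norm(z_c, axis=-1) * np.linalg.norm(z_h, axis=-1) + 1e-12)
    align = (1.0 - cos).mean()
    # (3) knowledge distillation: KL(teacher || student) on perturbed inputs
    p_t = softmax(logits(x, W_teacher))
    kd = (p_t * (np.log(p_t + 1e-12) - np.log(p_he + 1e-12))).sum(-1).mean()
    return ce + alpha * align + beta * kd

x = rng.normal(size=(5, 8))
y = rng.integers(0, 3, size=5)
x_he = high_entropy_sample(x)   # perturbed batch stays in the eps-ball
loss = dem_hec_loss(x, y)
```

In a real setting the random search would be replaced by gradient-based maximization of entropy within the perturbation bound, and the alignment term by a full contrastive loss with in-batch negatives; the sketch only shows how the three components combine into one objective.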
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 25238