Self-Adaptive Revisiting Awareness for Enhancing Robustness and Generalization in Classification Tasks
Abstract: Deep neural networks are increasingly deployed in critical applications, yet their vulnerability to boundary samples, those lying near uncertain decision regions, remains a challenge. We propose a Self-Adaptive Revisiting Awareness (SARA) strategy that enhances robustness by dynamically focusing training on such samples. Central to SARA is the novel Confidence Guard loss, which identifies low-confidence or misclassified samples and augments them with informative alternatives. These alternatives are discovered along gradient-based directions and adaptively integrated into the training mini-batch, effectively expanding the batch size on the fly. Unlike conventional training, this self-adaptive mechanism lets the model revisit uncertain regions, improving its awareness of the data distribution and reinforcing its decision boundaries. By concentrating training on samples near the manifold boundary, SARA strengthens both adversarial robustness and generalization. Experiments on multiple benchmark datasets show gains of 2% to 16% in classification accuracy. These results highlight SARA as a practical and effective approach to building resilient neural networks, with strong potential for integration into future robust-learning strategies.