Rectifying Adaptive Learning Rate Variance via Confidence Estimation

ICLR 2026 Conference Submission 24722 Authors

20 Sept 2025 (modified: 02 Dec 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: physics-informed
Abstract: Recent advances in training physics-informed neural networks (PINNs) highlight the effectiveness of second-order optimization methods. Adaptive variants such as AdaHessian, Sophia, and SOAP leverage approximate curvature information to achieve strong performance on challenging benchmarks. However, adaptive optimizers are prone to instability during the early stages of training—a limitation addressed in part by RAdam through rectification of the adaptive learning rate. We introduce Adaptive Confidence Rectification (ACR), a novel uncertainty-aware rescaling mechanism that enhances RAdam’s rectification strategy by dynamically adjusting the learning-rate correction based on an empirical measure of confidence. Our method integrates seamlessly with diverse optimizers and training regimes, consistently improving convergence stability and optimization accuracy. Extensive experiments on large-scale PINN tasks demonstrate reliable performance gains over both rectified and non-rectified baselines, establishing ACR as a robust and broadly applicable optimization framework.
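The abstract does not spell out the update rule, so the following is only a minimal sketch: it reproduces the standard RAdam rectification term (which the submission builds on) and adds a placeholder confidence-weighted rescaling. The `confidence` argument, its [0, 1] range, and the multiplicative coupling are assumptions made for illustration, not the submission's actual ACR mechanism.

```python
import math

def radam_rectifier(step, beta2=0.999):
    """Standard RAdam rectification term r_t (Liu et al., 2020).

    Returns None while the variance of the adaptive learning rate is
    intractable (rho_t <= 4); RAdam then falls back to an
    SGD-with-momentum update for that step.
    """
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    beta2_t = beta2 ** step
    rho_t = rho_inf - 2.0 * step * beta2_t / (1.0 - beta2_t)
    if rho_t <= 4.0:
        return None
    return math.sqrt(
        (rho_t - 4.0) * (rho_t - 2.0) * rho_inf
        / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t)
    )


def acr_rectifier(step, confidence, beta2=0.999):
    """Hypothetical ACR-style factor: rescale the RAdam rectifier by an
    empirical confidence score in [0, 1] for the second-moment estimate.
    The multiplicative coupling is an assumption for illustration only.
    """
    r_t = radam_rectifier(step, beta2)
    if r_t is None:
        return None
    return confidence * r_t


if __name__ == "__main__":
    # Early in training the rectifier damps the adaptive step heavily;
    # the (assumed) confidence score modulates that damping further.
    for step, conf in [(10, 0.3), (100, 0.7), (1000, 0.95)]:
        print(step, radam_rectifier(step), acr_rectifier(step, conf))
```

In a full optimizer this factor would multiply the Adam-style adaptive step size in place of RAdam's fixed rectifier; how ACR actually derives and applies its confidence estimate is specified in the paper, not here.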
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 24722