Unified Odd-Descent Regularization for Input Optimization

Published: 11 Feb 2025 · Last Modified: 15 May 2025 · OpenReview Archive Direct Upload · License: CC BY 4.0
Abstract: Activation-descent regularization is a crucial approach to input optimization for ReLU networks, but traditional methods face challenges: converting discrete activation patterns into differentiable forms introduces half-space division, high computational complexity, and instability. We propose a novel local descent regularization method based on a network of arbitrary odd functions, which unifies half-space processing, simplifies the formulation, reduces computational complexity, and enriches the expressiveness of the activation-descent regularization term. Furthermore, by selecting an arbitrary differentiable odd function, we can derive an exact gradient-descent direction, resolving the non-differentiability caused by the non-smooth nature of ReLU and thus improving optimization efficiency and convergence stability. Experiments demonstrate the competitive performance of our approach, particularly in adversarial learning applications. This work contributes to both the theory and practice of regularization for input optimization.
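As a loose illustration of the idea summarized above, the sketch below (not the authors' exact formulation) uses tanh as one arbitrary differentiable odd function applied to the pre-activations of a small ReLU network and adds it as a regularization term while optimizing the input. The toy architecture, the regularization weight lam, and the helper names pre_activations and odd_descent_reg are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: a smooth odd-function surrogate for activation-descent
# regularization during input optimization on a fixed ReLU network.
# The paper's exact regularizer is not reproduced; tanh stands in for
# "an arbitrary differentiable odd function".
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy ReLU network whose input (not weights) is optimized.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
for p in model.parameters():
    p.requires_grad_(False)  # freeze weights; only the input is a variable

def pre_activations(x):
    """Pre-activation values of the hidden ReLU layer (assumed helper)."""
    return model[0](x)

def odd_descent_reg(x, odd_fn=torch.tanh):
    """Differentiable odd function of the pre-activations, used as a
    smooth surrogate for the discrete ReLU activation pattern."""
    return odd_fn(pre_activations(x)).sum()

x = torch.randn(1, 10, requires_grad=True)   # input being optimized
target = torch.tensor([[0.0]])               # illustrative task target
opt = torch.optim.Adam([x], lr=1e-2)
lam = 0.1                                    # assumed regularization weight

for step in range(200):
    opt.zero_grad()
    task_loss = (model(x) - target).pow(2).mean()
    loss = task_loss + lam * odd_descent_reg(x)
    loss.backward()   # exact gradients flow through the smooth odd surrogate
    opt.step()
```

Because tanh is smooth and odd, the surrogate term admits exact gradients with respect to the input, which is the property the abstract attributes to choosing a differentiable odd function.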