Gradients Through Logarithmic Lens: Reformulating Optimization Dynamics

ICLR 2026 Conference Submission 20654 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Deep Learning, Activation Function, Gradient Descent, Optimization, Neural Networks
Abstract: Optimization in deep learning remains a fundamental challenge, and techniques that improve training efficiency and model performance are essential. We present a logarithmic optimization framework consisting of the activation function LogLU (**Log**arithmic **L**inear **U**nit) and the optimizer ZenGrad (*zen* for smooth *grad*ients), together with its momentum-based variant, M-ZenGrad; all three components share a logarithmic formulation. Extensive evaluations on benchmark datasets spanning vision and language tasks show that each component improves performance individually and that, in combination, they demonstrate the advantages of the logarithmic approach. Ablation studies quantify the contribution of each method, and careful hyperparameter tuning ensures robust performance, indicating the effectiveness of our logarithmic optimization framework across diverse tasks and datasets.
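Since the abstract does not state the exact definitions of LogLU or ZenGrad, the sketch below is only an illustration of a logarithmic-linear style activation, assuming a piecewise form (identity for non-negative inputs, a signed logarithmic branch for negative inputs). The function names and the formula are assumptions made for this example, not the paper's published method.

```python
import numpy as np

def loglu(x: np.ndarray) -> np.ndarray:
    """Illustrative logarithmic-linear activation (assumed form, not the
    paper's definition): x for x >= 0, -log(1 + |x|) for x < 0.
    The negative branch compresses inputs logarithmically and matches the
    identity in both value and slope at x = 0."""
    return np.where(x >= 0, x, -np.log1p(np.abs(x)))

def loglu_grad(x: np.ndarray) -> np.ndarray:
    """Derivative of the assumed form: 1 for x >= 0, 1 / (1 + |x|) for x < 0.
    The gradient stays strictly positive, so negative inputs are never
    completely cut off as with ReLU."""
    return np.where(x >= 0, 1.0, 1.0 / (1.0 + np.abs(x)))

if __name__ == "__main__":
    x = np.linspace(-3.0, 3.0, 7)
    print(loglu(x))       # negative inputs are compressed logarithmically
    print(loglu_grad(x))  # gradients remain nonzero everywhere
```

A piecewise form like this keeps the linear behavior of ReLU-style units for positive inputs while letting small gradients flow for negative inputs; whether the paper's LogLU uses this exact branch is not determinable from the abstract alone.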
Supplementary Material: zip
Primary Area: optimization
Submission Number: 20654