Keywords: gradient method, SD method, successive gradient stack, r value
Abstract: Selecting the search direction is crucial in gradient-based optimization. Common gradient methods typically exert a strong influence on components associated with large eigenvalues but leave small-eigenvalue components almost unchanged. If descent along the small-eigenvalue directions can be effectively accelerated, overall convergence can be improved substantially. Motivated by this observation, we propose a gradient scheme called CGS (Continuous Gradient Stacking). CGS constructs a new search vector by combining the gradients from several consecutive iterations with different weights; along each eigenvector direction this vector responds differently, strongly for small eigenvalues and weakly for large ones. Comparisons with the BB, CBB, and other standard methods demonstrate that CGS offers substantial advantages.
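To illustrate the idea of stacking consecutive gradients, here is a minimal sketch on an ill-conditioned quadratic. The abstract does not specify the CGS weights or step-size rule, so the weights `[0.5, 0.3, 0.2]`, the fixed step size, and the test problem below are all hypothetical choices, not the paper's actual method.

```python
import numpy as np

def stacked_gradient_descent(A, x0, weights, step=0.01, iters=500):
    """Minimize f(x) = 0.5 * x^T A x using a search direction built as a
    weighted combination of the gradients from the last few iterations.
    The weights here are hypothetical, not the CGS weights from the paper."""
    x = x0.astype(float)
    history = []  # most recent gradient first
    for _ in range(iters):
        g = A @ x  # gradient of the quadratic f(x) = 0.5 * x^T A x
        history.insert(0, g)
        history = history[:len(weights)]
        # Search direction: weighted stack of the stored gradients.
        d = sum(w * gi for w, gi in zip(weights, history))
        x = x - step * d
    return x

# Ill-conditioned quadratic: one large and one small eigenvalue.
A = np.diag([100.0, 1.0])
x0 = np.array([1.0, 1.0])
x_final = stacked_gradient_descent(A, x0, weights=[0.5, 0.3, 0.2])
print(np.linalg.norm(x_final))
```

The diagonal test matrix makes the eigenvalue-wise behavior easy to inspect: each coordinate of the iterate evolves independently, so one can observe how quickly the components tied to the large and small eigenvalues decay.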
Primary Area: optimization
Submission Number: 16794