Keywords: Artificial intelligence, Anderson extrapolation, deep equilibrium, high performance computing
TL;DR: We present a novel method for Anderson-accelerated training and inference with deep equilibrium networks, identifying a speedup-accuracy tradeoff and a crossover point at which maximum speedup can be achieved, with up to 90% of compute saved.
Abstract: We present a novel approach for accelerating AI performance by leveraging Anderson extrapolation, a vector-to-vector mapping technique based on a window of historical iterations. By identifying the crossover point beyond which a mixing penalty is incurred, the method focuses on reducing the number of iterations to convergence: each iteration is more compute-intensive but generally cacheable, balancing speed and memory usage against accuracy and algorithmic stability, respectively. We demonstrate significant improvements in both training and inference, motivated by scalability and efficiency considerations from high-performance computing (HPC).
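For readers unfamiliar with the technique, the sketch below illustrates standard windowed Anderson acceleration applied to a generic fixed-point map z = g(z), the kind of iteration a deep equilibrium network solves in its forward pass. This is a minimal NumPy illustration, not the paper's implementation; the function name and the parameters m (window size), beta (damping/mixing), and lam (regularization) are assumptions chosen for clarity.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, beta=1.0, lam=1e-4, max_iter=50, tol=1e-6):
    """Windowed Anderson acceleration for x = g(x) (illustrative sketch)."""
    x = x0.ravel()
    X = [x]                # history of iterates x_i
    G = [g(x0).ravel()]    # history of evaluations g(x_i)
    for k in range(1, max_iter):
        n = min(m, k)
        # Residuals f_i = g(x_i) - x_i over the window of past iterates
        F = np.stack([G[-n + i] - X[-n + i] for i in range(n)], axis=1)
        # Regularized least-squares for the mixing weights alpha (sum to 1)
        H = F.T @ F + lam * np.eye(n)
        alpha = np.linalg.solve(H, np.ones(n))
        alpha /= alpha.sum()
        # Anderson update: weighted mix of past g(x_i) and x_i histories
        x_new = beta * (np.stack(G[-n:], 1) @ alpha) \
              + (1 - beta) * (np.stack(X[-n:], 1) @ alpha)
        if np.linalg.norm(x_new - x) < tol * (1 + np.linalg.norm(x)):
            return x_new.reshape(x0.shape), k
        X.append(x_new)
        G.append(g(x_new.reshape(x0.shape)).ravel())
        x = x_new
    return x.reshape(x0.shape), max_iter

# Example usage on a simple contractive map (for illustration only)
if __name__ == "__main__":
    g = lambda z: 0.5 * np.cos(z)          # contraction, unique fixed point
    z_star, iters = anderson_fixed_point(g, np.zeros(4))
    print(z_star, "reached in", iters, "iterations")
```

In a DEQ setting, g would be the network layer f(z, x) with the input x held fixed, and the window size m governs the memory-versus-speed tradeoff the abstract refers to: a larger window stores more cached iterates but typically reduces the iteration count.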
Submission Number: 1