Lipschitz Multiscale Deep Equilibrium Models: A Theoretically Guaranteed and Accelerated Approach
TL;DR: We redesign the architecture of deep equilibrium models based on the Banach fixed-point theorem, guaranteeing convergence and achieving a significant runtime reduction.
Abstract: Deep equilibrium models (DEQs) achieve infinitely deep network representations without stacking layers by solving for fixed points of a layer transformation.
Such models achieve performance comparable to state-of-the-art methods in many large-scale numerical experiments while requiring significantly less memory.
However, DEQs require far more computation time for training and inference than conventional models, because they perform repeated fixed-point iterations, with no convergence guarantee, for each input.
This study therefore restructures the model architecture to guarantee fixed-point convergence, thereby accelerating convergence and reducing computation time.
Our proposed model for image classification, the Lipschitz multiscale DEQ, guarantees fixed-point convergence for both the forward and backward passes via hyperparameter adjustment alone, and achieves up to a 4.75$\times$ speedup in numerical experiments on CIFAR-10 at the cost of a minor drop in accuracy.
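For context, a minimal formal statement of the mechanism the abstract appeals to (notation ours, not taken from the paper): a DEQ defines its output as a fixed point $z^\star$ of a parameterized layer map $f_\theta$, and by the Banach fixed-point theorem, plain iteration converges geometrically whenever $f_\theta$ is a contraction in $z$ with Lipschitz constant $L < 1$:

$$
z^\star = f_\theta(z^\star, x), \qquad
\|f_\theta(z_1, x) - f_\theta(z_2, x)\| \le L\,\|z_1 - z_2\| \;\text{ with } L < 1
\;\Longrightarrow\;
\|z_k - z^\star\| \le L^k\,\|z_0 - z^\star\|, \quad z_{k+1} = f_\theta(z_k, x).
$$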
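Below is a minimal PyTorch sketch of this general idea, not the authors' implementation: a single layer whose weight is spectrally normalized and scaled so its Lipschitz constant in $z$ stays below one, making plain fixed-point iteration provably convergent. The layer design, dimensions, and tolerance are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContractiveDEQLayer(nn.Module):
    """Sketch of a contractive DEQ layer: f(z, x) = tanh(c * W z + U x + b).

    spectral_norm constrains ||W||_2 <= 1 (via power iteration), and tanh is
    1-Lipschitz, so scaling by c < 1 makes f a strict contraction in z.
    """
    def __init__(self, dim, lipschitz_bound=0.9):
        super().__init__()
        self.W = nn.utils.spectral_norm(nn.Linear(dim, dim, bias=False))
        self.U = nn.Linear(dim, dim)
        self.c = lipschitz_bound  # hyperparameter controlling the contraction

    def forward(self, z, x):
        return torch.tanh(self.c * self.W(z) + self.U(x))

def solve_fixed_point(layer, x, tol=1e-4, max_iter=100):
    """Plain fixed-point iteration z_{k+1} = f(z_k, x).

    Since f is a contraction with constant c < 1, the Banach fixed-point
    theorem guarantees geometric convergence: ||z_k - z*|| <= c^k ||z_0 - z*||.
    """
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_next = layer(z, x)
        if (z_next - z).norm() < tol:
            return z_next
        z = z_next
    return z

layer = ContractiveDEQLayer(dim=32)
x = torch.randn(8, 32)
with torch.no_grad():
    z_star = solve_fixed_point(layer, x)
```

Because the contraction constant is explicit, the number of iterations needed to reach a given tolerance can be bounded a priori, which is in the spirit of the runtime reduction the abstract claims.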
Submission Number: 1124