Estimating the Lipschitz constant of deep neural networks is of growing interest, as it informs both generalisability and adversarial robustness. Convolutional neural networks (CNNs), in particular, underpin much of the recent success in computer vision applications. Existing methods for estimating the Lipschitz constant can be tight but scale poorly when applied to CNNs. In this work, we propose a novel method to accelerate Lipschitz constant estimation for CNNs. The core idea is to divide a large convolutional block, via a joint layer- and width-wise partition, into a collection of smaller blocks. We prove an upper bound on the Lipschitz constant of the larger block in terms of the Lipschitz constants of the smaller blocks. Through a range of experiments, we demonstrate enhanced scalability and accuracy comparable to that of existing baselines.
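
To make the compositional idea concrete, below is a minimal sketch of how per-block Lipschitz estimates can combine into a bound for a larger block. It illustrates only the standard layer-wise result $L(f_k \circ \dots \circ f_1) \le \prod_k L(f_k)$, not the paper's joint layer- and width-wise bound; the function names and the power-iteration estimator for a single block's spectral norm are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_norm_power_iteration(W, n_iter=50):
    """Estimate the Lipschitz constant (spectral norm) of one small
    linear block, given its matrix form W, via power iteration."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)   # left singular vector estimate
        v = W.T @ u
        v /= np.linalg.norm(v)   # right singular vector estimate
    return float(u @ (W @ v))    # Rayleigh-quotient estimate of sigma_max(W)

def composed_lipschitz_bound(block_constants):
    """Combine per-block constants for a sequential (layer-wise)
    partition: L(f_k o ... o f_1) <= prod_k L(f_k)."""
    return float(np.prod(block_constants))

# Usage: three small blocks standing in for the partition of a larger one.
blocks = [np.random.randn(16, 16) / 4.0 for _ in range(3)]
per_block = [spectral_norm_power_iteration(W) for W in blocks]
print(composed_lipschitz_bound(per_block))
```

As the abstract suggests, the benefit of the partition is that tight but expensive estimators need only be run on the smaller blocks, with the proven upper bound then assembling their constants into a bound for the original convolutional block.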