Constrained Implicit Learning Framework for Neural Network Sparsification

Published: 05 Sept 2024 · Last Modified: 16 Oct 2024 · ACML 2024 Conference Track · CC BY 4.0
Keywords: implicit models; neural network sparsification; constrained LASSO
Abstract: This paper presents a novel approach to sparsifying neural networks by transforming them into implicit models characterized by an equilibrium equation rather than the conventional hierarchical layer structure. Unlike traditional sparsification techniques that rely on the network structure or specific loss functions, our method reduces the task to a constrained least-squares problem with sparsity-inducing constraints or penalties. We also introduce a scalable, parallelizable algorithm that addresses the computational cost of this transformation while maintaining efficiency. Experimental results on the CIFAR-100 and 20NewsGroup datasets demonstrate the effectiveness of our method, particularly at high pruning rates. The approach offers a versatile and efficient solution for reducing the number of neural network parameters. Furthermore, we observe that a moderate subset of the training data suffices to achieve competitive performance, highlighting the robustness and information-capturing capability of our approach.
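Since the abstract's core reduction is from sparsification to independent sparsity-penalized least-squares (constrained LASSO) subproblems, a minimal sketch of one such per-row subproblem may help fix ideas. This is illustrative only, not the authors' implementation: the function name `sparsify_row`, the ISTA (proximal gradient) solver, and the penalty weight `lam` are assumptions for exposition. Under those assumptions, `X` would collect states gathered from the trained network, `z` the matching targets for one row of the implicit model's weight matrix, and each row could be solved in parallel.

```python
import numpy as np

def sparsify_row(X, z, lam=0.1, n_iter=500):
    """Illustrative LASSO subproblem (not the paper's exact algorithm):
    minimize 0.5 * ||X w - z||^2 + lam * ||w||_1 via ISTA.
    One such problem per row of the implicit weight matrix,
    so rows can be processed in parallel."""
    n, d = X.shape
    # Step size from the Lipschitz constant of the gradient,
    # i.e. the largest eigenvalue of X^T X.
    lr = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    for _ in range(n_iter):
        # Gradient step on the least-squares term.
        grad = X.T @ (X @ w - z)
        w = w - lr * grad
        # Soft-thresholding: proximal operator of the l1 penalty,
        # which drives small coefficients exactly to zero.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w
```

Larger values of the hypothetical `lam` would yield sparser rows, i.e. higher pruning rates, at the cost of a larger least-squares residual.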
Primary Area: Deep Learning (architectures, deep reinforcement learning, generative models, deep learning theory, etc.)
Student Author: Yes
Submission Number: 8