Inducing Semi-Structured Sparsity by Masking for Efficient Model Inference in Convolutional Networks

Published: 10 Oct 2024, Last Modified: 30 Oct 2024, FITML 2024 Poster, License: CC BY 4.0
Keywords: Convolutional Models, CNNs, Sparsity, Semi-structured Sparsity, Masking, Inference Acceleration, Computer Vision, Efficiency, Foundation Models, Hardware Acceleration
TL;DR: The paper presents a novel method to cheaply learn semi-structured sparsity patterns for convolutional kernels, accelerating CNNs with readily available hardware support while incurring no loss of performance.
Abstract: The crucial role of convolutional models, both as standalone vision models and as backbones in foundation models, necessitates effective acceleration techniques. This paper proposes a novel method to learn semi-structured sparsity patterns for convolution kernels in the form of maskings, enabling the use of readily available hardware acceleration. The approach accelerates convolutional models more than two-fold during inference without decreasing model performance. At the same time, the original model weights and structure remain unchanged, thus keeping the model easily updatable. Beyond the immediate practical use, the effect of maskings on prediction is easily quantifiable. Therefore, guarantees on model predictions under maskings are derived, showing stability bounds for learned maskings even after updates to the underlying model.
Submission Number: 24
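
To make the idea in the abstract concrete, below is a minimal sketch of applying a 2:4 semi-structured sparsity mask (the pattern supported by common sparse tensor-core hardware) to a convolution kernel while leaving the stored weights untouched. This is not the paper's method: the mask here is chosen by weight magnitude purely for illustration, whereas the paper learns its maskings, and the helper name `two_to_four_mask` is hypothetical.

```python
import torch
import torch.nn.functional as F

def two_to_four_mask(weight: torch.Tensor) -> torch.Tensor:
    # 2:4 semi-structured sparsity: in every contiguous group of four
    # weights, keep the two with the largest magnitude and zero the rest.
    flat = weight.reshape(-1, 4)
    _, drop = flat.abs().topk(2, dim=1, largest=False)  # two smallest per group
    mask = torch.ones_like(flat)
    mask.scatter_(1, drop, 0.0)
    return mask.reshape(weight.shape)

# Hypothetical usage: a masked forward pass. The original conv.weight
# stays unchanged, so the model remains easy to update.
conv = torch.nn.Conv2d(16, 32, kernel_size=3, bias=False)
mask = two_to_four_mask(conv.weight.detach())
x = torch.randn(1, 16, 28, 28)
y = F.conv2d(x, conv.weight * mask, padding=1)
```

Note that the dense multiply above only demonstrates that masking is applied on the fly; realizing an actual speed-up on supporting hardware would additionally require converting the masked weights into the hardware's compressed sparse storage format.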