Balancing Model Efficiency and Performance: Adaptive Pruner for Long-tailed Data

Published: 01 May 2025 · Last Modified: 23 Jul 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Long-tailed distribution datasets are prevalent in many machine learning tasks, yet existing neural network models still face significant challenges when handling such data. This paper proposes a novel adaptive pruning strategy, LTAP (Long-Tailed Adaptive Pruner), aimed at balancing model efficiency and performance to better address the challenges posed by long-tailed data distributions. LTAP introduces multi-dimensional importance scoring criteria and designs a dynamic weight adjustment mechanism to adaptively determine the pruning priority of parameters for different classes. By protecting parameters critical to tail classes, LTAP significantly improves computational efficiency while maintaining model performance. The method combines the strengths of long-tailed learning and neural network pruning, overcoming the limitations of existing approaches on imbalanced data. Extensive experiments demonstrate that LTAP outperforms existing methods on various long-tailed datasets, achieving a favorable balance among model compression rate, computational efficiency, and classification accuracy. This work provides new insights into model optimization for long-tailed learning and offers a practical route to improving neural network performance on imbalanced datasets. The code is available at https://github.com/DataLab-atom/LT-VOTE.
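To make the abstract's mechanism concrete, here is a minimal sketch of tail-aware, multi-criterion pruning. It is an illustration of the general idea (combining several normalized importance criteria and boosting parameters that tail classes depend on), not the authors' LTAP implementation; all names (`adaptive_prune_mask`, `class_sensitivity`, `tail_boost`) are hypothetical.

```python
import numpy as np

def adaptive_prune_mask(weights, grads, class_sensitivity, tail_boost, prune_ratio):
    """Illustrative tail-aware pruning mask (NOT the paper's LTAP algorithm).

    weights, grads:        per-parameter arrays of the same shape.
    class_sensitivity:     hypothetical per-parameter score in [0, 1] estimating
                           how much tail-class predictions depend on each parameter.
    tail_boost:            weight of the tail-sensitivity criterion.
    prune_ratio:           fraction of parameters to remove.
    Returns a boolean mask: True = keep the parameter.
    """
    # Criterion 1: weight magnitude. Criterion 2: first-order saliency |w * g|.
    magnitude = np.abs(weights)
    saliency = np.abs(weights * grads)

    # Normalize each criterion to [0, 1] so they combine on a common scale.
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    # Tail-critical parameters get an additive boost, so they survive pruning
    # even when their magnitude or saliency alone looks small.
    score = norm(magnitude) + norm(saliency) + tail_boost * class_sensitivity

    # Prune the lowest-scoring prune_ratio fraction of parameters.
    k = int(round(prune_ratio * weights.size))
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(score.ravel(), k - 1)[k - 1]
    return score > threshold
```

In this toy setting, a parameter with a large weight but no tail relevance can be pruned before a tiny weight that tail classes rely on, which is the kind of trade-off the abstract describes.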
Lay Summary: In the real world, AI models struggle with imbalance; they learn to see common things like cats and dogs but overlook rare ones like endangered species. This problem is often worsened when we try to make models more efficient through "pruning," a process that can accidentally erase the very knowledge needed to identify these rare cases. We developed an intelligent pruning strategy called LTAP. It acts with surgical precision on the model’s knowledge, first identifying which parts are essential for recognizing rare classes and then carefully protecting them. This ensures that only true redundancy is trimmed away. The result is an AI that is not only much smaller and faster but also better at its job, especially on rare categories. This breakthrough allows us to create compact, fair, and dependable AI for devices with limited power, enabling them to handle critical tasks like spotting unusual hazards for self-driving cars or finding minute flaws in manufacturing.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/DataLab-atom/LT-VOTE
Primary Area: General Machine Learning->Representation Learning
Keywords: Neural network pruning, Long-tail learning
Submission Number: 15821