Abstract: Neural networks (NNs) often need to be lightweight and fast for practical deployment. Various NN compression techniques, including tensor decomposition, aim to achieve this goal. Tensor decomposition compresses a layer by computing a low-rank decomposition of its weight tensor and replacing the layer with a sequence of smaller layers built from the decomposed factors; fine-tuning is then performed to regain the lost accuracy. However, while NNs are known to exhibit robustness issues, tensor decomposition for compression has primarily been evaluated in terms of accuracy. In this study, we investigate the impact of tensor decomposition on the robustness of large convolutional neural network (CNN) models. Through multiple experiments on different models trained on ImageNet, we demonstrate that tensor decomposition can preserve model robustness. Furthermore, we observe that the choice of fine-tuning learning rate plays a crucial role: a high learning rate may improve accuracy but significantly compromises robustness, whereas a low learning rate effectively restores robustness, albeit with a smaller accuracy gain. These findings offer a practical way to preserve model robustness without resorting to adversarial training, eliminating the need for additional knowledge of the defense methods used to train the original model.
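As a concrete illustration of the pipeline the abstract describes, the sketch below decomposes a single convolutional layer and swaps it for a sequence of smaller layers, assuming PyTorch. It uses a Tucker-2 (channel-mode) decomposition computed with a single truncated-HOSVD pass; the function name `tucker2_decompose_conv`, the rank arguments, and the demo shapes are all hypothetical, and the paper's actual decomposition method and rank-selection scheme may differ.

```python
import torch
import torch.nn as nn

def tucker2_decompose_conv(conv: nn.Conv2d, rank_in: int, rank_out: int) -> nn.Sequential:
    """Replace a Conv2d with a Tucker-2 factorized sequence:
    1x1 conv (C_in -> rank_in) -> KxK conv (rank_in -> rank_out) -> 1x1 conv (rank_out -> C_out).
    Uses a single truncated-HOSVD pass on the channel modes (no ALS/HOOI refinement).
    """
    assert conv.groups == 1, "sketch assumes an ordinary (non-grouped) convolution"
    W = conv.weight.data  # (C_out, C_in, kH, kW)
    C_out, C_in, kH, kW = W.shape

    # Leading left singular vectors of the mode-0 (output-channel) unfolding.
    U0 = torch.linalg.svd(W.reshape(C_out, -1), full_matrices=False).U[:, :rank_out]
    # Leading left singular vectors of the mode-1 (input-channel) unfolding.
    U1 = torch.linalg.svd(
        W.permute(1, 0, 2, 3).reshape(C_in, -1), full_matrices=False
    ).U[:, :rank_in]

    # Core tensor: project W onto the two channel subspaces.
    G = torch.einsum("oikl,or,is->rskl", W, U0, U1)  # (rank_out, rank_in, kH, kW)

    first = nn.Conv2d(C_in, rank_in, kernel_size=1, bias=False)
    core = nn.Conv2d(rank_in, rank_out, kernel_size=(kH, kW),
                     stride=conv.stride, padding=conv.padding,
                     dilation=conv.dilation, bias=False)
    last = nn.Conv2d(rank_out, C_out, kernel_size=1, bias=conv.bias is not None)

    first.weight.data = U1.t().reshape(rank_in, C_in, 1, 1)
    core.weight.data = G
    last.weight.data = U0.reshape(C_out, rank_out, 1, 1)
    if conv.bias is not None:
        last.bias.data = conv.bias.data.clone()
    return nn.Sequential(first, core, last)

if __name__ == "__main__":
    # Illustrative shapes only: compress a random 64->128 3x3 conv.
    conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
    compressed = tucker2_decompose_conv(conv, rank_in=16, rank_out=32)
    x = torch.randn(1, 64, 56, 56)
    # Approximation error before any fine-tuning.
    print((conv(x) - compressed(x)).abs().mean().item())
```

After such a replacement, the compressed model would be fine-tuned to recover accuracy; per the abstract's finding, choosing a low fine-tuning learning rate is what restores accuracy while preserving robustness.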