Abstract: This paper proposes a method that uses error correction to restore model parameters changed by unintended training to their values before that training. Restoring parameters with an error-correcting code allows a model to avoid both relearning from scratch and storing a full copy of its parameters in the cloud. Error correction recovers a message even when errors are introduced during its transmission: a codeword is generated by encoding the message, and decoding that codeword corrects errors in the message. In this paper, we regard parameter changes caused by training as errors and restore the changed parameters to their prior values by error correction. The proposed method reduces the cost of error correction while maintaining inference accuracy by restoring only the more critical parameters. The metric used to select parameters for error correction is the influence of each parameter on training, which we define using a model-pruning technique that observes the backpropagation process. We experimentally evaluated the proposed error correction for model parameters under two scenarios: injecting incorrect data to mislead the model, and over-specializing the model, namely overfitting, by training for more epochs than necessary. We used convolutional neural networks (CNNs) and a Vision Transformer (ViT) trained for image classification in the experiments. The experimental results show that error correction restored up to 60.75% of the inference outputs to those obtained before the unintended training, even when it corrected errors in only 80% of the parameters and thereby saved more than 33.97% of the computational cost.
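The following is a minimal sketch of the idea described above, not the paper's implementation. It assumes a PyTorch model, the `reedsolo` Reed-Solomon library as the error-correcting code, and the pruning-style score |weight * gradient| as a stand-in for the paper's backpropagation-based influence metric; the number of protected parameters and parity bytes are chosen so the message fits in a single Reed-Solomon block.

```python
# Illustrative sketch: protect the most influential parameters with a
# Reed-Solomon code, then restore them after unintended training.
# Assumptions (not from the paper): reedsolo as the ECC, |w * dL/dw| as the
# influence metric, and a toy model/batch.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from reedsolo import RSCodec

N_PROTECT = 32   # critical parameters to protect; 32 float32 values = 128 bytes,
                 # small enough to fit in one Reed-Solomon block (<= 255 bytes)
NSYM = 64        # parity bytes; corrects up to NSYM // 2 = 32 corrupted bytes

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))

# 1) Influence metric: score every parameter by |w * dL/dw| on one batch and
#    keep the indices of the most influential ones. flat_w is a detached copy
#    standing in for the model's flattened parameter vector.
loss = F.cross_entropy(model(x), y)
loss.backward()
flat_w = torch.cat([p.detach().flatten() for p in model.parameters()])
flat_g = torch.cat([p.grad.detach().flatten() for p in model.parameters()])
critical = torch.topk((flat_w * flat_g).abs(), N_PROTECT).indices

# 2) Encode the critical parameters and store only the parity (redundancy),
#    which is much smaller than a full copy of the parameters.
rsc = RSCodec(NSYM)
message = flat_w[critical].numpy().astype(np.float32).tobytes()
parity = bytes(rsc.encode(message))[len(message):]

# 3) Unintended training changes some parameters (simulated here by
#    perturbing a few of the protected values).
flat_w[critical[:3]] += 0.5

# 4) Error correction: decode the corrupted values together with the stored
#    parity to restore the critical parameters to their prior values.
#    Recent reedsolo versions return (message, message+ecc, errata positions).
corrupted = flat_w[critical].numpy().astype(np.float32).tobytes()
decoded = rsc.decode(bytearray(corrupted) + bytearray(parity))[0]
restored = np.frombuffer(bytes(decoded), dtype=np.float32).copy()
flat_w[critical] = torch.from_numpy(restored)
```

The sketch only corrects the protected subset; parameters outside that subset keep whatever values the unintended training left them with, which mirrors the paper's trade-off between error-correcting cost and how closely the restored model matches its pre-training behavior.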