Abstract: DNN repair is an effective technique applied after training to enhance the class-specific accuracy of classifier models in settings where a low failure rate is required on specific classes. The repair methods introduced in recent studies assume that they are applied to fully trained models. In this paper, we argue that this may not always be the best choice. We analyse the performance of DNN models under various combinations of training time and repair. Through carefully designed experiments on two real-world datasets and a purpose-built assessment score, we show that applying DNN repair earlier in the training process, rather than only at its end, can be beneficial. We therefore encourage the research community to consider the question of when to apply DNN repair during model development.