Keywords: Model Developmental Safety, Continual Learning, Vision-Language Models, Constrained Optimization
TL;DR: We propose a safety-centric framework that ensures zero forgetting during the iterative model development process by utilizing data-dependent constraints.
Abstract: In the real world, a learning-enabled system usually undergoes multiple cycles of model development to enhance its ability to handle difficult or emerging tasks. This continual model development process raises a significant issue: development aimed at improving new capabilities may inadvertently degrade capabilities of the old model, a phenomenon known as catastrophic forgetting. Existing continual learning studies focus on mitigating catastrophic forgetting by trading off performance between previous and new tasks to ensure good average performance. However, they are inadequate for many applications, especially in safety-critical domains, as failure to strictly preserve the performance of the old model not only introduces safety risks and uncertainties but also imposes substantial expenses for re-improving and re-validating existing properties. To address this issue, we introduce **model developmental safety as a guarantee** of a learning system: the new model must strictly preserve the existing protected capabilities of the old model. To ensure model developmental safety, we present a safety-centric framework that formulates model developmental safety as data-dependent constraints, and we apply it to developing a pretrained vision-language model (namely, the CLIP model) to acquire new capabilities or improve existing capabilities in image classification. Our experiments on autonomous driving and scene recognition datasets demonstrate the efficacy of the proposed approach.
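The data-dependent constraints described in the abstract can be sketched as a constrained optimization problem of the following form (this notation is our own illustration, not necessarily the paper's exact formulation):

```latex
\min_{w} \; \mathcal{L}_{\mathrm{new}}(w)
\quad \text{s.t.} \quad
\mathcal{L}_{k}(w) \le \mathcal{L}_{k}(w_{\mathrm{old}}), \quad k \in \mathcal{P},
```

where $w_{\mathrm{old}}$ denotes the parameters of the old model, $\mathcal{L}_{\mathrm{new}}$ is the loss on the new or targeted tasks, and each constraint requires the new model's loss $\mathcal{L}_{k}$ on a protected capability $k \in \mathcal{P}$, estimated from held-out data for that capability, to be no worse than the old model's. This makes the constraints data-dependent: each one is evaluated on the data associated with a protected capability rather than imposed on the parameters directly.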
Submission Number: 96