CONTINUAL MODEL EVOLVEMENT WITH INNER-PRODUCT RESTRICTION

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: With the rapid deployment of pre-trained models in natural language processing (NLP) applications, it is natural to expect these models to continuously evolve when a task demands more complicated inference ability from the model. Existing continual learning (CL) problem setups and methods focus on handling out-of-distribution (OOD) data streams and cannot address this new challenge. We propose the continual model evolvement (CME) problem formulation, which poses a new challenge for fine-tuned pre-trained models: they must keep evolving during deployment. We formalize the problem and introduce multiple metrics to assess current CL methods from different aspects. Further, we propose a strong method, dubbed inner-product restriction, as a head start toward solving the CME problem. Experimental results indicate that CME remains challenging for currently deployed pre-trained models, while our proposed method provides a strong boost over previous CL methods, suggesting that exploring the CME challenge is essential for better deployment of pre-trained models in NLP applications.
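The abstract names the inner-product restriction method but does not describe its mechanics. As a rough, purely illustrative sketch (not the paper's actual algorithm), inner-product-based restrictions in continual learning typically project the current gradient whenever its inner product with a reference gradient from earlier behaviour is negative, in the spirit of A-GEM-style projection; all function and variable names below are hypothetical.

```python
import torch
from torch import nn


def flat_grad(model: nn.Module) -> torch.Tensor:
    # Concatenate all parameter gradients into one 1-D vector.
    return torch.cat([p.grad.reshape(-1)
                      for p in model.parameters() if p.grad is not None])


def restrict_by_inner_product(cur_grad: torch.Tensor,
                              ref_grad: torch.Tensor) -> torch.Tensor:
    # Hypothetical inner-product restriction: if the inner product with a
    # reference gradient (e.g., computed on replayed examples the deployed
    # model already handles correctly) is non-negative, keep the update;
    # otherwise remove the conflicting component along the reference gradient.
    dot = torch.dot(cur_grad, ref_grad)
    if dot >= 0:
        return cur_grad
    return cur_grad - (dot / torch.dot(ref_grad, ref_grad)) * ref_grad
```

In such a scheme, the restricted gradient would be scattered back into the model's parameters before the optimizer step, so that evolving on new, harder examples does not overwrite behaviour the deployed model already exhibits on previously served data.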
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)