Mixed Setting Training Methods for Incremental Slot-Filling Tasks

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Keywords: incremental learning, slot-filling
TL;DR: We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train; it yields a significant speedup over always fully training and avoids catastrophic forgetting.
Abstract: Model training remains a dominant financial cost and time investment in machine learning applications. Developing and debugging models often involve iterative training, further exacerbating this issue. With growing interest in increasingly complex models, there is a need for techniques that reduce overall training effort. While incremental training can save substantial time and cost by training an existing model on a small subset of data, little work has explored policies for deciding when incremental training yields adequate model performance and when full retraining is required. We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train. We call this setting of non-deterministic full or incremental training "Mixed Setting Training". In evaluations on slot-filling tasks, we find that this algorithm provides a bounded error, avoids catastrophic forgetting, and results in a significant speedup over a policy of always fully training.
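
The abstract leaves the decision policy unspecified; the sketch below is only a hypothetical illustration of one plausible mixed-setting policy: incrementally update the model on new data, then fall back to full retraining when held-out performance (measured on data that includes earlier examples, to catch catastrophic forgetting) degrades beyond a tolerance. The use of scikit-learn's SGDClassifier, the threshold `tau`, and the helper name `mixed_setting_update` are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch only: the paper's actual decision rule is not given in
# the abstract. Names (mixed_setting_update, tau) are illustrative assumptions.
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score


def mixed_setting_update(model, X_new, y_new, X_val, y_val, classes,
                         X_all=None, y_all=None, tau=0.02):
    """Incrementally update `model`; fall back to full retraining if
    held-out accuracy drops by more than `tau`."""
    baseline = accuracy_score(y_val, model.predict(X_val))
    model.partial_fit(X_new, y_new, classes=classes)   # incremental step
    updated = accuracy_score(y_val, model.predict(X_val))
    # If the incremental step degraded held-out performance (e.g. due to
    # forgetting) beyond the tolerance, retrain from scratch on all data.
    if baseline - updated > tau and X_all is not None:
        model = SGDClassifier().fit(X_all, y_all)
    return model
```

Under this kind of policy, the validation set acts as the bounded-error check: incremental updates are accepted as long as they stay within `tau` of the pre-update performance, and full retraining is triggered only when that bound is violated.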