Abstract: We introduce Progressive Prompts – a simple and efficient approach for continual learning in language models. Our method allows forward transfer and resists catastrophic forgetting, without relying on data replay or a large number
of task-specific parameters. Progressive Prompts learns a new soft prompt for
each task and sequentially concatenates it with the previously learned prompts,
while keeping the base model frozen. Experiments on standard continual learning
benchmarks show that our approach outperforms state-of-the-art methods, with an
improvement of >20% in average test accuracy over the previous best-performing
method on the T5 model. We also explore a more challenging continual learning
setup with longer sequences of tasks and show that Progressive Prompts significantly outperforms prior methods.
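To make the mechanism concrete, below is a minimal sketch of the prompt-concatenation idea described above, assuming PyTorch and a frozen base model that exposes its input embeddings; the class name, prompt length, and initialization are illustrative choices, not the paper's exact implementation.

```python
# Minimal sketch of the Progressive Prompts idea (assumptions: PyTorch; the
# base model is frozen and provides input embeddings of shape
# (batch, seq_len, embed_dim); prompt length and init scale are illustrative).
import torch
import torch.nn as nn


class ProgressivePrompts(nn.Module):
    def __init__(self, embed_dim: int, prompt_len: int = 10):
        super().__init__()
        self.embed_dim = embed_dim
        self.prompt_len = prompt_len
        # One learnable soft prompt per task, appended as tasks arrive.
        self.prompts = nn.ParameterList()

    def add_task(self) -> nn.Parameter:
        """Create a fresh soft prompt for a new task; freeze earlier prompts."""
        for p in self.prompts:
            p.requires_grad_(False)
        new_prompt = nn.Parameter(torch.randn(self.prompt_len, self.embed_dim) * 0.02)
        self.prompts.append(new_prompt)
        return new_prompt

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        """Prepend the concatenation of all learned prompts to the input embeddings."""
        batch = input_embeds.size(0)
        # Concatenate prompts in the order their tasks were learned.
        all_prompts = torch.cat(list(self.prompts), dim=0)           # (k * prompt_len, d)
        all_prompts = all_prompts.unsqueeze(0).expand(batch, -1, -1)  # broadcast to batch
        return torch.cat([all_prompts, input_embeds], dim=1)
```

In this sketch, only the newest prompt receives gradients during training on the current task, while the base model's parameters and all previously learned prompts stay frozen, which is what allows forward transfer without catastrophic forgetting.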