RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning

12 Jun 2020 (modified: 29 Sept 2024) · LifelongML@ICML2020
Student First Author: Yes
Keywords: Continual learning, image captioning, LSTMs, catastrophic forgetting
Previously Published: Work submitted to NeurIPS 2020
Abstract: Research on continual learning has led to a variety of approaches to mitigating catastrophic forgetting in feed-forward classification networks. Until now, surprisingly little attention has been devoted to continual learning of recurrent models applied to problems like image captioning. In this paper we take a systematic look at continual learning of LSTM-based models for image captioning. We propose an attention-based approach that explicitly accommodates the transient nature of vocabularies in continual image captioning tasks -- i.e., that task vocabularies are not disjoint. We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems. We apply our approaches to the incremental image captioning problem on two new continual learning benchmarks we define using the MS-COCO and Flickr30k datasets. Our results demonstrate that RATT is able to sequentially learn five captioning tasks while incurring no forgetting of previously learned ones.
TL;DR: We analyze catastrophic forgetting in recurrent LSTM networks for image captioning and propose an attention-based approach to avoiding it for continual image captioning problems
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/ratt-recurrent-attention-to-transient-tasks/code)
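
For readers unfamiliar with attention-based masking for continual learning, the sketch below illustrates the general idea evoked by the abstract: task-conditioned attention masks over recurrent hidden units combined with a task-specific mask over the output vocabulary, so that overlapping (transient) task vocabularies remain reachable. This is a minimal illustration under our own assumptions; names such as `TaskMaskedLSTMCell`, `vocab_mask`, and the scaling factor `s` are hypothetical and not taken from the paper or its released code.

```python
# Minimal sketch (PyTorch): hard-attention-style task masks for an LSTM captioner.
# Assumption: per-task embeddings produce scaled-sigmoid gates over hidden units,
# and a binary vocabulary mask restricts the output layer to words available to
# the current task. Illustrative only, not the authors' implementation.
import torch
import torch.nn as nn

class TaskMaskedLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, num_tasks):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        # One learnable mask embedding per task, gating the hidden state.
        self.task_embed = nn.Embedding(num_tasks, hidden_size)

    def forward(self, x, state, task_id, s=50.0):
        h, c = self.cell(x, state)
        # A scaled sigmoid approaches a binary attention mask as s grows.
        mask = torch.sigmoid(s * self.task_embed(task_id))
        return h * mask, c

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size, embed_size, hidden_size, num_tasks):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.rnn = TaskMaskedLSTMCell(embed_size, hidden_size, num_tasks)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, state, task_id, vocab_mask):
        # vocab_mask: (vocab_size,) binary tensor marking words in the current
        # task's vocabulary; tasks may share words, so masks can overlap.
        h, c = self.rnn(self.embed(tokens), state, task_id)
        logits = self.out(h)
        # Suppress words outside the current task's vocabulary at this step.
        logits = logits.masked_fill(vocab_mask == 0, float("-inf"))
        return logits, (h, c)
```

In a sketch like this, freezing (or heavily damping gradients on) units whose masks were active for earlier tasks is what prevents forgetting, while shared vocabulary entries stay usable across tasks.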