CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

Published: 17 Sept 2022 · Last Modified: 03 Jul 2024 · NeurIPS 2022 Datasets and Benchmarks · Readers: Everyone
Keywords: Vision-and-language, continual learning, multimodal, lifelong learning
TL;DR: This paper presents CLiMB, a benchmark to study the challenge of learning vision-language tasks in a continual learning setting, and to systematically evaluate how upstream continual learning can rapidly transfer to new multi- and unimodal tasks.
Abstract: Current state-of-the-art vision-and-language models are evaluated on tasks either individually or in a multi-task setting, overlooking the challenges of continually learning (CL) tasks as they arrive. Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks. We present CLiMB, a benchmark to study the challenge of learning multimodal tasks in a CL setting, and to systematically evaluate how upstream continual learning can rapidly generalize to new multimodal and unimodal tasks. CLiMB includes implementations of several CL algorithms and a modified Vision-Language Transformer (ViLT) model that can be deployed on both multimodal and unimodal tasks. We find that common CL methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB will facilitate research on a new class of CL algorithms for this challenging multimodal setting.
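The abstract describes a two-phase evaluation protocol: upstream continual learning over a sequence of multimodal tasks, followed by low-shot transfer to new multimodal and unimodal tasks. The sketch below is only an illustration of that protocol; the function names (`train_with_cl`, `evaluate_forgetting`, `lowshot_transfer`) and task lists are hypothetical placeholders, not the actual CLiMB API, which lives in the GitHub repository linked below.

```python
# Illustrative sketch of the two-phase CLiMB protocol described in the abstract.
# All names here are placeholders/assumptions, not the benchmark's real API.

upstream_tasks = ["VQAv2", "NLVR2", "SNLI-VE", "VCR"]            # example multimodal CL sequence
downstream_tasks = ["SST-2", "IMDb", "ImageNet", "Places365"]    # example unimodal transfer tasks

def train_with_cl(model, task, cl_algorithm):
    """Placeholder: fine-tune the encoder on one upstream task using a CL method
    (e.g., experience replay or EWC) to limit forgetting of earlier tasks."""
    return model

def evaluate_forgetting(model, seen_tasks):
    """Placeholder: re-evaluate every previously seen task after learning a new one."""
    return {t: 0.0 for t in seen_tasks}

def lowshot_transfer(model, task):
    """Placeholder: adapt the upstream-trained encoder to a new task with limited data."""
    return 0.0

model = object()          # stands in for the modified ViLT encoder
cl_algorithm = "replay"   # or another CL algorithm implemented in the benchmark

# Phase 1: upstream continual learning over a sequence of multimodal tasks.
for i, task in enumerate(upstream_tasks):
    model = train_with_cl(model, task, cl_algorithm)
    print(evaluate_forgetting(model, upstream_tasks[: i + 1]))

# Phase 2: low-shot transfer of the upstream-trained model to new tasks.
for task in downstream_tasks:
    print(task, lowshot_transfer(model, task))
```

Phase 1 measures how well a CL algorithm mitigates forgetting as multimodal tasks arrive; Phase 2 measures whether the knowledge accumulated upstream transfers to unseen multimodal and unimodal tasks.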
Author Statement: Yes
URL: https://github.com/GLAMOR-USC/CLiMB
License: MIT License
Supplementary Material: pdf
Contribution Process Agreement: Yes
In Person Attendance: Yes
Community Implementations: 2 code implementations (via CatalyzeX): https://www.catalyzex.com/paper/climb-a-continual-learning-benchmark-for/code