Abstract: In class-incremental learning, an agent with limited resources needs to learn a sequence of classification tasks, forming an ever-growing classification problem, under the constraint that data from previous tasks cannot be accessed. The main difference from task-incremental learning, where a task-ID is available at inference time, is that the learner must also perform cross-task discrimination, i.e., distinguish between classes that have never been seen together. Approaches to this problem are numerous and mostly rely on an external memory (buffer) of non-negligible size. In this paper, we ablate the learning of cross-task features and study its influence on the performance of basic replay strategies used for class-IL. We also define a new forgetting measure for class-incremental learning, and find that forgetting is not the principal cause of low performance. Our experimental results show that future algorithms for class-incremental learning should not only prevent forgetting, but also aim to improve the quality of cross-task features and the knowledge transfer between tasks. This is especially important when tasks contain limited amounts of data.
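To make the task-IL vs. class-IL distinction concrete, here is a minimal sketch of how inference differs between the two settings. It assumes a PyTorch-style model whose output head covers all classes seen so far; the names (`predict`, `class_ranges`) are illustrative, not from the paper.

```python
import torch
from typing import Optional

def predict(logits: torch.Tensor,
            class_ranges: dict,
            task_id: Optional[int] = None) -> torch.Tensor:
    """Return predicted class indices for a batch of logits.

    logits: (batch, n_classes_seen_so_far) scores over all classes.
    class_ranges: maps task-ID -> (lo, hi) slice of output units
        belonging to that task (hypothetical bookkeeping).
    task_id: if given, we are in the task-incremental setting.
    """
    if task_id is not None:
        # Task-incremental: the task-ID restricts prediction to that
        # task's own classes, so no cross-task discrimination is needed.
        lo, hi = class_ranges[task_id]
        return lo + logits[:, lo:hi].argmax(dim=1)
    # Class-incremental: predict over every class seen so far, which
    # requires separating classes that were never observed together.
    return logits.argmax(dim=1)
```

In the class-IL branch the model must calibrate scores across tasks it never saw jointly, which is exactly the cross-task feature learning the abstract argues future algorithms should target.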