[Re] BiRT: Bio-inspired Replay in Vision Transformers for Continual Learning

Purdue University ML 2023 Hackathon Reproducibility Challenge Submission · 4 Authors

09 Nov 2023 (modified: 26 Nov 2023) · CC BY 4.0
Keywords: Continual Learning, Vision Transformer, Replication Study
TL;DR: A method that introduces a semantic memory to retain information from previous tasks, which is often lost during continual learning.
Abstract: One important challenge in continual learning is the retention of information from previous tasks, whose loss is known as catastrophic forgetting. In contrast, humans excel at learning without such setbacks, thanks to the brain's rehearsal of abstract representations through a complementary learning system. BiRT introduces the concept of a complementary learning system into Vision Transformers. It aims to tackle overfitting and the lack of diversity in rehearsed representations by injecting various constructive noises at different stages of the architecture.
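The core idea of rehearsing stored representations with constructive noise can be illustrated with a minimal sketch. The buffer contents, shapes, function name, and noise scale below are all illustrative assumptions, not BiRT's actual implementation; it shows only the general pattern of perturbing replayed representations to increase their diversity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semantic-memory buffer: intermediate token
# representations saved from earlier tasks (shapes are arbitrary).
semantic_memory = [rng.standard_normal((4, 16)) for _ in range(8)]

def noisy_replay_batch(memory, batch_size=2, noise_std=0.1, rng=rng):
    """Sample stored representations and perturb them with Gaussian
    noise, so replayed samples are not exact copies of the originals."""
    idx = rng.choice(len(memory), size=batch_size, replace=False)
    batch = np.stack([memory[i] for i in idx])
    noise = rng.normal(scale=noise_std, size=batch.shape)
    return batch + noise  # constructive noise injected at replay time

batch = noisy_replay_batch(semantic_memory)
print(batch.shape)  # (2, 4, 16)
```

In this sketch the noise is applied once at replay; the paper applies noises at several stages of the architecture, which the same pattern could implement per stage.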
Submission Number: 4