Progressive Memory Banks for Incremental Domain Adaptation

Anonymous

16 May 2019 (modified: 22 Oct 2023) · AMTL 2019
Keywords: natural language processing, domain adaptation
TL;DR: We present a neural memory-based architecture for incremental domain adaptation, and provide theoretical and empirical results.
Abstract: This paper addresses the problem of incremental domain adaptation (IDA). We assume the domains arrive sequentially and that we can only access data in the current domain. The goal of IDA is to build a unified model that performs well on all encountered domains. We propose to augment a recurrent neural network (RNN) with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of the RNN transition. The memory bank provides a natural way of performing IDA: when adapting the model to a new domain, we progressively add new slots to the memory bank, which increases the model capacity. We learn the new memory slots and fine-tune the existing parameters by back-propagation. Experiments show that our approach significantly outperforms naive fine-tuning and previous work on IDA, including elastic weight consolidation and progressive neural networks. Compared with expanding hidden states, our approach is more robust on old domains, as shown by both empirical and theoretical results.
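As a rough illustration of the mechanism the abstract describes, the sketch below shows an RNN cell whose transition reads from a directly parameterized memory bank via attention, plus a method that progressively adds new trainable slots when a new domain arrives. This is a minimal sketch under our own assumptions, not the authors' implementation: the class and method names (MemoryAugmentedRNNCell, add_slots), the choice of a GRU cell, and the specific way the memory readout is combined with the hidden state are all hypothetical.

```python
# Minimal sketch (not the paper's code) of an attention-retrieved memory bank
# attached to an RNN transition, with progressive slot expansion for IDA.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, num_slots):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Memory bank: each row ("slot") is a free parameter learned directly.
        self.memory = nn.Parameter(0.01 * torch.randn(num_slots, hidden_size))
        self.query = nn.Linear(hidden_size, hidden_size)

    def add_slots(self, k):
        """Progressively grow the memory bank with k new trainable slots
        (existing slots are kept and can still be fine-tuned)."""
        new = 0.01 * torch.randn(k, self.memory.size(1))
        self.memory = nn.Parameter(torch.cat([self.memory.data, new], dim=0))

    def forward(self, x_t, h):
        # Attend over memory slots using the previous hidden state as the query.
        scores = self.query(h) @ self.memory.t()        # (batch, num_slots)
        attn = F.softmax(scores, dim=-1)
        read = attn @ self.memory                       # (batch, hidden_size)
        # Assumption: feed the memory readout additively into the transition.
        return self.cell(x_t, h + read)

# Usage sketch: run a few steps, then grow the memory when a new domain arrives
# and fine-tune all parameters (old slots, new slots, RNN weights) as usual.
cell = MemoryAugmentedRNNCell(input_size=32, hidden_size=64, num_slots=8)
h = torch.zeros(4, 64)
for x_t in torch.randn(5, 4, 32):   # 5 time steps, batch of 4
    h = cell(x_t, h)
cell.add_slots(4)                   # expand capacity for the new domain
```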
Community Implementations: 2 code implementations listed at https://www.catalyzex.com/paper/arxiv:1811.00239/code
