Abstract: Continual relation extraction incrementally learns to extract relations between entities from unstructured text over a sequence of tasks. State-of-the-art continual relation extraction methods are based on memory replay: they allocate a fixed memory for each incoming task, store part of its training data there, and replay those samples in subsequent tasks. However, memory resources are usually limited in real-world scenarios, and existing methods consider neither this limitation nor how efficiently memory is used. This paper introduces cost-effective memory replay (CEMR), built on existing methods, which stores as much training data as possible for each task and adopts effective sample selection and replacement strategies. CEMR uses the limited memory efficiently for replay and knowledge consolidation on new tasks, which alleviates catastrophic forgetting. Experiments are conducted on three relation extraction datasets against multiple comparison methods. The results show that CEMR outperforms the state-of-the-art methods, demonstrating its effectiveness.
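The abstract does not specify CEMR's exact selection and replacement rules, but the overall scheme it describes can be sketched as a fixed-capacity replay buffer that is re-partitioned as tasks arrive: each new task stores as many samples as its share of the budget allows, while older tasks' stored samples are pruned to make room. The class name, the equal-share quota, and the random selection below are all illustrative assumptions, not the paper's method.

```python
import random

class ReplayMemory:
    """Hypothetical fixed-capacity replay buffer shared across tasks.

    Sketch only: CEMR's actual selection/replacement strategies are not
    given in the abstract, so random selection under an equal per-task
    quota is used as a stand-in.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = {}  # task_id -> list of stored samples

    def add_task(self, task_id, samples):
        # Re-partition the budget so every seen task gets an equal share.
        n_tasks = len(self.buffer) + 1
        quota = self.capacity // n_tasks
        for tid in self.buffer:
            # Replacement: shrink each old task's store to the new quota.
            self.buffer[tid] = random.sample(
                self.buffer[tid], min(quota, len(self.buffer[tid])))
        # Selection: store as many new samples as the quota allows.
        self.buffer[task_id] = random.sample(
            samples, min(quota, len(samples)))

    def replay_batch(self, k):
        # Draw a mixed batch over all stored tasks for rehearsal.
        pool = [s for stored in self.buffer.values() for s in stored]
        return random.sample(pool, min(k, len(pool)))
```

In this sketch, adding a task never exceeds the total budget, and older tasks gracefully give up memory rather than being evicted wholesale, which is the behavior a limited-memory replay method needs.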