Abstract: Continual relation extraction (CRE) aims to learn constantly emerging relations while avoiding forgetting previously learned ones. Existing works often store and replay a fixed set of typical samples to prevent catastrophic forgetting. However, repeatedly replaying these samples can bias their latent features: in this paper, we find that the representations of memory samples gradually lose representativeness and diversity over the course of repeated replay, and that this representation bias severely degrades CRE performance. To address this challenge, we propose a novel CRE framework based on dynamic memory. Specifically, we introduce Large Language Model (LLM) based concept-aware dynamic memory optimization and optimized relation prototypes to mitigate the effects of biased memory-sample representations: the former supplies more suitable training samples for replay, and the latter generates more accurate relation prototypes for prediction. Experimental results demonstrate that our method effectively mitigates biased feature representations and thereby overcomes catastrophic forgetting.