Abstract: Temporal scoping of facts is crucial for completing the temporal dimension of knowledge graphs. Current mainstream methods rely heavily on external resources for mining temporal information. However, noise in external resources, together with the limited ability to adaptively infer non-continuous temporal scopes consisting of multiple ranges, leads to low accuracy in predicting temporal ranges. To address these challenges, a model named JammyTS is proposed, which Joins an attention mechanism and a memory network for Temporal Scoping of facts. Specifically, JammyTS leverages attention to dynamically adjust the weight distribution in the memory network and builds attention capsule-based networks to reduce the impact of noise in external resources. Furthermore, two linear classifiers are trained separately to infer the beginning and end timestamps of facts, enabling the inference of non-continuous temporal ranges. Extensive experiments on three datasets show that JammyTS improves accuracy by up to 12.29% over state-of-the-art methods.
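The abstract names two mechanisms: an attention-weighted memory readout that down-weights noisy external evidence, and two separate classifiers that predict the beginning and end timestamps independently, which is what permits non-continuous temporal ranges. The sketch below is not the authors' implementation; the module name `AttentiveMemoryScoper`, the dimensions, and the number of timestamp bins are all illustrative assumptions intended only to make the described structure concrete.

```python
# Minimal sketch of the two ideas named in the abstract (all names and sizes are assumptions):
# (1) attention that re-weights memory slots, (2) separate begin/end timestamp classifiers.
import torch
import torch.nn as nn


class AttentiveMemoryScoper(nn.Module):
    def __init__(self, fact_dim=256, mem_slots=64, mem_dim=256, num_timestamps=100):
        super().__init__()
        # Learnable memory slots standing in for evidence mined from external resources.
        self.memory = nn.Parameter(torch.randn(mem_slots, mem_dim))
        self.query_proj = nn.Linear(fact_dim, mem_dim)
        # Separate linear heads for the beginning and end timestamps of a fact.
        self.begin_head = nn.Linear(fact_dim + mem_dim, num_timestamps)
        self.end_head = nn.Linear(fact_dim + mem_dim, num_timestamps)

    def forward(self, fact_emb):
        # fact_emb: (batch, fact_dim) embedding of a (subject, relation, object) fact.
        query = self.query_proj(fact_emb)                    # (batch, mem_dim)
        # Attention dynamically re-weights memory slots, reducing the influence of noisy slots.
        attn = torch.softmax(query @ self.memory.T, dim=-1)  # (batch, mem_slots)
        context = attn @ self.memory                         # (batch, mem_dim)
        features = torch.cat([fact_emb, context], dim=-1)
        # Independent begin/end predictions allow non-continuous temporal ranges.
        return self.begin_head(features), self.end_head(features)


if __name__ == "__main__":
    model = AttentiveMemoryScoper()
    begin_logits, end_logits = model(torch.randn(4, 256))
    print(begin_logits.shape, end_logits.shape)  # torch.Size([4, 100]) twice
```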
External IDs: dblp:journals/datamine/HuWLC25