Abstract: Entity linking (EL) is a challenging task, as it typically requires matching an ambiguous entity mention with its corresponding entity in a knowledge base (KB). Mainstream studies focus on training and evaluating linking models on the same corpus and have achieved significant performance gains; however, they often overlook generalization to out-of-domain corpora, which is more realistic yet far more challenging. To address this issue, we introduce a novel neural-symbolic model for entity linking, inspired by the symbol-manipulation mechanism of the human brain. Specifically, we abstract diverse features into unified variables, combine them using neural operators to capture diverse relevance requirements, and finally aggregate the relevance scores through voting. We conduct experiments on eleven benchmark datasets covering different types of text, and the results show that our method outperforms nearly all baselines. Notably, our method achieves the best performance on seven out-of-domain datasets, highlighting its generalization ability.
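The three-step pipeline described in the abstract (abstract features into unified variables, combine them with neural operators, aggregate relevance scores by voting) can be sketched roughly as follows. This is a minimal illustration under assumed names and shapes, not the paper's actual implementation; the operator form, the feature names, and the averaging vote are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_operator(feature_vars, w, b):
    # A tiny learned operator: a weighted combination of the unified
    # feature variables, squashed to (0, 1) with a sigmoid.
    z = float(np.dot(w, feature_vars) + b)
    return 1.0 / (1.0 + np.exp(-z))

def link_score(feature_vars, operators):
    # Each operator captures one relevance requirement; its output is
    # treated as a vote, and the votes are aggregated by averaging.
    votes = [neural_operator(feature_vars, w, b) for w, b in operators]
    return sum(votes) / len(votes)

# Diverse features abstracted into unified variables for one
# (mention, candidate-entity) pair, e.g. surface-form match, context
# similarity, entity popularity (hypothetical feature names).
feature_vars = np.array([0.9, 0.6, 0.3])

# Randomly initialized operators stand in for trained ones.
operators = [(rng.normal(size=3), rng.normal()) for _ in range(3)]

print(round(link_score(feature_vars, operators), 3))
```

A candidate entity with the highest aggregated vote would be selected as the link target; in the actual model the operators would be trained jointly rather than randomly initialized.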