Abstract: Entity Linking (EL) is the process of associating ambiguous textual mentions with specific entities in a knowledge base.
Traditional EL methods rely heavily on large annotated datasets to achieve strong performance, a dependency that becomes problematic in few-shot entity linking, where only a handful of labeled examples are available for training.
To address this challenge, we present OneNet, an innovative framework that leverages the few-shot learning capabilities of Large Language Models (LLMs) without requiring fine-tuning.
To the best of our knowledge, this is the first approach to apply LLMs to the few-shot entity linking task.
OneNet is structured around three key components prompted by LLMs:
(1) an entity reduction processor that simplifies inputs by summarizing the context and filtering out irrelevant candidate entities, (2) a dual-perspective entity linker that combines contextual cues and prior knowledge for precise entity linking, and (3) an entity consensus judger that employs a consistency algorithm to mitigate hallucination during entity-linking reasoning.
Comprehensive evaluations across six benchmark datasets reveal that OneNet outperforms current state-of-the-art entity linking methods.
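For concreteness, the following is a minimal sketch of how such a three-stage, prompt-driven pipeline could be wired together. The `llm` callable, prompt wording, candidate fields, and helper names are illustrative assumptions, not OneNet's actual prompts or implementation.

```python
# Minimal sketch of a three-stage LLM-prompted EL pipeline, following the
# component structure described in the abstract. All prompts, names, and the
# `llm` client are assumptions for illustration, not the paper's actual code.

def reduce_entities(llm, mention, context, candidates):
    """Stage 1: summarize the context and filter out irrelevant candidates."""
    summary = llm(f"Summarize the context around the mention '{mention}':\n{context}")
    kept = []
    for cand in candidates:
        verdict = llm(
            f"Could the mention '{mention}' (context: {summary}) plausibly "
            f"refer to '{cand['name']}': {cand['description']}? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            kept.append(cand)
    return kept, summary

def link_entity(llm, mention, summary, candidates, use_context):
    """Stage 2: pick one candidate from contextual cues or from prior knowledge."""
    names = ", ".join(c["name"] for c in candidates)
    if use_context:
        prompt = (f"Given the context summary '{summary}', which of "
                  f"[{names}] does '{mention}' refer to? Name exactly one.")
    else:
        prompt = (f"Ignoring context, which of [{names}] is the most "
                  f"common referent of '{mention}'? Name exactly one.")
    return llm(prompt).strip()

def judge_consensus(llm, mention, contextual_pick, prior_pick):
    """Stage 3: agreement between the two perspectives is taken as a
    consistency signal; on disagreement, ask the LLM to adjudicate."""
    if contextual_pick == prior_pick:
        return contextual_pick
    verdict = llm(
        f"For mention '{mention}', one reading suggests '{contextual_pick}', "
        f"another suggests '{prior_pick}'. Which is correct? Name exactly one."
    )
    return verdict.strip()

def onenet(llm, mention, context, candidates):
    kept, summary = reduce_entities(llm, mention, context, candidates)
    ctx_pick = link_entity(llm, mention, summary, kept, use_context=True)
    prior_pick = link_entity(llm, mention, summary, kept, use_context=False)
    return judge_consensus(llm, mention, ctx_pick, prior_pick)
```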
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: entity linking, few-shot extraction
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 97