ADELIE: Aligning Large Language Models on Information Extraction

ACL ARR 2024 April Submission720 Authors

16 Apr 2024 (modified: 08 Jun 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: Large language models (LLMs) usually fall short on information extraction (IE) tasks and struggle to follow the complex instructions of IE tasks. This primarily arises from LLMs not being aligned with humans on IE, as mainstream alignment datasets typically do not include IE data. In this paper, we introduce **ADELIE** (**A**ligning large language mo**DEL**s on **I**nformation **E**xtraction), an aligned LLM that effectively solves various IE tasks, including closed IE, open IE, and on-demand IE. We first collect and construct IEInstruct, a high-quality alignment corpus for IE. We then train $ADELIE_{SFT}$ via instruction tuning on IEInstruct, and further train $ADELIE_{SFT}$ with a direct preference optimization (DPO) objective, resulting in $ADELIE_{DPO}$. Extensive experiments on various held-out IE datasets demonstrate that our models ($ADELIE_{SFT}$ and $ADELIE_{DPO}$) achieve state-of-the-art (SoTA) performance among open-source models. We further explore the general capabilities of ADELIE, and experimental results reveal that they do not exhibit a noticeable decline. We will release the code, data, and models to facilitate further research.
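The DPO objective the abstract refers to is the standard preference-optimization loss of Rafailov et al. (2023), not a method introduced by this paper. A minimal per-pair sketch (the function name and the example log-probabilities below are illustrative assumptions, not values from the paper):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss for a single preference pair (Rafailov et al., 2023).

    Inputs are sequence log-probabilities of the chosen/rejected responses
    under the policy being trained (pi_*) and under the frozen SFT reference
    model (ref_*); beta scales the implicit reward margin.
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(margin)), i.e. softplus(-margin)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With a zero margin the loss is log(2); when the policy favors the chosen
# response more than the reference does, the margin is positive and the
# loss decreases, which is the gradient signal DPO training exploits.
neutral = dpo_loss(-1.5, -1.5, -1.5, -1.5)  # margin 0 -> log(2) ~= 0.693
better = dpo_loss(-1.0, -2.0, -1.5, -1.5)   # positive margin -> smaller loss
```

In the pipeline described above, $ADELIE_{SFT}$ would serve as the frozen reference model while its copy is optimized on IE preference pairs to yield $ADELIE_{DPO}$.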
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: named entity recognition and relation extraction, event extraction, open information extraction, zero/few-shot extraction, generalization
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 720