Learning to Extract Structured Entities Using Language Models

ACL ARR 2024 June Submission 2457 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Recent advances in machine learning have significantly impacted the field of information extraction, with Language Models (LMs) playing a pivotal role in extracting structured information from unstructured text. Prior work typically frames information extraction as a triplet-centric task and uses classical metrics such as precision and recall for evaluation. We reformulate the task to be entity-centric, enabling the use of diverse metrics that provide more insights from various perspectives. We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP (AESOP) metric, designed to appropriately assess model performance. We then introduce a new model that harnesses the power of LMs for enhanced effectiveness and efficiency by decomposing the extraction task into multiple stages. Quantitative and human side-by-side evaluations confirm that our model outperforms baselines, offering promising directions for future advancements in structured entity extraction. Our source code and datasets are available at this anonymous link: https://anonymous.4open.science/r/Learning-to-Extract-Structured-Entities-Using-Language-Models-7310/README.md.
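To make the entity-centric reformulation concrete, below is a minimal, hypothetical sketch of how a set-level comparison between predicted and gold entities might be computed. It is not the AESOP metric defined in the paper; the greedy matching and the exact-match property similarity used here are illustrative assumptions only.

```python
# Hypothetical sketch of an entity-centric, set-level overlap score.
# All names and scoring choices (greedy matching, exact-match property
# similarity, normalisation by the larger set) are assumptions for
# illustration, not the paper's AESOP definition.
from typing import Dict, List

Entity = Dict[str, str]  # property name -> property value


def property_similarity(pred: Entity, gold: Entity) -> float:
    """Fraction of gold properties whose values the prediction matches exactly."""
    if not gold:
        return 0.0
    matched = sum(1 for key, value in gold.items() if pred.get(key) == value)
    return matched / len(gold)


def approximate_entity_set_overlap(preds: List[Entity], golds: List[Entity]) -> float:
    """Greedily pair each gold entity with its best unmatched prediction and
    average the pairwise similarities over the larger set, so both missing
    and spurious entities lower the score."""
    if not preds and not golds:
        return 1.0
    unused = list(range(len(preds)))
    total = 0.0
    for gold in golds:
        if not unused:
            break
        best = max(unused, key=lambda i: property_similarity(preds[i], gold))
        total += property_similarity(preds[best], gold)
        unused.remove(best)
    return total / max(len(preds), len(golds))


if __name__ == "__main__":
    gold = [{"name": "Marie Curie", "field": "physics"}]
    pred = [{"name": "Marie Curie", "field": "chemistry"}]
    print(approximate_entity_set_overlap(pred, gold))  # 0.5 under these toy choices
```

Under this toy formulation, each predicted entity is a flat dictionary of property-value pairs, which is one plausible serialisation of the structured entities the abstract describes; the paper's actual output format and metric may differ.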
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: Information Extraction, Machine Learning for NLP
Contribution Types: Theory
Languages Studied: English
Submission Number: 2457