Strategies for Span Labeling with Large Language Models

ACL ARR 2026 January Submission10075 Authors

Submitted: 06 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: span labeling, large language models, named entity recognition, grammatical error correction, error detection, constrained decoding
Abstract: Large language models (LLMs) are increasingly used for text analysis tasks, such as named entity recognition or error detection. Unlike encoder-based models, however, generative architectures lack an explicit mechanism to refer to specific parts of their input. This leads to a variety of ad-hoc prompting strategies for span labeling, often with inconsistent results. In this paper, we categorize these strategies into three families: tagging the input text, indexing numerical positions of spans, and matching span content. To address the limitations of content matching, we introduce LogitMatch, a new constrained decoding method that forces the model's output to align with valid input spans. We evaluate all methods across four diverse tasks. We find that while tagging remains a robust baseline, LogitMatch improves upon competitive matching-based methods by eliminating span matching issues and outperforms other strategies in certain conditions.
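The abstract describes constrained decoding that restricts generation to valid input spans, but this page does not detail the method. As a minimal illustrative sketch of that general idea (the function name and token-level matching are assumptions, not the authors' LogitMatch implementation), one can mask the next-token choices so the generated span always remains a contiguous substring of the input:

```python
def allowed_next_tokens(input_tokens, generated):
    """Toy span-constrained decoding: return the set of tokens that keep
    `generated` a prefix of some contiguous span of `input_tokens`.
    An empty `generated` list means any input token may start the span."""
    n, m = len(input_tokens), len(generated)
    allowed = set()
    for start in range(n - m + 1):
        # The candidate span must match what has been generated so far,
        # and there must be at least one more input token to extend it.
        if input_tokens[start:start + m] == generated and start + m < n:
            allowed.add(input_tokens[start + m])
    return allowed

# Example: after generating "Barack", only "Obama" keeps the output a
# valid contiguous span of the input sentence.
tokens = ["Barack", "Obama", "visited", "Paris"]
print(allowed_next_tokens(tokens, ["Barack"]))  # {'Obama'}
```

In a real decoder this set would be translated into a logit mask (e.g. setting disallowed vocabulary entries to negative infinity) before sampling, which is what guarantees the emitted span can always be matched back to the input.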
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: grammatical error correction, fact checking, rumor/misinformation detection, named entity recognition and relation extraction, multilingual extraction
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English, Czech, Spanish, Hindi, Icelandic, Japanese, Ukrainian, Chinese, Cebuano, Danish, German, Croatian, Narabizi, Portuguese, Russian, Slovak, Serbian, Swedish, Tagalog
Submission Number: 10075