Submission Type: Regular Long Paper
Submission Track: Natural Language Generation
Submission Track 2: Information Extraction
Keywords: Grammar-Constrained Decoding, Large Language Model, LLM, Structured NLP, Information Extraction, Entity Disambiguation
TL;DR: We cast structured NLP tasks as constrained decoding from an LLM, where only outputs that conform to a task-specific grammar are allowed. This approach, termed grammar-constrained decoding (GCD), significantly enhances the performance of LLMs.
Abstract: Despite their impressive performance, large language models (LMs) still
struggle with reliably generating complex output structures when not finetuned
to follow the required output format exactly. To address this issue,
grammar-constrained decoding (GCD) can be used to control the generation of
LMs, guaranteeing that the output follows a given structure. Most existing GCD
methods are, however, limited to specific tasks, such as parsing or code
generation. In this work, we demonstrate that formal grammars can describe the
output space for a much wider range of tasks and argue that GCD can serve as a
unified framework for structured NLP tasks in general. For increased
flexibility, we introduce input-dependent grammars, which allow the grammar to
depend on the input and thus enable the generation of different output
structures for different inputs. We then empirically demonstrate the power and
flexibility of GCD-enhanced LMs on (1) information extraction, (2) entity
disambiguation, and (3) constituency parsing. Our results indicate that
grammar-constrained LMs substantially outperform unconstrained LMs and can even beat
task-specific finetuned models. Grammar constraints thus hold great promise for
harnessing off-the-shelf LMs for a wide range of structured NLP tasks,
especially where training data is scarce or finetuning is expensive. Code and
data: https://github.com/epfl-dlab/GCD.
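
To make the decoding scheme described above concrete, the following is a minimal, self-contained Python sketch of grammar-constrained decoding. It is illustrative only, not the paper's implementation (see the linked repository for that): the toy triplet grammar, the vocabulary, and the stand-in scorer `toy_lm_logits` are all invented for this example. At each step, tokens that cannot extend a grammar-conforming prefix are masked out before the next token is chosen.

```python
import random

# Toy output grammar for a single closed-world IE triplet, expressed as a
# fixed sequence of allowed token sets (a trivial finite-state machine):
#   S -> "[s]" ENTITY "[r]" RELATION "[o]" ENTITY "[e]"
ENTITIES = {"Einstein", "Zurich"}
RELATIONS = {"born_in", "citizen_of"}
STEPS = [{"[s]"}, ENTITIES, {"[r]"}, RELATIONS, {"[o]"}, ENTITIES, {"[e]"}]
VOCAB = sorted({"[s]", "[r]", "[o]", "[e]"} | ENTITIES | RELATIONS)

def allowed_next(prefix):
    """Tokens that keep `prefix` extendable into a grammar-conforming output."""
    return STEPS[len(prefix)] if len(prefix) < len(STEPS) else set()

def toy_lm_logits(prefix):
    """Stand-in for a real LM: deterministic pseudo-random token scores."""
    rng = random.Random(len(prefix))
    return {tok: rng.uniform(-1.0, 1.0) for tok in VOCAB}

def gcd_decode():
    out = []
    while True:
        legal = allowed_next(out)
        if not legal:  # the grammar accepts: the output is complete
            break
        logits = toy_lm_logits(out)
        # Grammar constraint: illegal tokens are masked out (in effect,
        # their logits are set to -inf), then we greedily pick the best
        # remaining token.
        out.append(max(legal, key=lambda tok: logits[tok]))
    return out

print(" ".join(gcd_decode()))
# e.g.: [s] Einstein [r] born_in [o] Zurich [e]
```

In a real system, the same per-step masking would be applied to an actual LM's logits, with the set of legal continuations computed incrementally from a full (possibly input-dependent) formal grammar rather than the fixed token-set sequence used here.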
Submission Number: 4162