BenchCLAMP: A Benchmark for Evaluating Language Models on Syntactic and Semantic Parsing

Anonymous

16 Dec 2022 (modified: 05 May 2023) · ACL ARR 2022 December Blind Submission
Abstract: Recent work has shown that generation from a prompted or fine-tuned language model can perform well at semantic parsing when the output is constrained to be a valid semantic representation. We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Model Parsing, which includes context-free grammars for seven semantic parsing datasets and two syntactic parsing datasets with varied output meaning representations, as well as a constrained decoding interface that generates only valid outputs covered by these grammars. We provide low-, medium-, and high-resource splits for each dataset, allowing accurate comparison of various language models under different data regimes. Our benchmark supports evaluation of language models using prompt-based learning as well as fine-tuning. We benchmark seven language models, including two GPT-3 variants available only through an API. Our experiments show that encoder-decoder pretrained language models can match or even surpass state-of-the-art methods for both syntactic and semantic parsing when the model output is constrained to be valid.
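To make the core idea concrete, below is a minimal sketch of grammar-constrained decoding in the spirit of the constrained decoding interface described in the abstract. This is hypothetical illustration code, not BenchCLAMP's actual API: a tiny hard-coded set of target token sequences stands in for a real context-free grammar's prefix check, and it uses the `prefix_allowed_tokens_fn` hook of Hugging Face `transformers` `generate` to mask out tokens that would leave the grammar's language.

```python
# Sketch: constrain a seq2seq LM so it can only emit strings from a tiny
# stand-in "grammar". A real system would check prefixes against a CFG.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Stand-in grammar: the only valid outputs are these two strings.
TARGETS = ["( flight ( to Boston ) )", "( flight ( to Denver ) )"]
target_ids = [tokenizer(t).input_ids for t in TARGETS]  # each ends in </s>

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # input_ids is the decoder sequence so far; T5 starts with a pad token.
    prefix = input_ids.tolist()[1:]  # drop the decoder start token
    allowed = {seq[len(prefix)]
               for seq in target_ids
               if len(seq) > len(prefix) and seq[:len(prefix)] == prefix}
    # If the grammar offers no continuation, force end-of-sequence.
    return list(allowed) or [tokenizer.eos_token_id]

inputs = tokenizer("parse: book a flight to boston", return_tensors="pt")
outputs = model.generate(
    **inputs,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,  # transformers hook
    max_new_tokens=32,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At every decoding step the model's next-token distribution is restricted to tokens that keep the output a valid prefix of some string in the grammar, which is why constrained generation is guaranteed to produce a well-formed parse.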
Paper Type: long
Research Area: Resources and Evaluation