Abstract: Much of the current work on Abstract Meaning Representation (AMR) parsing centers on fine-tuning pre-trained language models. Large Language Models (LLMs) introduce a new paradigm for NLP research: prompting. LLMs also show impressive 'reasoning' capabilities and a degree of interpretability through Chain-of-Thought (CoT) prompting. In this paper, we apply a variety of prompting strategies to induce GPT models to perform AMR parsing. We demonstrate that GPT models are insufficient as AMR parsers, but that CoT prompting may shed light on how their errors arise.
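For illustration, a minimal sketch of what a CoT-style prompt for AMR parsing might look like, assuming the OpenAI Python SDK; the model name, prompt wording, and example sentence are hypothetical and not drawn from the paper.

```python
# Illustrative sketch only: a hypothetical CoT-style prompt asking a GPT chat
# model to produce an AMR graph in PENMAN notation. Model name, prompt text,
# and the example sentence are assumptions, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sentence = "The boy wants to go."

prompt = (
    "Parse the following sentence into an Abstract Meaning Representation "
    "(AMR) graph in PENMAN notation. Think step by step: first identify the "
    "main event, then its arguments, then any modifiers, and finally write "
    f"the graph.\n\nSentence: {sentence}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

print(response.choices[0].message.content)
```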
Paper Type: Long
Research Area: Syntax: Tagging, Chunking and Parsing
Research Area Keywords: semantic parsing, prompting, explanation faithfulness, free-text/natural language explanations
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 2901