Did You Mean...? Confidence-based Trade-offs in Semantic Parsing

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Short Paper
Submission Track: Human-Centered NLP
Submission Track 2: Dialogue and Interactive Systems
Keywords: calibration, semantic parsing, safety, paraphrasing
TL;DR: We show how model confidence can be used to balance trade-offs in semantic parsing, both for improving safety in user-facing applications and for reducing annotation load.
Abstract: We illustrate how a calibrated model can help balance common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that well-calibrated confidence scores allow us to balance cost with annotator load, improving accuracy with a small number of interactions. We then examine how confidence scores can help optimize the trade-off between usability and safety. We show that confidence-based thresholding can substantially reduce the number of incorrect low-confidence programs executed; however, this comes at a cost to usability. We propose the DidYouMean system which better balances usability and safety by rephrasing low-confidence inputs.
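The confidence-based thresholding described in the abstract can be sketched in a few lines. Below is a minimal, hypothetical Python illustration, not the paper's actual implementation: `parse`, `paraphrase`, and `confirm` are assumed stand-ins for a parser returning a (program, confidence) pair, a paraphraser that glosses the program in natural language, and a user-confirmation step, respectively.

```python
# Hypothetical sketch of confidence-based thresholding with a
# "Did you mean...?" confirmation step for low-confidence parses.
# All callables here are assumed interfaces, not the paper's API.

from typing import Callable, Optional, Tuple


def guarded_execute(
    utterance: str,
    parse: Callable[[str], Tuple[str, float]],  # utterance -> (program, confidence)
    paraphrase: Callable[[str], str],           # program -> natural-language gloss
    confirm: Callable[[str], bool],             # ask the user a yes/no question
    threshold: float = 0.8,                     # assumed value; tuned per application
) -> Optional[str]:
    """Return a program to execute, or None if it is rejected."""
    program, confidence = parse(utterance)
    if confidence >= threshold:
        # High confidence: execute directly, preserving usability.
        return program
    # Low confidence: rephrase the parse and confirm before executing,
    # trading a small interaction cost for safety.
    if confirm(f"Did you mean: {paraphrase(program)}?"):
        return program
    return None  # reject rather than execute a likely-incorrect program
```

Raising `threshold` filters out more incorrect programs (safety) but routes more inputs through confirmation or rejection (usability), which is the trade-off the paper's DidYouMean system aims to balance.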
Submission Number: 2025