Enhancing LLMs' Dialogue State Tracking Performance via a Novel LoRA Fine-Tuning Method

ACL ARR 2025 February Submission 8428 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: The rapid development of large language models (LLMs) has positioned them as crucial components of task-oriented dialogue (TOD) systems, enabling more flexible task completion. However, the substantial size of LLMs makes full-parameter fine-tuning prohibitively resource-intensive. Parameter-efficient fine-tuning methods have therefore garnered attention, with LoRA being particularly noteworthy. Yet LoRA is not without limitations: it treats all weight parameters as equally important, overlooking their varying contributions. Building on LoRA, we introduce a novel importance-assessment method, Sensitivity Under Cooperative Game (SUCG), and evaluate it on the Dialogue State Tracking (DST) module within TOD. Extensive experiments validate that our method improves both model performance and fine-tuning efficiency. This work offers new insights for the future development of the DST module.
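The abstract does not specify SUCG's exact formulation, so the sketch below is only an illustrative assumption of the two ingredients it names: a standard LoRA update (a frozen base weight plus a trainable low-rank correction) and a cooperative-game importance score, here approximated by Monte Carlo Shapley-value estimation over LoRA modules. All names (`LoRALinear`, `sucg_importance`, `loss_fn`) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only; SUCG's actual definition is not given in the abstract.
import random
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)   # freeze the pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r
        self.enabled = True                       # toggled to ablate this module

    def forward(self, x):
        out = self.base(x)
        if self.enabled:
            out = out + self.scaling * (x @ self.A.T) @ self.B.T
        return out

def sucg_importance(lora_modules, loss_fn, num_samples=64):
    """Shapley-style importance via permutation sampling: a module's score is
    its average marginal loss reduction when added to a random coalition."""
    names = list(lora_modules)
    scores = {n: 0.0 for n in names}
    for _ in range(num_samples):
        perm = random.sample(names, len(names))   # one random coalition order
        for m in lora_modules.values():
            m.enabled = False                     # start from the empty coalition
        prev = loss_fn()
        for n in perm:                            # add modules one at a time
            lora_modules[n].enabled = True
            cur = loss_fn()
            scores[n] += (prev - cur) / num_samples
            prev = cur
    for m in lora_modules.values():
        m.enabled = True
    return scores

# Toy usage: rank two LoRA modules by their contribution to a regression loss.
torch.manual_seed(0)
x, y = torch.randn(32, 16), torch.randn(32, 16)
layers = {"layer0": LoRALinear(16, 16), "layer1": LoRALinear(16, 16)}
for m in layers.values():
    nn.init.normal_(m.B, std=0.1)                 # pretend fine-tuning happened
net = nn.Sequential(layers["layer0"], nn.ReLU(), layers["layer1"])

def loss_fn():
    with torch.no_grad():
        return nn.functional.mse_loss(net(x), y).item()

print(sucg_importance(layers, loss_fn, num_samples=16))
```

The permutation-sampling estimator keeps loss evaluations linear in the number of modules per sample, which is what makes a cooperative-game score tractable at LLM scale; whether the paper uses this estimator or a different sensitivity proxy is an open assumption here.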
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: parameter-efficient-training
Contribution Types: Approaches to low-resource settings, Approaches to low-compute settings (efficiency)
Languages Studied: English
Submission Number: 8428