Exploring Rollback Inference for Aspect-based Sentiment Analysis

ACL ARR 2024 June Submission3638 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: With the help of pre-trained large language models (LLMs), designing templated sequences for organizing aspect-level elements has become a dominant research direction, yet few works turn their attention to inference, let alone exploit the semantic connections between aspect-level elements during it. We argue that, compared with computationally expensive approaches to training language models, attending to the inference process can also yield benefits. Motivated by this, we propose rollback inference for aspect-based sentiment analysis, which boosts the performance of fine-tuned LLMs at a tiny cost and adapts to various language models. Specifically, we first propose a novel entropy-based rollback inference framework that performs multi-round reasoning and voting over the uncertain parts of the sequence, leveraging the model's self-consistency. We then explore capturing the correlations among elements during inference through a set of rollback strategies. Extensive experiments on several benchmarks underscore the robustness and effectiveness of our proposed rollback strategies and the value of semantic connections during inference.
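The abstract sketches an entropy-based rollback scheme: decoding steps whose output distribution is high-entropy (uncertain) are re-sampled several times and resolved by majority vote. The paper's actual algorithm is not shown on this page, so the following is only a minimal illustrative sketch of that general idea; the function names, the entropy threshold, and the per-step probability interface are all assumptions, not the authors' implementation.

```python
import math
import random
from collections import Counter

def entropy(probs):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rollback_decode(step_probs, threshold=1.0, n_samples=5, seed=0):
    """Greedy decoding with a toy 'rollback': at high-entropy steps,
    resample the token several times and keep the majority vote.

    step_probs: list of per-step token distributions (lists of floats).
    Returns the list of chosen token indices.
    """
    rng = random.Random(seed)
    out = []
    for probs in step_probs:
        if entropy(probs) > threshold:
            # Uncertain step: multi-sample and vote (self-consistency style).
            votes = Counter(rng.choices(range(len(probs)), weights=probs, k=n_samples))
            out.append(votes.most_common(1)[0][0])
        else:
            # Confident step: plain greedy argmax.
            out.append(max(range(len(probs)), key=lambda i: probs[i]))
    return out
```

In a real setup the distributions would come from a fine-tuned LLM's logits, and rollback would re-run generation from the uncertain position rather than just re-sampling one token; this sketch only shows the entropy gate plus voting structure.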
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: argument generation, argument mining
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 3638