Adaptive Bi-SADA: Bidirectional Structure-Aware Data Augmentation for Robust Aspect Sentiment Quadruplet Prediction

ACL ARR 2026 January Submission609 Authors

23 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Aspect-Based Sentiment Analysis, Adversarial Robustness, Syntactic Perturbation, Instruction Tuning, Stress Testing
Abstract: Large Language Models (LLMs) have achieved state-of-the-art performance in Aspect Sentiment Quadruplet Prediction (ASQP). However, we argue that this success often relies on superficial positional heuristics rather than robust structural reasoning. In this paper, we propose a Probe-and-Cure framework to scrutinize and enhance LLM robustness. First, we introduce a Multi-Surgical Adversarial Attack protocol as a diagnostic tool. Our study reveals that SOTA models are ``Fragile Giants'': they suffer severe performance degradation when facing logical distractors and deep syntactic embeddings. To address this, we propose Adaptive Bi-SADA (Bidirectional Structure-Aware Data Augmentation). Unlike uniform augmentation strategies, our method constructs a length-aware curriculum: it applies natural structural hardening to short sentences to prevent overfitting, and syntactic normalization to long sentences to distill core dependencies. We implement a strict generation-time verification protocol to ensure semantic invariance. Experiments on ASQP and ACOS tasks demonstrate that our method not only achieves new SOTA F1 scores but also effectively transforms superficial pattern matching into robust structural reasoning, significantly closing the performance gap under adversarial stress tests.
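The abstract's length-aware curriculum and generation-time verification can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names (`choose_strategy`, `is_semantically_invariant`), the token-count threshold of 20, and the span-containment check are all assumptions; the paper's actual transforms and verification protocol are presumably stricter.

```python
def choose_strategy(sentence: str, length_threshold: int = 20) -> str:
    """Length-aware routing (sketch): short sentences receive natural
    structural hardening to prevent overfitting to shallow positional
    cues; long sentences receive syntactic normalization to distill
    core dependencies. Threshold of 20 tokens is a hypothetical value."""
    n_tokens = len(sentence.split())  # crude whitespace tokenization
    if n_tokens <= length_threshold:
        return "structural_hardening"
    return "syntactic_normalization"


def is_semantically_invariant(original: str, augmented: str,
                              gold_spans: list[str]) -> bool:
    """Generation-time verification stub: accept an augmented sentence
    only if every gold aspect/opinion span still appears verbatim.
    This containment check is a simple proxy for the paper's protocol."""
    lowered = augmented.lower()
    return all(span.lower() in lowered for span in gold_spans)


# Example routing: a short review is hardened, a long one is normalized.
short_review = "Great battery life."
long_review = ("The laptop I bought last month from the online store has a "
               "screen that, despite early concerns raised in several "
               "reviews, turned out to be remarkably bright and sharp.")
print(choose_strategy(short_review))   # structural_hardening
print(choose_strategy(long_review))    # syntactic_normalization

# Example verification: an augmentation that preserves the gold span passes.
print(is_semantically_invariant(short_review,
                                "Honestly, great battery life overall.",
                                ["battery life"]))
```

The bidirectional asymmetry is the key design choice the abstract emphasizes: augmentation is conditioned on sentence length rather than applied uniformly, so the curriculum hardens easy inputs and simplifies hard ones.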
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation methodologies
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 609