Abstract: Recent research works on textual Aspect-Based Sentiment Analysis (ABSA) have achieved promising performance.
However, a persistent challenge lies in the limited semantics derived from the raw data. To address this issue, researchers have explored enhancing textual ABSA with additional augmentations: they either craft audio, text, and linguistic features based on the input, or rely on user-posted images.
Yet these approaches have their limitations: the former three formations overlap heavily with the original data, making it hard for them to be truly supplementary, while user-posted images depend heavily on human annotation, which not only limits the application scope to a handful of text-image datasets but also propagates human errors through the entire downstream pipeline.
In this study, we explore the generation of sentimental images, a direction that has not been ventured before.
We propose a novel Sentimental Image Generation method that precisely provides ancillary visual semantics to reinforce textual extraction, as shown in Figure 1.
Extensive experiments establish a new SOTA performance on the ACOS, ASQP, and en-Phone datasets, underscoring the effectiveness of our method and highlighting a promising direction for extending it further.
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: argument mining
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 767