Exploiting Joint Influence of Inter- and Intra-Clause Dependencies Towards Enhanced Emotion Cause Extraction
Abstract: Emotion cause extraction (ECE) is vital for understanding the triggers of emotions portrayed in text, thereby enriching user interaction and relatability. Traditional methods such as rule-based and lexical matching struggle because cause clauses often share little lexical similarity with emotion expressions. While machine learning and deep learning methods have improved performance by modeling sequential dependencies and multilevel representations, they still fall short in handling long-distance dependencies and inter-clause interactions. Attention-based techniques have proven effective at refining semantic context over longer text spans, but they frequently suffer from scalability issues on moderate-sized datasets. To tackle these shortcomings, we propose a novel model that jointly exploits inter- and intra-clause relationships to improve the identification of the causes behind emotions. Our approach captures fine-grained interactions within a clause (intra) using a dual-attention mechanism that applies both cross- and self-attention to multisource features; incorporating numerical features as auxiliary information addresses the scalability issue. To capture dependencies between clauses (inter), we employ a contrastive learning approach that groups relevant clauses together in feature space, which helps address long-distance dependency issues both within and between clauses. Through extensive experimentation on the RECCON benchmark dataset, we demonstrate that our strategy increases the F1-score on the ECE task by 20-25%. We also extend the evaluation by applying our model to the empathetic response generation task: the causes derived by our model, when used in response generation, enhance the empathy of the responses.
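The intra-clause component described above combines cross- and self-attention over multisource features. The sketch below illustrates that dual-attention pattern in PyTorch; the module name `DualAttentionBlock`, the dimensions, and the residual/normalization layout are illustrative assumptions, not the paper's reported architecture.

```python
# Illustrative sketch of the intra-clause dual-attention idea: self-attention
# refines interactions within a candidate cause clause, while cross-attention
# lets the clause attend to the emotion-bearing utterance. All names and
# hyperparameters here are hypothetical placeholders.
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Self-attention over tokens of the clause itself.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Cross-attention from clause tokens to the emotion utterance.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, clause: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        # clause:  (batch, clause_len, d_model)
        # emotion: (batch, emo_len, d_model)
        h, _ = self.self_attn(clause, clause, clause)
        clause = self.norm1(clause + h)
        h, _ = self.cross_attn(clause, emotion, emotion)
        return self.norm2(clause + h)

block = DualAttentionBlock()
clause = torch.randn(2, 12, 256)    # candidate cause clause tokens
emotion = torch.randn(2, 8, 256)    # emotion-bearing utterance tokens
print(block(clause, emotion).shape)  # torch.Size([2, 12, 256])
```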
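For the inter-clause component, the abstract describes a contrastive objective that groups relevant clauses in feature space. A minimal sketch of such a loss follows, here a supervised InfoNCE-style formulation over clause embeddings; the exact loss form, temperature, and labeling scheme are assumptions rather than the paper's specification.

```python
# Hypothetical supervised contrastive loss over clause embeddings: clauses
# with the same cause/non-cause label are pulled together in feature space,
# others pushed apart. The InfoNCE form and temperature are assumed.
import torch
import torch.nn.functional as F

def clause_contrastive_loss(z: torch.Tensor, labels: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    # z: (n_clauses, d) clause embeddings; labels: (n_clauses,) cause labels.
    z = F.normalize(z, dim=1)                        # cosine-similarity space
    sim = z @ z.t() / temperature                    # pairwise similarity logits
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0                           # anchors with >=1 positive
    # Mean log-probability of positive pairs per valid anchor, negated.
    pos_log_prob = log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    return (-pos_log_prob[valid] / pos_counts[valid]).mean()

z = torch.randn(6, 128, requires_grad=True)          # toy clause embeddings
labels = torch.tensor([1, 0, 1, 0, 0, 1])            # 1 = cause clause
print(clause_contrastive_loss(z, labels))
```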
DOI: 10.1109/tcss.2025.3636467