Contrastive-based In-context Learning with Bias Calibration for Implicit Discourse Relation Recognition
Abstract: Implicit discourse relation recognition (IDRR) aims to recognize the discourse relation between two text segments in the absence of an explicit connective. Although prompt learning-based approaches have achieved significant success on the IDRR task, these methods typically require carefully designed, task-specific prompt templates, as well as auxiliary tasks positively related to the main task, to boost performance. Recently, in-context learning (ICL), which prefaces the input with explicit guiding demonstrations, has emerged as a way to provide richer contextual information and reduce the reliance on complex prompts and derived tasks. In this paper, we propose COntrastive-based IN-context learning with Bias Calibration (COINBC), which applies contrastive learning to in-context learning so as to distinguish positive from negative demonstration samples. Additionally, we apply a bias calibration model based on k-nearest-neighbor (kNN) calibration to mitigate the inherent biases of in-context learning. Experiments conducted on the PDTB 3.0 corpus show that COINBC achieves new state-of-the-art performance on the IDRR task.
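The abstract does not spell out how the kNN-based bias calibration operates, so the following is only a minimal sketch of one plausible form of such calibration: instead of trusting the language model's raw label distribution (which ICL is known to bias toward certain labels), a test instance is relabeled by voting among the k anchor examples whose output distributions are closest to its own. All names, the KL-divergence distance, and the voting scheme are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of kNN-based output calibration for ICL predictions.
# Assumption: for a small anchor set with gold labels, we stored the label
# distributions the LM produced under the same prompt; a test prediction is
# then calibrated by majority vote over its k nearest anchors.
from collections import Counter

import numpy as np


def knn_calibrate(test_probs, anchor_probs, anchor_labels, k=3):
    """Replace the LM's argmax with a vote over the k anchors whose
    output distributions are closest (in KL divergence) to the test
    instance's distribution."""
    eps = 1e-12  # avoid log(0) on zero-probability labels
    p = np.asarray(test_probs) + eps
    dists = []
    for q in anchor_probs:
        q = np.asarray(q) + eps
        dists.append(float(np.sum(p * np.log(p / q))))  # KL(p || q)
    nearest = np.argsort(dists)[:k]
    votes = Counter(anchor_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]


# Toy example with three top-level discourse relation classes.
anchors = [
    ([0.7, 0.2, 0.1], "Comparison"),
    ([0.6, 0.3, 0.1], "Comparison"),
    ([0.1, 0.8, 0.1], "Contingency"),
    ([0.2, 0.1, 0.7], "Expansion"),
]
probs, labels = zip(*anchors)
print(knn_calibrate([0.65, 0.25, 0.10], probs, labels, k=3))  # Comparison
```

The design intuition, under these assumptions, is that a systematic bias (e.g., the LM inflating one label's probability for every input) shifts anchor and test distributions alike, so nearest-neighbor comparison in distribution space can cancel it where raw argmax cannot.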