Keywords: Conversational Entailment, Knowledge Graphs, LLMs, GNNs, NER
Abstract: Conversation entailment, the task of determining whether a hypothesis can be inferred from a multi-turn dialogue, is challenging because of the complex dynamics of conversation. Transformer-based models such as BERT excel at capturing language patterns and perform strongly on entailment tasks. However, as highlighted by Storks and Chai (2021), these models often lack coherent intermediate reasoning, relying on spurious correlations that undermine interpretability and trust. To address this, we proposed augmenting transformers with instance-specific knowledge graphs to improve reasoning coherence and accuracy. While our approach demonstrated improvements in accuracy and coherence metrics, the added complexity and computational overhead suggest that the gains may not justify the extra effort for most applications.
Code for our project can be found in our GitHub repository: https://github.com/jack2kiwi/NLP-595-Conversational_Entailment_Instance_Specific_KG
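To make the idea concrete, the sketch below is a hypothetical, dependency-free illustration (not the authors' implementation; see the repository for that): entity-relation triples extracted from a dialogue, e.g. via NER and relation extraction, form an instance-specific knowledge graph, over which one minimal message-passing step (a GNN layer without learned weights) produces node features that could augment a transformer-based entailment model. All entity names and feature values are invented for the example.

```python
# Hypothetical sketch: build an instance-specific knowledge graph from
# (head, relation, tail) triples and run one mean-aggregation message-passing
# step. This is an illustration of the general technique, not the project code.

from collections import defaultdict

def build_graph(triples):
    """Adjacency list keyed by entity; edges carry the relation label.
    Edges are treated as undirected for message passing."""
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[head].append((rel, tail))
        graph[tail].append((rel, head))
    return graph

def message_pass(graph, features):
    """One minimal GNN-style step: each node's scalar feature becomes the
    mean of its own feature and its neighbours' features (no learned weights)."""
    updated = {}
    for node, edges in graph.items():
        vals = [features[node]] + [features[t] for _, t in edges]
        updated[node] = sum(vals) / len(vals)
    return updated

# Toy triples, as might come from an NER + relation-extraction pass.
triples = [("Alice", "lives_in", "Boston"), ("Alice", "works_at", "Acme")]
features = {"Alice": 1.0, "Boston": 3.0, "Acme": 5.0}

graph = build_graph(triples)
out = message_pass(graph, features)
# "Alice" aggregates both neighbours: (1.0 + 3.0 + 5.0) / 3 = 3.0
```

In a full system, the scalar features would be replaced by contextual embeddings from the transformer, and the aggregation would use learned weight matrices.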
Archival Option: Yes
Submission Number: 5