Aligning the Representation of Knowledge Graph and Large Language Model for Causal Question Answering
Abstract: Causal Question Answering (CQA) is essential for knowledge discovery, focusing on the intricate dynamics between events and entities without predefined contexts. Despite advancements in CQA models achieved through Knowledge Graphs (KGs) and Pre-Trained Language Models (PLMs), existing approaches are hindered by knowledge conflicts, insufficient capacity, and limitations in information fusion. Large Language Models (LLMs) have significantly improved natural language understanding and reasoning but often suffer from causal hallucinations. To address these challenges, we introduce KLop, a framework that aligns the representations of a Causal Knowledge Graph (CKG) and a Large Language Model for CQA. KLop pre-trains a graph embedding model for entity embedding and uses a frozen LLM for text embedding. The main components of KLop are the descriptor module and the aligner module. The descriptor leverages descriptive texts generated by LLMs to create training data for knowledge alignment, while the aligner utilizes self-attention to train query tokens for modality alignment. Experiments on public CQA datasets validate that KLop outperforms various advanced baselines in reasoning accuracy, while achieving causal knowledge integration and joint reasoning.
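To make the aligner idea concrete, below is a minimal, hypothetical sketch of a query-token aligner in the spirit described by the abstract: learnable query tokens are fused with pre-trained KG entity embeddings via self-attention and projected into a frozen LLM's embedding space. All dimensions, module names, and the training objective here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class QueryTokenAligner(nn.Module):
    """Hypothetical aligner: self-attention fuses learnable query tokens with
    causal-graph entity embeddings, then projects them into the LLM space."""

    def __init__(self, num_queries=8, graph_dim=256, llm_dim=4096, num_heads=8):
        super().__init__()
        # Learnable query tokens that absorb information from the causal KG.
        self.queries = nn.Parameter(torch.randn(num_queries, graph_dim) * 0.02)
        # Self-attention over the concatenation of query tokens and entity embeddings.
        self.attn = nn.MultiheadAttention(graph_dim, num_heads, batch_first=True)
        # Projection into the (frozen) LLM's token-embedding space.
        self.proj = nn.Linear(graph_dim, llm_dim)

    def forward(self, entity_emb):
        # entity_emb: (batch, n_entities, graph_dim) from a pre-trained graph embedding model.
        b = entity_emb.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        seq = torch.cat([q, entity_emb], dim=1)            # queries attend to entities and each other
        fused, _ = self.attn(seq, seq, seq)
        fused_queries = fused[:, : self.queries.size(0)]   # keep only the query positions
        return self.proj(fused_queries)                    # soft tokens for the frozen LLM


# Usage sketch: the aligned soft tokens could be prepended to the frozen LLM's embeddings
# of the LLM-generated descriptive text; the specific training loss is an assumption.
aligner = QueryTokenAligner()
graph_embs = torch.randn(2, 12, 256)   # 2 questions, 12 retrieved causal entities each
soft_tokens = aligner(graph_embs)      # shape: (2, 8, 4096)
```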
External IDs: dblp:conf/bigdataconf/Zeng0HLSZ24