Promoting Structure-awareness of Large Language Models for Graph-to-text Generation

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Recent advances in Large Language Models (LLMs) have remarkably pushed the boundaries toward artificial general intelligence (AGI), owing to their exceptional generation and reasoning abilities. Despite this progress, a critical gap remains in enabling LLMs to proficiently understand graph data. In this paper, we propose a new framework, named StructLLM, to enhance the graph capabilities of large language models. Our framework first uses a structure-aware pre-training stage to pre-train a graph model that captures structural information. Subsequently, we introduce four structure-aware instruction tasks to train a graph-to-text projector, which bridges the domain gap between graph and text. Finally, we fine-tune our system on the AMR-to-text and KG-to-text generation tasks. Experimental results show that our model obtains significantly better results than fine-tuned LLMs, surpassing state-of-the-art systems. Further analysis shows that our model can better process complex graphs.
Paper Type: long
Research Area: Generation
Contribution Types: NLP engineering experiment
Languages Studied: English