Abstract: KG-to-text generation models aim to produce natural-language text of one or more sentences from an input knowledge graph (KG). Recent models take either entity-level or word-level graphs as input. An entity-level graph captures the relations between entities but neglects the relations among words within a single entity and across different entities. Conversely, a word-level graph better captures the rich semantic relations among words but can fail to model the relations between entities. This paper proposes a novel graph-to-sequence model that encodes both entity-level and word-level KGs to enhance node representations, and investigates different decoder architectures for effectively integrating the contextualized representations of words and entities. Automatic and human evaluations demonstrate that our method outperforms previous single-granularity methods on two popular graph-to-text benchmarks.
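The abstract describes encoding both entity-level and word-level graphs to enhance node representations. Below is a minimal sketch of one way such dual-granularity encoding could look; it is not the paper's implementation, and the class names, the mean-aggregation message passing, and the concatenate-then-project fusion are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): word nodes are
# contextualized once under a word-level graph and once under an
# entity-level graph, then the two views are fused into one node state.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One round of message passing: mean over neighbors, then a linear map."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (num_nodes, num_nodes) 0/1 adjacency; normalize by degree.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        msg = adj @ h / deg
        return torch.relu(self.linear(msg))


class DualGranularityEncoder(nn.Module):
    """Encode nodes under both a word-level and an entity-level graph view."""

    def __init__(self, vocab_size: int, dim: int = 256, layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.word_gnn = nn.ModuleList(GraphConv(dim) for _ in range(layers))
        self.entity_gnn = nn.ModuleList(GraphConv(dim) for _ in range(layers))
        # Assumed fusion: concatenate the two views and project back down.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, tokens, word_adj, entity_adj):
        # word_adj connects words by fine-grained semantic relations;
        # entity_adj connects words whose entities are related in the KG
        # (one hypothetical way to project entity edges onto word nodes).
        h = self.embed(tokens)
        h_word, h_ent = h, h
        for word_layer, ent_layer in zip(self.word_gnn, self.entity_gnn):
            h_word = word_layer(h_word, word_adj)
            h_ent = ent_layer(h_ent, entity_adj)
        # Fused node states would feed an attention-based decoder (not shown).
        return self.fuse(torch.cat([h_word, h_ent], dim=-1))
```

The decoder-side integration the abstract mentions (combining contextualized word and entity representations) is left out here, since the abstract does not specify which of the investigated decoder architectures performs it.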