Bring Complex Geometric Information to LLMs: A Positional Survey of Graph Parametric Representation

Published: 23 Oct 2025, Last Modified: 23 Oct 2025 · LOG 2025 Poster · CC BY 4.0
Keywords: Graph Parametric Representation, Language Model
Abstract: Graphs, as a relational data structure, have been widely used in various application scenarios, such as molecule design and recommender systems. Recently, large language models (LLMs) have been reshaping the AI community due to their strong reasoning and inference capabilities. Enabling LLMs to effectively process graph-structured data holds significant potential. Applications include: (1) distilling external knowledge bases to mitigate hallucination and overcome the context window limitation in retrieval-augmented generation; and (2) directly addressing graph-centric tasks such as protein design and drug discovery. However, feeding raw graph data into LLMs is impractical. Graphs often have complex topologies and large scale, and they lack efficient semantic representations, all of which hinder their direct integration with LLMs. This raises a key question: can graph representations be expressed in natural language while still encoding rich structural and geometric information suitable for LLM input? One promising direction is the use of **graph parametric representation**, or **graph law**. These approaches predefine a set of parameters (e.g., degree, diameter, temporal dynamics) and establish their values and relationships by analyzing distributions across real-world graphs. Such parametric representations may offer a natural bridge for LLMs to understand complex graph structures and perform corresponding inferences. Therefore, in this survey, we first review four categories of current efforts to incorporate graph data into LLMs, i.e., topological query, semantic query, GNN embedding, and GNN prediction, and highlight their limitations. Then, we introduce graph parametric representation from multiple perspectives, including macroscopic vs. microscopic views, low-order vs. high-order structures, and static vs. temporal graphs. Finally, we conclude the paper with future research directions.
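To make the idea of a graph parametric representation concrete, here is a minimal sketch: it computes a handful of global parameters (node/edge counts, density, degree statistics, clustering, and diameter when defined) with NetworkX and serializes them as plain text that could be prepended to an LLM prompt. The parameter set, the function name `parametric_summary`, and the prompt wording are illustrative assumptions, not the formulation proposed in the survey.

```python
# Illustrative sketch (not the survey's method): summarize a graph by a few
# global parameters and render them as natural language for an LLM prompt.
import networkx as nx

def parametric_summary(G: nx.Graph) -> str:
    """Compute a small set of graph parameters and serialize them as text."""
    degrees = [d for _, d in G.degree()]
    params = {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "density": round(nx.density(G), 4),
        "average degree": round(sum(degrees) / len(degrees), 2),
        "max degree": max(degrees),
        "average clustering coefficient": round(nx.average_clustering(G), 4),
    }
    if nx.is_connected(G):
        # Diameter is only defined for connected graphs.
        params["diameter"] = nx.diameter(G)
    # Serialize as plain natural language rather than raw adjacency data.
    lines = [f"- {name}: {value}" for name, value in params.items()]
    return "Graph parametric description:\n" + "\n".join(lines)

if __name__ == "__main__":
    G = nx.karate_club_graph()        # small example graph
    print(parametric_summary(G))      # text that could precede an LLM query
```

The point of the sketch is the design choice the survey highlights: instead of feeding raw edge lists or embeddings to an LLM, a fixed vocabulary of parameters is measured on the graph and expressed in natural language.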
Submission Type: Full paper proceedings track submission (max 9 main pages).
Submission Number: 10