Abstract: Large language models (LLMs) have recently achieved remarkable success on a variety of reasoning tasks in natural language processing.
This success has also motivated their use in graph-related tasks.
Among others, recent work has explored whether LLMs can solve graph problems such as counting the number of connected components of a graph or computing the shortest path distance between two nodes.
Although LLMs exhibit preliminary graph reasoning abilities, they can still struggle with seemingly simple problems.
In this paper, we investigate whether prompting LLMs with pseudo-code instructions improves their performance on graph problems.
This approach not only aligns the model's reasoning with algorithmic logic but also imposes a structured, modular problem-solving process that is inherently transparent and interpretable.
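To make the idea concrete, here is a minimal sketch of what a pseudo-code-instruction prompt might look like for one of the graph problems mentioned above (counting connected components), together with a reference BFS solver for scoring a model's answer. The prompt template and the helper names are illustrative assumptions; the abstract does not specify the paper's actual prompt format.

```python
from collections import defaultdict, deque

# Illustrative pseudo-code-style prompt (an assumption, not the paper's
# exact format): the model is asked to trace an explicit algorithm.
PSEUDOCODE_PROMPT = """You are given an undirected graph G = (V, E).
Follow this pseudo-code step by step and report the final value of `count`:

    count <- 0
    mark all nodes unvisited
    for each node v in V:
        if v is unvisited:
            count <- count + 1
            BFS from v, marking every reached node as visited
    return count

Graph: V = {nodes}, E = {edges}
Answer with a single integer."""

def count_connected_components(num_nodes, edges):
    """Reference BFS solver used to check a model's answer."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    visited = set()
    count = 0
    for start in range(num_nodes):
        if start in visited:
            continue
        count += 1  # new component discovered
        queue = deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
    return count

# Build a prompt for a small graph and compute the ground-truth answer.
edges = [(0, 1), (1, 2), (3, 4)]
prompt = PSEUDOCODE_PROMPT.format(nodes=list(range(6)), edges=edges)
print(count_connected_components(6, edges))  # components {0,1,2}, {3,4}, {5} -> 3
```

The solver provides the gold label against which the LLM's integer answer would be compared in an evaluation loop.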
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: graph reasoning, pseudo-code, LLMs
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 4572