DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs
TL;DR: We propose a Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs
Abstract: Answering Questions over Knowledge Graphs (KGQA) is essential for autonomous language agents to function well in various real-life applications.
To improve the neural-symbolic reasoning capabilities of language agents powered by Large Language Models (LLMs) in KGQA, we propose the $\textbf{D}$ecomposition-$\textbf{A}$lignment-$\textbf{R}$easoning $\textbf{A}$gent ($\texttt{DARA}$) framework. $\texttt{DARA}$ effectively parses questions into formal queries through a dual mechanism: high-level iterative task decomposition and low-level grounding coupled with logical form construction. Importantly, $\texttt{DARA}$ can be efficiently trained with a small number of high-quality reasoning trajectories.
Our experimental results demonstrate that fine-tuning $\texttt{DARA}$ on small LLMs (e.g., Llama-2-7B) is not only cost-effective but also yields better performance than in-context learning-based agents built on the most powerful LLMs available to date, such as Llama-2-chat (70B) and GPT-4, across different benchmarks.
In addition, $\texttt{DARA}$ attains performance comparable to state-of-the-art enumerating-and-ranking-based methods.
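The dual mechanism described above (an outer loop that iteratively decomposes the question into sub-tasks, and an inner step that grounds each sub-task to KG schema items and extends a logical form) can be sketched roughly as follows. This is a minimal, hypothetical illustration: the function names, the toy schema, and the s-expression-style logical form are assumptions for exposition, not the paper's actual API.

```python
# Hypothetical sketch of a Decomposition-Alignment-Reasoning loop over a toy KG.
# In practice, decompose() and ground() would be LLM calls; here they are stubs.

TOY_SCHEMA = {"capital of": "location.country.capital"}  # surface form -> KG relation

def decompose(question, done):
    """High-level step: propose the next sub-task, or None when finished."""
    if not done:
        return "find the capital relation"
    return None  # single-hop toy question: one sub-task suffices

def ground(sub_task, schema):
    """Low-level step: align the sub-task to a concrete KG relation."""
    return schema["capital of"]

def construct(logical_form, relation):
    """Extend the partial logical form with the grounded relation."""
    return logical_form + [("JOIN", relation)]

def dara(question, schema):
    """Iterate decompose -> ground -> construct until no sub-tasks remain."""
    logical_form, done = [], []
    while (sub_task := decompose(question, done)) is not None:
        relation = ground(sub_task, schema)
        logical_form = construct(logical_form, relation)
        done.append(sub_task)
    return logical_form

print(dara("What is the capital of France?", TOY_SCHEMA))
# [('JOIN', 'location.country.capital')]
```

The resulting logical form would then be executed against the knowledge graph to retrieve the answer; real questions require multiple loop iterations, one per sub-task.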
Paper Type: long
Research Area: Question Answering
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English