R2-KG: General-Purpose Dual-Agent Framework for Reliable Reasoning on Knowledge Graphs

ACL ARR 2025 July Submission833 Authors

28 Jul 2025 (modified: 01 Sept 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Recent studies have combined Large Language Models (LLMs) with Knowledge Graphs (KGs) to enhance reasoning, improving inference accuracy without additional training while mitigating hallucination. However, existing frameworks still suffer from two practical drawbacks: they must be re-tuned whenever the KG or reasoning task changes, and they depend on a single, high-capacity LLM for reliable ($\textit{i.e.}$, trustworthy) reasoning. To address this, we introduce $\textit{R2-KG}$, a plug-and-play, dual-agent framework that separates reasoning into two roles: an $\textit{Operator}$ (a low-capacity LLM) that gathers evidence and a $\textit{Supervisor}$ (a high-capacity LLM) that makes final judgments. This design is cost-efficient for LLM inference while maintaining strong reasoning accuracy. Additionally, R2-KG employs an $\textit{Abstention mechanism}$, generating answers only when sufficient evidence has been collected from the KG, which significantly enhances reliability. Experiments across five diverse benchmarks show that R2-KG consistently outperforms baselines in both accuracy and reliability, regardless of the inherent capability of the LLM used as the Operator. Further experiments reveal that a single-agent version of R2-KG, equipped with a strict self-consistency strategy, achieves significantly higher-than-baseline reliability at reduced inference cost, albeit with an increased abstention rate on complex KGs. Our findings establish R2-KG as a flexible and cost-effective solution for KG-based reasoning, reducing reliance on high-capacity LLMs while ensuring trustworthy inference.
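The dual-agent loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy KG, the function names (`operator_gather`, `supervisor_judge`), and the evidence-sufficiency rule are all assumptions standing in for the actual LLM agents.

```python
# Toy knowledge graph: subject -> list of (relation, object) triples.
# In R2-KG both agents are LLMs; here they are simple stand-in functions.
KG = {
    "Paris": [("capital_of", "France")],
    "France": [("continent", "Europe")],
}

def operator_gather(entity, max_hops=2):
    """Operator (low-capacity agent): traverse the KG from the question
    entity and collect candidate evidence triples."""
    evidence, frontier = [], [entity]
    for _ in range(max_hops):
        next_frontier = []
        for subj in frontier:
            for rel, obj in KG.get(subj, []):
                evidence.append((subj, rel, obj))
                next_frontier.append(obj)
        frontier = next_frontier
    return evidence

def supervisor_judge(question_relation, evidence, min_evidence=1):
    """Supervisor (high-capacity agent): issue a final answer only when
    enough supporting evidence exists; otherwise abstain (the
    Abstention mechanism). Returns None on abstention."""
    support = [(s, r, o) for (s, r, o) in evidence if r == question_relation]
    if len(support) < min_evidence:
        return None  # insufficient evidence -> abstain
    return support[0][2]

evidence = operator_gather("Paris")
print(supervisor_judge("capital_of", evidence))    # answers from evidence
print(supervisor_judge("population_of", evidence)) # abstains -> None
```

The key design point the sketch mirrors is the role split: only the Supervisor decides, and it is allowed to refuse, so answers are emitted only when grounded in collected KG evidence.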
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Knowledge Graph, Reasoning, Agent, Large Language Model
Contribution Types: NLP engineering experiment
Languages Studied: English
Previous URL: https://openreview.net/forum?id=8HCVKhlIvd
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: Yes, I want a different area chair for our submission
Reassignment Request Reviewers: Yes, I want a different set of reviewers
Justification For Not Keeping Action Editor Or Reviewers: Although the three reviewers awarded our submission final scores of 3.5, 3.0, and 2.5, the Area Chair irresponsibly declined to consider our rebuttal and instead aggregated only the weaknesses cited by the reviewers to justify assigning a 2.5. We respectfully request that the Area Chair be replaced. We also request a new set of reviewers in order to receive a fresh perspective on our work.
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: section 5
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: section 5
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: N/A
B6 Statistics For Data: Yes
B6 Elaboration: Table 1
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: section 5
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: section 5
C3 Descriptive Statistics: Yes
C3 Elaboration: sections 6 and 7
C4 Parameters For Packages: N/A
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D1 Elaboration: No, because our research does not involve any human subjects or annotators.
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: No
E1 Elaboration: Used only for grammar checking and paraphrasing.
Author Submission Checklist: yes
Submission Number: 833