<style> img { display: block; margin-left: auto; margin-right: auto; } </style>

> [Paper link](https://arxiv.org/abs/2310.01061) | [Note link](https://blog.csdn.net/weixin_44466434/article/details/137135162) | [Code link](https://github.com/RManLuo/reasoning-on-graphs) | ICLR 2024

:::success
**Thoughts**
This paper proposes reasoning on graphs (RoG), which synergizes LLMs with KGs to conduct faithful and interpretable reasoning.
:::

## Abstract

Knowledge graphs (KGs), which capture vast amounts of facts in a structured format, offer a reliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM reasoning methods treat KGs only as factual knowledge bases and overlook the importance of their structural information for reasoning. This paper proposes a novel method called reasoning on graphs (RoG) that synergizes LLMs with KGs to enable faithful and interpretable reasoning.

## Background

LLMs are still limited by a lack of knowledge and are prone to hallucinations. To tackle these issues, knowledge graphs (KGs) have been incorporated to improve the reasoning ability of LLMs. The figure below shows how these issues can be addressed with triples and relation paths from KGs.

![image](https://hackmd.io/_uploads/r1RtMp_oC.png)

## Method

This paper proposes reasoning on graphs (RoG), which consists of two components:

1. A planning module that generates relation paths grounded by KGs as faithful plans.
2. A retrieval-reasoning module that first retrieves valid reasoning paths from KGs according to the plans, then conducts faithful reasoning based on the retrieved reasoning paths and generates answers with interpretable explanations.

![image](https://hackmd.io/_uploads/S1dsGpdsC.png)

How does RoG work?

1. Given a question, they first prompt the LLM to generate several relation paths that are grounded by KGs as plans.
2. They then retrieve reasoning paths from KGs using the plans.
3.
Finally, they conduct faithful reasoning based on the retrieved reasoning paths and generate answers with interpretable explanations.

## Experiment

### Dataset

This study evaluates the reasoning ability of RoG on two benchmark KGQA datasets:

- WebQuestionSP (WebQSP)
- Complex WebQuestions (CWQ)

### Evaluation Metrics

- Hits@1
- F1

They use LLaMA2-Chat-7B as the LLM backbone and compare RoG against 21 baselines grouped into 5 categories.

![image](https://hackmd.io/_uploads/rJ4aM6ujA.png)

The table below shows how RoG handles the lack-of-knowledge issue.

![image](https://hackmd.io/_uploads/SyO5K6_oR.png)

The table below shows how RoG handles the hallucination issue.

![image](https://hackmd.io/_uploads/H190FaOi0.png)
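The retrieval step in the Method section — grounding an LLM-generated relation path against the KG to obtain concrete reasoning paths — can be sketched as a simple breadth-first walk over the triples. This is a minimal illustration, not the paper's implementation; the toy facts, entity names, and the `retrieve_reasoning_paths` helper are all hypothetical:

```python
from collections import defaultdict

def retrieve_reasoning_paths(triples, start_entity, relation_path):
    """Follow a relation path (a plan like ["child_of", "child"]) through
    the KG, returning every concrete entity path that grounds it."""
    # Index the KG by (head, relation) for fast edge lookup.
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[(head, rel)].append(tail)

    paths = [[start_entity]]
    for rel in relation_path:
        next_paths = []
        for path in paths:
            for tail in graph[(path[-1], rel)]:
                next_paths.append(path + [tail])
        paths = next_paths  # paths that cannot follow `rel` are dropped
    return paths

# Toy KG (hypothetical facts for illustration only).
kg = [
    ("Justin Bieber", "child_of", "Jeremy Bieber"),
    ("Jeremy Bieber", "child", "Jaxon Bieber"),
    ("Jeremy Bieber", "child", "Justin Bieber"),
]
# Plan "child_of -> child" answers "Who are Justin Bieber's siblings?"
print(retrieve_reasoning_paths(kg, "Justin Bieber", ["child_of", "child"]))
```

The retrieved entity paths (rather than raw triples) are what make the final answer interpretable: each one is a self-contained chain of evidence the reasoning module can cite.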