<style>
img {
    display: block;
    margin-left: auto;
    margin-right: auto;
}
</style>

> [Paper link](https://arxiv.org/abs/2305.09645) | [Note link](https://hub.baai.ac.cn/view/26973) | [Code link](https://github.com/RUCAIBox/StructGPT) | EMNLP 2023

:::success
**Thoughts**
They propose a general framework for improving the zero-shot reasoning ability of LLMs over structured data.
:::

## Abstract

In this paper, the authors aim to improve the reasoning ability of large language models (LLMs) over structured data in a unified way. Inspired by studies on tool augmentation for LLMs, they develop an Iterative Reading-then-Reasoning (IRR) framework, called **StructGPT**, to solve question answering tasks based on structured data.

## Background

Although LLMs have made remarkable advancements in the NLP field, recent work has revealed that they may generate unfaithful information that conflicts with factual knowledge, and that they fall short of mastering domain-specific or real-time knowledge.

## Method

This study is inspired by the tool manipulation strategy for augmenting the abilities of LLMs. The authors incorporate specialized interfaces to manipulate the structured data records, so that the LLM can concentrate on reasoning over the evidence returned by the interfaces.

![image](https://hackmd.io/_uploads/Syf-wDojA.png)

This work mainly focuses on using LLMs to solve complex reasoning tasks based on structured data.

## Experiment

They conduct experiments on three complex reasoning tasks over structured data:

1. KGQA
2. TableQA
3. DB-based text-to-SQL

The figure below shows the results of different methods on KGQA.

![image](https://hackmd.io/_uploads/HkVGPDjsA.png)

The figure below shows the results of different methods on TableQA.

![image](https://hackmd.io/_uploads/HJ77PPjiR.png)

The figure below shows the results of different methods on Text-to-SQL.

![image](https://hackmd.io/_uploads/rk0rvvsjA.png)
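
To make the Iterative Reading-then-Reasoning idea from the Method section concrete, here is a minimal Python sketch of such a loop. It is not StructGPT's actual implementation: the `interfaces` dictionary, the prompt strings, and the `call_llm` client are hypothetical placeholders standing in for the paper's specialized per-data-type interfaces and its LLM prompting.

```python
# Minimal sketch of an Iterative Reading-then-Reasoning (IRR) loop.
# All interface names, prompts, and the LLM client are hypothetical placeholders.

from typing import Callable, Dict, List


def iterative_reading_then_reasoning(
    question: str,
    interfaces: Dict[str, Callable[[str], object]],  # name -> callable that reads structured data
    call_llm: Callable[[str], str],                  # any chat/completion client
    max_steps: int = 5,
) -> str:
    """Alternate between a reading phase (collect evidence via an interface)
    and a reasoning phase (ask the LLM whether it can already answer)."""
    evidence: List[str] = []

    for _ in range(max_steps):
        # Reading phase: let the LLM pick an interface and an argument,
        # then call it to gather evidence relevant to the question.
        choice_prompt = (
            f"Question: {question}\n"
            f"Available interfaces: {list(interfaces)}\n"
            f"Evidence collected so far: {evidence}\n"
            "Which interface should be called next, and with what argument? "
            "Reply as '<interface>|<argument>'."
        )
        name, _, arg = call_llm(choice_prompt).partition("|")
        evidence.append(str(interfaces[name.strip()](arg.strip())))

        # Reasoning phase: decide whether the evidence is sufficient to answer.
        reason_prompt = (
            f"Question: {question}\nEvidence: {evidence}\n"
            "If the evidence is sufficient, reply 'ANSWER: <answer>'. "
            "Otherwise reply 'CONTINUE'."
        )
        reply = call_llm(reason_prompt)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()

    return "Unable to answer within the step budget."
```

In the paper the interfaces are tailored to each kind of structured data (knowledge graph, table, database); the sketch abstracts them into a single dictionary only to keep the loop structure visible.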