<style>
img {
display: block;
margin-left: auto;
margin-right: auto;
}
</style>
> [Paper link](https://arxiv.org/abs/2310.04560) | [Note link](https://geyuyao.com/post/talk-like-a-graph-encoding-graphs-for-large-language-models/) | [Code link](https://github.com/google-research/talk-like-a-graph) | ICLR 2024
:::success
**Thoughts**
This work presents the first comprehensive study of encoding graph-structured data as text for consumption by LLMs.
:::
## Abstract
This paper performs the first comprehensive study of encoding graph-structured data as text for consumption by LLMs.
They show that LLM performance on graph reasoning tasks varies on three fundamental levels:
1. The graph encoding method
2. The nature of the graph task itself
3. The very structure of the graph considered
## Background
Current LLM design and implementation suffer from a number of limitations.
One of the most obvious limitations is their **reliance on unstructured text**, causing the models to sometimes miss obvious logical entailments or hallucinate incorrect conclusions.
Another is that LLMs are fundamentally limited by their training cutoff, and **it can be difficult to incorporate ‘fresh’ information** about how the state of the world has changed.
Graph-structured data is one of the most flexible ways to represent information and could be a promising solution to both challenges.
## Method
In this work, they perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs and of reasoning over it.
Below is an overview of their framework for reasoning with graphs using LLMs.

They also propose GraphQA, a new benchmark for measuring LLM performance on reasoning over graph data.
Below is an overview of their framework for encoding graphs via text.

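To make the pipeline concrete, here is a minimal sketch of the graph → text → prompt flow, assuming a friendship-style encoder and a zero-shot question. The function names and exact wording are illustrative, not taken from the authors' released code.

```python
import networkx as nx

def encode_graph_friendship(g: nx.Graph) -> str:
    """Friendship-style encoding: nodes become people, edges become
    friendships. Illustrative wording, not the paper's exact template."""
    names = ["James", "Robert", "John", "Michael", "David", "Mary"]
    lines = ["G describes a friendship graph among "
             + ", ".join(names[n] for n in g.nodes()) + "."]
    lines.append("We have the following edges in G:")
    for u, v in g.edges():
        lines.append(f"{names[u]} and {names[v]} are friends.")
    return "\n".join(lines)

def build_prompt(graph_text: str, question: str) -> str:
    """Append a zero-shot task question to the encoded graph."""
    return f"{graph_text}\nQ: {question}\nA:"

g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (1, 4)])
print(build_prompt(encode_graph_friendship(g),
                   "Is there a cycle in this graph?"))
```

Swapping out `encode_graph_friendship` or `build_prompt` changes one axis of the prompt at a time, which is exactly what the experiments below vary.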
## Experiment
The experiments below examine how the graph encoding, the prompt, and the model affect LLM performance on graph reasoning tasks.
### Varying Graph Encoding Functions

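The paper compares several graph encoding functions, including adjacency, incident, friendship, co-authorship, and expert encodings, and shows that the choice of encoder has a significant effect on task accuracy. Below is a minimal sketch contrasting two of these styles; the wording is paraphrased from the paper's descriptions rather than copied from the released code.

```python
import networkx as nx

def encode_adjacency(g: nx.Graph) -> str:
    """Adjacency-style encoding: integer node ids, edges as pairs."""
    nodes = ", ".join(str(n) for n in g.nodes())
    edges = ", ".join(f"({u}, {v})" for u, v in g.edges())
    return (f"G is an undirected graph with nodes {nodes}. "
            f"The edges in G are: {edges}.")

def encode_expert(g: nx.Graph) -> str:
    """Expert-style encoding: address the model as a graph analyst
    and write edges in arrow notation. Paraphrased wording."""
    edges = " ".join(f"{u} -> {v}" for u, v in g.edges())
    return f"You are a graph analyst. G is a graph with edges: {edges}."

g = nx.Graph([(0, 1), (1, 2), (0, 2)])
for encoder in (encode_adjacency, encode_expert):
    print(encoder(g))
```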
### Varying Prompt Questions

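This experiment varies how the task question itself is phrased, for example a bare graph-theoretic question versus one matched to the domain of the encoder. A tiny sketch of the two phrasings, with hypothetical function names:

```python
def graph_question(node: int) -> str:
    # Graph-theoretic phrasing, independent of the encoder's domain.
    return f"What is the degree of node {node}?"

def application_question(name: str) -> str:
    # Phrasing matched to a friendship-style encoding.
    return f"How many friends does {name} have?"

print(graph_question(4))
print(application_question("David"))
```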
### Multiple Relation Encoding

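A minimal sketch, assuming this experiment encodes graphs whose edges carry more than one relation type by verbalizing each edge with its own relation; the template wording and function name are illustrative, not from the paper's code.

```python
import networkx as nx

def encode_multi_relation(g: nx.Graph) -> str:
    """Verbalize each edge with its own relation type (illustrative)."""
    templates = {
        "friend": "{u} and {v} are friends.",
        "coworker": "{u} and {v} work together.",
    }
    lines = [templates[d["relation"]].format(u=u, v=v)
             for u, v, d in g.edges(data=True)]
    return " ".join(lines)

g = nx.Graph()
g.add_edge("James", "Robert", relation="friend")
g.add_edge("Robert", "Linda", relation="coworker")
print(encode_multi_relation(g))
```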
### Model Capacity and Graph Reasoning Ability
