# LangChain Agent Roadmap
This roadmap outlines the steps to support the LangChain agent, split across separate PRs:
1. **Support LCEL Languages**
2. **Support LangChain Agent**
3. **Support LangGraph**
4. **Support Additional Features**
   - Example: OpenAI Embeddings (not a `Runnable` object)
## Important Concept
How does the Airflow agent compile a task in flytekit?
```python=
from airflow.sensors.filesystem import FileSensor

# A normal Airflow sensor; in a Flyte workflow this constructor call is intercepted at compile time.
sensor = FileSensor(task_id="id", filepath="/tmp/1234")
```
In Flyte, we monkey patch `FileSensor`'s base class, `BaseOperator`, so that constructing the operator produces a dataclass of metadata instead of a real Airflow operator.
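Below is a minimal, illustrative sketch of that idea, assuming we intercept `BaseOperator.__new__`; the names `OperatorMetadata` and `_capture_operator` are hypothetical, and the real flytekitplugins-airflow code differs in its details:
```python=
# Minimal sketch of the compile-time monkey patch (illustrative names only).
from dataclasses import dataclass, field
from typing import Any, Dict

from airflow.models import BaseOperator


@dataclass
class OperatorMetadata:
    """Stands in for a real Airflow operator at compile time."""
    module: str
    name: str
    parameters: Dict[str, Any] = field(default_factory=dict)


def _capture_operator(cls, *args, **kwargs):
    # Instead of constructing a real operator, record the class path and the
    # constructor kwargs so the agent can rebuild and run it at execution time.
    return OperatorMetadata(module=cls.__module__, name=cls.__name__, parameters=kwargs)


BaseOperator.__new__ = _capture_operator
# Now FileSensor(task_id="id", filepath="/tmp/1234") returns an OperatorMetadata
# instance instead of a FileSensor.
```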

## Support LCEL Languages
This is done (currently only calling `Runnable` objects directly is supported).
The compile logic is the same as in the Airflow agent: I monkey patch LangChain's `Runnable` base class.
```python=
import os
from typing import Any, Union

from flytekit import workflow
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

api_key = os.getenv("OPENAI_API_KEY")

model = ChatOpenAI(
    model="gpt-3.5-turbo",
    openai_api_key=api_key,
    openai_organization="org-NayNG68kGnVXMJ8Ak4PMgQv7",
)
prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}?",
)
output_parser = StrOutputParser()


@workflow
def wf(input: str) -> Union[str, Any]:
    # With the monkey patch, calling each Runnable object adds a task node.
    message = prompt(input=input)
    o0 = model(input=message)
    o1 = output_parser(input=o0)
    return o1
```
PR link: https://github.com/flyteorg/flytekit/pull/2191
Discuss Doc: https://hackmd.io/T_5pXuVYTkaSDx2Ol7MTuA?view
LangChain Official Doc: https://python.langchain.com/v0.1/docs/expression_language/
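For comparison, the same chain in plain LCEL (outside Flyte) composes the `Runnable` objects with `|` and runs them with `invoke`, as described in the official doc above:
```python=
import os

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}?",
)
model = ChatOpenAI(model="gpt-3.5-turbo", api_key=os.getenv("OPENAI_API_KEY"))
output_parser = StrOutputParser()

# LCEL composition: each `|` pipes one Runnable's output into the next.
chain = prompt | model | output_parser
print(chain.invoke({"question": "What is Flyte"}))
```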
## Support LangChain Agent Executor
We can't reuse the Airflow agent's compile method here.
Since we compile LangChain `Runnable` objects into dataclass metadata, any LangChain function that expects real `Runnable` objects (for example `create_react_agent`, which composes the prompt and the model internally) will fail when it is handed the metadata instead.
Here is a LangChain agent executor example:
```python=
import os

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import Tool
from langchain_openai import ChatOpenAI

prompt = hub.pull("hwchase17/react")


def echo_text(input_text: str) -> str:
    """Simple tool that echoes the input text."""
    return f"You said: {input_text}"


echo_tool = Tool(
    name="echo",
    description="A tool that echoes the input text.",
    func=echo_text,
)

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2, api_key=os.getenv("OPENAI_API_KEY"))
tools = [echo_tool]
agent = create_react_agent(chat, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
response = agent_executor.invoke({"input": "hi"})
```
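To make the problem above concrete, here is a small illustration; `LangChainObj` is a hypothetical name for the compiled dataclass metadata, and the point is that `create_react_agent` calls `Runnable` methods (such as `prompt.partial(...)`) that the metadata object does not have:
```python=
from dataclasses import dataclass


@dataclass
class LangChainObj:
    # Hypothetical stand-in for the dataclass metadata our compile step
    # would produce instead of a real PromptTemplate.
    module: str
    name: str


fake_prompt = LangChainObj(module="langchain_core.prompts", name="PromptTemplate")

# create_react_agent expects a real Runnable prompt that it can compose with
# the model, so the metadata object breaks it:
try:
    fake_prompt.partial(tools="...")
except AttributeError as err:
    print(err)  # 'LangChainObj' object has no attribute 'partial'
```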

Short tutorial video: https://youtu.be/Xi9Ui-9qcPw?si=gY0nF3zjZnCH3AVt
## Support LangGraph
LangGraph builds on both LCEL and LangChain agents, so we should support it last.
Concepts: https://langchain-ai.github.io/langgraph/concepts/#data-flow-of-a-single-execution-of-a-stategraph
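For context, a minimal plain-LangGraph `StateGraph` (no Flyte integration) looks roughly like this; note that nodes can be plain functions as well as `Runnable` objects, which the compile step would have to handle:
```python=
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    question: str
    answer: str


def answer_node(state: State) -> dict:
    # A node is just a function (or a Runnable) that returns state updates.
    return {"answer": f"You asked: {state['question']}"}


builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)

graph = builder.compile()
print(graph.invoke({"question": "hi"}))  # prints the final state with the populated answer
```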
## Support Additional Features
The base class of OpenAI Embeddings in LangChain is not `Runnable`, so we will need a separate code path for it if it turns out to be necessary.
https://python.langchain.com/v0.1/docs/integrations/text_embedding/openai/#embed-query
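For reference, the embed-query API from the doc above is used like this, so the `Runnable` monkey patch would not intercept it (the model name here is just an example):
```python=
import os

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small", api_key=os.getenv("OPENAI_API_KEY"))

# embed_query returns a plain list of floats; no Runnable is involved.
vector = embeddings.embed_query("What is Flyte?")
print(len(vector))
```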
## Potential problems
1. How many third-party integration modules do we want to support on the agent pod?
   It is not reasonable to install every third-party integration:
   https://python.langchain.com/v0.1/docs/integrations/platforms/
   However, asking users to learn how to build their own custom image and deploy it to the Flyte agent is not very newbie-friendly.
2. We use `Union[str, Any]` to cover most LangChain classes as inputs and outputs. If we want to use the LangChain agent on Flyte, inputs whose type is not `Any` make it awkward for users to pass in the variables they want.