# context-engineering
---
## The Rise of "Context Engineering"
*Published: Jun 23, 2025*
*Header image from Dex Horthy on Twitter*
---
### What Is Context Engineering?
**Context engineering** is the practice of building dynamic systems that provide the *right information* and *tools* in the *right format* so that a large language model (LLM) can plausibly accomplish a task.
> Most agent failures stem from missing or misformatted context, not from model inadequacy.
As LLM applications evolve from single prompts into dynamic, agentic systems, context engineering is becoming the most crucial skill an AI engineer can develop.
---
### Breaking It Down
#### Context Engineering Is a System
LLMs rely on context from many sources: developer input, user interaction, prior history, tool outputs, and external data. Gathering and integrating this into a coherent system is non-trivial.
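As a concrete, deliberately simplified illustration, here is a plain-Python sketch of such an assembly step. The function and field names are illustrative assumptions, not any particular framework's API:

```python
# Sketch: gathering context from several sources into the message list the
# model will actually see. All names here are illustrative.
def build_context(system_prompt: str,
                  history: list[dict],
                  retrieved_docs: list[str],
                  tool_outputs: list[str],
                  user_message: str) -> list[dict]:
    """Merge developer input, prior history, external data, and tool results."""
    background = "\n\n".join(
        ["Relevant documents:", *retrieved_docs, "Tool results:", *tool_outputs]
    )
    return (
        [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": f"{background}\n\n{user_message}"}]
    )
```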
#### It Must Be Dynamic
Since many sources of context are dynamic, the system itself must be able to construct prompts on the fly rather than rely on a single static prompt.
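For example, the system prompt can be computed from the live request rather than stored as one fixed string. A small sketch, with field names made up for illustration:

```python
from datetime import date

# Sketch: the "prompt" is a function of runtime state, not a constant.
def system_prompt_for(user_profile: dict, task: str) -> str:
    parts = [
        f"Today's date is {date.today().isoformat()}.",
        f"Answer in {user_profile.get('language', 'English')}.",
    ]
    if task == "support":
        parts.append("You are a support agent. Cite the relevant policy document.")
    else:
        parts.append("You are a general-purpose assistant.")
    return "\n".join(parts)
```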
#### You Need the Right Information
If your agent fails, check the context. LLMs can’t infer what isn’t there. Garbage in, garbage out.
#### You Need the Right Tools
Context isn't just textual—LLMs may need tools for lookup, actions, or computation. Equipping the LLM with the right tools is as important as giving it the right data.
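A tool is most useful to the model when its interface is small, typed, and described in plain language. A minimal sketch using LangChain's `tool` decorator (assuming `langchain-core` is installed; the lookup itself is a stub):

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a short, human-readable weather summary for a city."""
    # A real implementation would call a weather API; stubbed for illustration.
    return f"Weather in {city}: 18°C, partly cloudy."
```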
#### Format Matters
Like humans, LLMs are sensitive to how information is presented. A concise message is more effective than a bloated JSON blob. Tool input parameters must also be clearly structured.
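To make the difference concrete, here is the same (made-up) tool result passed to the model two ways; the short digest is far easier for the model to use than the raw dump:

```python
import json

# A fabricated tool response, used only to illustrate formatting.
raw_result = {
    "status": 200,
    "request_id": "c1f3a9",
    "units": "metric",
    "observations": [
        {"ts": 1719100800, "temp_c": 18.2, "humidity": 0.64, "wind_kph": 11.5}
    ],
}

# Option 1: dump everything and let the model dig for the signal.
bloated = json.dumps(raw_result)

# Option 2: hand the model only what it needs, clearly labeled.
obs = raw_result["observations"][0]
concise = (
    f"Current weather: {obs['temp_c']:.0f}°C, "
    f"humidity {obs['humidity']:.0%}, wind {obs['wind_kph']:.0f} km/h"
)
```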
#### Can It Plausibly Accomplish the Task?
Ask this regularly. If an LLM fails, was it given enough to succeed? If yes, it’s a model issue. If no, it’s a context issue. The fix differs depending on the failure mode.
---
### Why Is Context Engineering Important?
When agentic systems break down, it’s usually due to one of two causes:
1. **The model isn’t capable enough.**
2. **The model wasn’t given proper context.**
As LLMs improve, the second cause dominates. Common context issues include:
* **Missing data** – The model can't guess what it hasn’t been told.
* **Poor formatting** – Presentation significantly impacts understanding and output quality.
---
### Context vs. Prompt Engineering
Prompt engineering involves clever phrasing. Context engineering involves **structured systems** that dynamically build those prompts with the right inputs.
* Prompt engineering is a **subset** of context engineering.
* Instructions for how an agent should behave are a part of both.
> A well-designed system doesn’t just phrase things well—it makes sure the model *knows* what it needs to know, *when* it needs to know it.
---
### Examples of Context Engineering
* **Tool Use:** Provide tools with clear interfaces and digestible outputs.
* **Short-Term Memory:** Summarize long conversations and reinsert the summary (sketched below, after this list).
* **Long-Term Memory:** Recall user preferences across sessions.
* **Prompt Engineering:** Clearly define how agents should behave.
* **Retrieval-Augmented Generation (RAG):** Dynamically fetch and insert relevant information.
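As one example from the list above, a sketch of the short-term-memory pattern: once the conversation exceeds a budget, older turns are folded into a summary. The `summarize` helper stands in for an LLM call and is an assumption here:

```python
def summarize(messages: list[dict]) -> str:
    # Stand-in for an LLM-backed summarizer; a real system would call a model.
    return f"{len(messages)} earlier turns, mostly setup and clarifying questions."

def compress_history(messages: list[dict], max_turns: int = 20) -> list[dict]:
    """Keep the most recent turns verbatim and fold the rest into a summary."""
    if len(messages) <= max_turns:
        return messages
    older, recent = messages[:-max_turns], messages[-max_turns:]
    note = {"role": "system",
            "content": f"Summary of earlier conversation: {summarize(older)}"}
    return [note, *recent]
```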
---
### LangGraph: Built for Context Engineering
**LangGraph** was designed for control. You choose:
* What steps run
* What goes into your LLM
* Where outputs are stored
Unlike black-box agent frameworks, LangGraph enables full control over context construction.
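A minimal sketch of what that control looks like in practice (assuming `langgraph` is installed; the node bodies are stubs rather than real retrieval or model calls):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    docs: list[str]
    answer: str

def retrieve(state: State) -> dict:
    # You decide exactly which documents enter the state, and therefore the prompt.
    return {"docs": [f"Doc relevant to: {state['question']}"]}

def generate(state: State) -> dict:
    # Stand-in for the model call; the context it sees is assembled explicitly.
    context = "\n".join(state["docs"])
    return {"answer": f"(answer grounded in: {context})"}

builder = StateGraph(State)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)

app = builder.compile()
print(app.invoke({"question": "What is context engineering?"}))
```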
> See also Dex Horthy’s ["12 Factor Agents"](https://x.com/dexhorthy), which echoes these ideas ("own your prompts", "own your context building", etc.).
---
### LangSmith: Observe and Debug Context
**LangSmith** provides observability for LLM apps. Even before the term "context engineering" was coined, LangSmith’s tracing tools helped developers understand:
* What context was gathered
* How it was formatted
* Which tools were involved
You can inspect every input and output, making it easier to debug context-related failures.
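For instance, a single decorator is enough to record the inputs and outputs of a context-building step. A sketch assuming the `langsmith` package is installed and tracing is configured via environment variables such as `LANGSMITH_API_KEY`:

```python
from langsmith import traceable

@traceable(name="build_context")
def build_context(question: str, docs: list[str]) -> str:
    # The trace records these inputs and the returned string, so you can see
    # exactly what context was assembled and how it was formatted.
    return "Context:\n" + "\n".join(docs) + f"\n\nQuestion: {question}"
```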
---
### Communication Is All You Need
A previous blog post emphasized that communication is often the root cause of agent failure. That sentiment holds: **context engineering is effective communication with machines**.
It's not a new idea—it’s a new name for a rapidly solidifying practice.
---
> We'll be sharing more on this topic soon. LangGraph and LangSmith were built to empower context engineers. We're excited to see the field embrace this.