# Building a Super Tiny HackMD Agent with Just 100+ LOC
[Chinese version](/@EastSun5566/building-a-tiny-hackmd-agent-zh)
**AI Agent** is probably the term you hear most often lately. Even after reading various introductions, it still feels vague: what the heck is this thing that can understand, make decisions, and autonomously complete tasks? It sounds complex, as if there were some mysterious black magic behind it.
Actually, not really.
While AI at its core is a black box, an Agent is actually more straightforward than you might imagine. It's simply ==a powerful LLM plus a repetitive loop and a set of tools for interacting with the external world==. You can think of tools directly as **function / API calls**, and the loop maintains the conversation, allowing the LLM to understand user intent and decide which tool to call to achieve the goal. So I can define it very simply and brutally:
:::info
AI Agent = LLM + Loop + Tools
:::
Yes, that's really it! Let's implement a mini Agent for managing HackMD notes in just **100+** lines of code. ~~No frameworks, no MCP, no A2A, none of that overly complex stuff~~, completely from scratch.
You can also directly check out the completed repo:
{%preview https://github.com/EastSun5566/tiny-hackmd-agent %}
## Preparation
We'll use:
- [Deno](https://deno.com/) v2+ (TypeScript)
- [HackMD API](https://hackmd.io/settings#api) for managing notes
- [Anthropic API](https://console.anthropic.com/account/keys) as the LLM
Let's initialize the project:
```bash
deno init tiny-hackmd-agent && cd tiny-hackmd-agent
```
Install dependencies:
```bash
deno add npm:@anthropic-ai/sdk npm:@hackmd/api
```
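`deno add` records both packages in `deno.json`'s import map, so they can be imported by their bare specifiers. These are the imports the rest of this post assumes at the top of `main.ts` (using each package's default export):
```ts
// main.ts — bare specifiers resolve via the import map created by `deno add`
import Anthropic from "@anthropic-ai/sdk"; // the LLM client
import API from "@hackmd/api"; // the HackMD API client
```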
## Agent Main Body
First is the main Agent function, which takes two parameters `ai` and `tools`, with a brutal `while (true)` loop inside:
```ts
async function runAgent(ai: Anthropic, tools: Tool[]) {
  while (true) {
    // ...
  }
}
```
`ai` doesn't need special explanation - it's just the Anthropic API client:
```ts
const ai = new Anthropic({ apiKey: '<YOUR_KEY>' });
```
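Hard-coding the key is fine for a quick demo, but you can just as well read it from an environment variable (which is also why the run command later includes `--allow-env`). A minimal sketch, assuming you've exported a variable named `ANTHROPIC_API_KEY` (the name is my choice; use whatever you set):
```ts
// Read the key from the environment instead of hard-coding it
const apiKey = Deno.env.get("ANTHROPIC_API_KEY");
if (!apiKey) throw new Error("Missing ANTHROPIC_API_KEY");

const ai = new Anthropic({ apiKey });
```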
## Tool Definition
Next comes the more interesting part: `tools`. This is a list of tools we define ourselves, telling the LLM which tools are available:
```ts
interface Tool {
  name: string;
  description: string;
  // A JSON Schema object describing the tool's parameters
  input_schema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
  call(input: Record<string, string>): Promise<string>;
}
```
- `name`: Tool name
- `description`: Tool description, telling the LLM when to use this tool
- `input_schema`: [JSON Schema](https://json-schema.org/overview/what-is-jsonschema#what-is-json-schema) to tell the LLM the "shape" of parameters needed when using this tool
- `call`: ==The actual execution logic when calling the tool==
Since we're managing notes here, `call` naturally involves calling the HackMD API, such as reading a single note:
```ts
const api = new API('<YOUR_TOKEN>');

const tools: Tool[] = [
  {
    name: "read_note",
    description: "Read a note content by ID",
    input_schema: {
      type: "object",
      properties: {
        noteId: {
          type: "string",
          description: "The ID of the note",
        },
      },
      required: ["noteId"],
    },
    async call({ noteId }) {
      // The HackMD client returns a note object, so stringify it for the LLM
      const note = await api.getNote(noteId);
      return JSON.stringify(note);
    },
  },
];
```
With a bit of wrapping (I won't list every detail here), you end up with a set of tools like `list_notes`, `read_note`, `create_note`, `update_note`, `delete_note`, and so on:
```ts
function createTools(apiToken: string): Tool[] {
  const api = new API(apiToken);
  return [
    {
      name: "list_notes",
      // ...
    },
    {
      name: "read_note",
      // ...
    },
    {
      name: "create_note",
      // ...
    },
    // ...
  ];
}
```
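To make that concrete, here's a sketch of what one of those entries could look like, e.g. `list_notes`. Note that `getNoteList()` is my assumption about the `@hackmd/api` client's method name; check the package docs for the exact call:
```ts
// One possible entry in the array returned by createTools(), reusing the `api` client above
const listNotesTool: Tool = {
  name: "list_notes",
  description: "List the current user's notes",
  input_schema: {
    type: "object",
    properties: {}, // no parameters needed
  },
  async call() {
    // getNoteList() is assumed — adjust to the actual @hackmd/api method name
    const notes = await api.getNoteList();
    // Tool results go back to the LLM as text, so stringify the response
    return JSON.stringify(notes);
  },
};
```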
## Conversation Loop
OK, now back to the `while` loop to implement reading user input. We need a `conversation` array to record the full conversation history, and we pass both it and `tools` to the LLM:
```ts
const conversation = [];
let shouldReadInput = true;

while (true) {
  if (shouldReadInput) {
    // Read user input
    const input = prompt("😂: ");
    if (!input) break;
    conversation.push({ role: "user", content: input });
  }

  // Chat with the LLM
  const message = await ai.messages.create({
    model: "claude-3-5-haiku-latest",
    max_tokens: 1024, // required by the API
    messages: conversation, // Complete conversation history
    tools, // The tools we just defined
    system: "You are a helpful agent for managing HackMD notes.",
  });
  conversation.push({ role: "assistant", content: message.content });

  // Handle response
  const toolResults = [];
  for (const content of message.content) {
    // Handle text responses and tool usage
  }
}
```
Next, the LLM's response `message.content` is an array. For each item, we only care about two types: a text response (`text`) or a tool use (`tool_use`). ==If it's plain text, we print it directly; if the model wants to use a tool, we have to execute the tool ourselves==:
```ts
const toolResults = [];
for (const content of message.content) {
  if (content.type === "text") {
    console.log(`🤖: ${content.text}`);
  }

  if (content.type === "tool_use") {
    const tool = tools.find(({ name }) => name === content.name);
    if (!tool) continue; // the model asked for a tool we don't have

    console.log(`🔧 Using: ${content.name}...`);
    // Execute tool
    const result = await tool.call(content.input);
    toolResults.push({
      type: "tool_result",
      tool_use_id: content.id,
      content: result,
    });
  }
}

const hasToolResults = toolResults.length > 0;
if (hasToolResults) {
  conversation.push({ role: "user", content: toolResults });
}
shouldReadInput = !hasToolResults;
```
After executing the tool, we add the result to the conversation so the LLM knows how the call went, then return to the top of the loop (without reading new user input) so it can respond based on the tool result.
## Complete Code
That's right! Just like that, we've completed the full Agent functionality. Finally, add the entry point:
```ts
async function main() {
  const ai = new Anthropic({ apiKey: '<YOUR_KEY>' });
  const tools = createTools('<YOUR_TOKEN>');
  await runAgent(ai, tools);
}

main();
```
Run the Agent to see the results:
```bash
deno run --allow-net --allow-env main.ts
```
:::spoiler Screenshots


:::
The screenshots above show the **Agent** using the list and create note tools based on user input. Much simpler than you imagined, right? Complete repo:
{%preview https://github.com/EastSun5566/tiny-hackmd-agent %}
## Summary
Through this implementation, we can see that the core of an AI Agent is actually quite intuitive:
- **LLM** handles understanding and decision-making
- **Loop** maintains continuous conversation
- **Tools** provide the ability to interact with the external world
That's it - you can also write your own Agent :smiley:
---
#### Credits
- [A practical guide to building agents](https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf)
- [How to Build an Agent](https://ampcode.com/how-to-build-an-agent)
- [Building effective agents](https://www.anthropic.com/engineering/building-effective-agents)
{%hackmd @EastSun5566/license %}