---
title: "What Is MCP?"
description: "How LLMs learned to talk to the rest of your software — a beginner's tour of the Model Context Protocol"
tags: ["background", "mcp", "integration"]
difficulty: "beginner"
time_estimate: "20 min"
---

## What You'll Learn

- What MCP (Model Context Protocol) is and why it exists
- The difference between a chatbot that *talks* and an agent that *does*
- What "tools," "resources," and "servers" mean in the MCP world
- How Claude Code, Airtable, Slack, GitHub, and your filesystem can all speak the same language

## Why This Exists

Before you tackle the [MCP for Airtable](mcp-airtable) or [MCP for Slack](mcp-slack) quests, it helps to know *why* MCP exists at all. Like most things in this unit, each layer was invented to solve a real problem.

The short version: **MCP is a shared language that lets LLMs talk to the rest of your software.** It's the thing that makes Claude Code capable of reading your Airtable, posting to your Slack, editing your files, and running your database queries — all without anyone writing custom code to connect each pair of systems.

## Phase 1: The Chatbot Era — LLMs in a Box

The first generation of LLM apps was chatbots. You typed, the model responded. That was it. The LLM lived in its own little bubble and had no idea what was happening in your world.

```
You:  What's on my calendar today?
LLM:  I don't have access to your calendar. You'll have to check it yourself.
```

The model could write beautifully about calendars, explain the history of the Gregorian system, and compose poems about scheduling — but it couldn't actually *see* your calendar. 

## Phase 2: The Copy-Paste Era — *You* Were the Integration

So you did the obvious thing: you became the integration layer — the human shuttling data back and forth between apps. You'd open your calendar, copy the text, paste it into ChatGPT, ask for a summary, copy the response, and paste it somewhere useful.

```
1. Open calendar → copy event text
2. Paste into LLM → "summarize my week"
3. Copy LLM response → paste into email
4. Repeat 40 times a day
```

This worked, sort of. But you were doing the lifting. The LLM never *touched* your actual data — it just saw whatever you happened to paste in, at whatever moment, in whatever format. Every task was a manual shuttle run between apps.

## Phase 3: Custom Plugins — One Integration at a Time

The next idea was obvious: let the LLM reach into apps directly. OpenAI launched "plugins." Each major tool — Slack, Notion, GitHub, Google Drive — could build a plugin that the LLM could call.

```
LLM (with Slack plugin): Let me check #general for you...
                         → calls Slack API
                         → gets the last 10 messages
                         → summarizes them for you
```

This was magical the first time you saw it. But there was a problem lurking underneath.

### But Here's the Catch

Every LLM had its own plugin format. OpenAI had plugins. Anthropic had tool use. Google had extensions. Every tool vendor — Slack, Notion, GitHub — had to build and maintain a *separate* integration for each LLM they wanted to support.

This is what engineers call the **N×M problem**: if you have N different LLMs and M different tools, you end up needing N times M separate integrations. That number gets overwhelming fast. Nobody wanted to maintain their corner of that matrix, and a lot of plugins got half-built and abandoned.

## Phase 4: MCP — A Shared Standard (late 2024)

Anthropic released **MCP (Model Context Protocol)** in November 2024 as an open standard, and the broader AI industry adopted it fast. The idea is simple and comes straight from how the rest of computing works:

> Instead of every LLM inventing its own way to talk to every tool, everyone agrees on **one protocol**. Tools implement it once. LLMs speak it once. They connect.

The analogy people use is **USB-C**. Before USB-C, every device had its own charger. Now one cable works for laptops, phones, headphones, monitors. MCP is trying to be that for the AI-to-tool connection.

```
Before MCP                     After MCP
──────────                     ─────────
Claude ←→ custom Slack code    Claude ─┐
Claude ←→ custom Notion code   GPT    ─┤
GPT    ←→ custom Slack code    Gemini ─┤ ─→ MCP ←─ Slack / Notion / Airtable /
GPT    ←→ custom Notion code   Cursor ─┘           GitHub / Filesystem / ...
(every pair needs glue)        (everyone speaks the same protocol)
```

## Phase 5: How MCP Actually Works

MCP has two sides: **clients** and **servers**. This is the same client/server idea you saw in the web history — it just means "the thing asking" and "the thing answering."

- **MCP client** — the LLM-powered app. Claude Code is an MCP client. So are Claude Desktop, Cursor, and a growing list of others.
- **MCP server** — a small program that speaks MCP and exposes some capability: reading Airtable bases, posting Slack messages, querying a Postgres database, running shell commands, searching a Notion workspace.

When you "install an MCP server" in Claude Code, you're telling Claude Code: *"here's another thing you can talk to."* Claude Code connects, asks the server what it can do, and adds those capabilities to its toolkit.

### What MCP Servers Expose

MCP servers offer three main things:

- **Tools** — actions the LLM can take that actually *change* something. `send_slack_message`, `create_airtable_record`, `run_sql_query`.
- **Resources** — data the LLM can read but not change. A file, a database row, a Notion page. Each one has a unique address (like a URL) so Claude Code knows where to fetch it.
- **Prompts** — pre-written prompt templates the server offers, ready to use by name. (You'll see this one much less often.)

You don't have to think about this distinction most of the time. You just ask Claude to "check my Airtable" and it figures out which tool to call.
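The tool side is the easiest to make concrete. When a client asks a server what it can do, the server answers with one descriptor per tool: a name, a human-readable description, and a JSON Schema for the arguments. Here's a minimal Python sketch of such a descriptor — the field names follow the shape of an MCP `tools/list` response, but treat the details as illustrative rather than a full spec:

```python
# A sketch of the descriptor a server might publish for one tool.
# The model reads the name, description, and input schema to decide
# when to call the tool and what arguments to pass it.
send_slack_message = {
    "name": "send_slack_message",
    "description": "Post a message to a Slack channel.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "channel": {"type": "string", "description": "Channel name, e.g. #general"},
            "text": {"type": "string", "description": "The message body"},
        },
        "required": ["channel", "text"],
    },
}
```

This descriptor is all the model ever sees of the tool — the server keeps the actual Slack-calling code to itself.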

### A Concrete Example: What an MCP Server Actually Looks Like

So what *is* an MCP server, physically? It's a small program on your computer (or sometimes a remote service) that Claude Code launches and talks to in the background. You tell Claude Code which ones to use with a few lines in a config file — typically `~/.claude.json` on your Mac, or a project-level `.mcp.json` inside a repo.

A real-world config entry looks like this:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/birds"
      ]
    },
    "airtable": {
      "command": "npx",
      "args": ["-y", "airtable-mcp-server"],
      "env": {
        "AIRTABLE_API_KEY": "pat_xxxxxxxxxxxx"
      }
    }
  }
}
```

Don't worry if the exact syntax looks intimidating — that's JSON, which you'll see more of throughout this unit. The shape is what matters. Each entry says: *"here's a server I want available — run this command to start it, and here's its name."* When Claude Code launches, it boots up each server, asks *"what can you do?"*, and hands those capabilities to the model.

The filesystem server above, once connected, gives Claude Code tools like:

```
read_file(path)         → reads a file from /Users/you/projects/birds/
write_file(path, text)  → writes a file to that folder
list_directory(path)    → lists what's in a folder
```

Notice the folder path in the config. That's the scope — the filesystem server literally cannot see anything outside `/Users/you/projects/birds`. This is how MCP servers stay safe: you tell them *exactly* where they're allowed to look.
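The real `server-filesystem` code is more involved, but the heart of that scope check fits in a few lines of Python. This is a sketch of the idea, not the actual implementation, using the hypothetical folder from the config above:

```python
from pathlib import Path

# The one folder the server is allowed to touch (from the config example).
ALLOWED_ROOT = Path("/Users/you/projects/birds")

def is_in_scope(requested: str) -> bool:
    """Return True only if the requested path stays inside the allowed folder."""
    # resolve() collapses "../" tricks *before* the containment check,
    # so "../../.ssh/id_rsa" can't escape the allowed root.
    target = (ALLOWED_ROOT / requested).resolve()
    return target.is_relative_to(ALLOWED_ROOT.resolve())

print(is_in_scope("poems/robin.txt"))    # True  — inside the folder
print(is_in_scope("../../.ssh/id_rsa"))  # False — tries to climb out
```

The key detail is normalizing the path before checking containment; without it, a relative path full of `../` segments would walk right out of the sandbox.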

In practice, you usually don't hand-edit this file. Claude Code has commands like `claude mcp add` that edit the config for you, and a registry of popular servers you can install with one line. But underneath, it's all just this.
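The conversation underneath is just as plain. MCP messages are JSON-RPC 2.0, typically written as one JSON object per line over the server's stdin/stdout. A simplified sketch of the first requests a client sends — the method names come from the MCP spec, but real messages carry more fields:

```python
import json

# 1. The client introduces itself and negotiates a protocol version.
#    The version string is a dated MCP spec revision.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "clientInfo": {"name": "claude-code", "version": "1.0"},
        "capabilities": {},
    },
}

# 2. Then it asks the server to list its tools ("what can you do?").
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Each message crosses the wire as a single line of JSON.
for msg in (initialize, list_tools):
    print(json.dumps(msg))
```

That `tools/list` response is what fills Claude Code's toolkit — every capability the server advertises becomes something the model can call.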

## Phase 6: What This Unlocks — Agents, Not Just Chatbots

Once Claude Code has MCP servers connected, it can do something new: **chain several tool calls together to finish a whole task in one shot**. You describe a goal in English, and the LLM figures out which MCP servers to use, in what order, making decisions along the way.

Here's a concrete example. Say your bird encyclopedia lives in Airtable, and your poem collection is a folder of text files on your laptop. You have two MCP servers connected — one for each. You type into Claude Code:

```
You:  Look at the bird records in my Airtable and add a "favorite poem" field
      for any bird that's missing one — pick something appropriate from the
      poems folder.
```

Under the hood, Claude Code does this:

```
1. Reads the bird records           (via Airtable MCP)
2. Reads the poems in the folder    (via filesystem MCP)
3. Matches each bird to a suitable poem
4. Writes the new field back        (via Airtable MCP)
```

Four steps, across two different systems, all coordinated by the LLM. Before MCP, you'd have had to write a Python script (like the ones you built in Unit 1) to pull this off. Now it's a paragraph of English.

**That chaining — read, decide, act, repeat — is what people mean by "agent":** an LLM pursuing a goal across multiple tools, not just chatting. MCP is the plumbing that makes the "act" part possible.
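That loop can be sketched in miniature. Everything below is a stand-in — stub functions instead of real MCP tools, a hard-coded rule instead of an LLM deciding — but the read-decide-act shape is the real thing:

```python
# A toy agent loop. Each stub stands in for an MCP tool call;
# in real life the LLM chooses the tool and arguments at every turn.

def airtable_list_birds():        # stand-in for an Airtable MCP read tool
    return [{"name": "Robin", "poem": None}, {"name": "Owl", "poem": "Night Song"}]

def read_poems_folder():          # stand-in for a filesystem MCP read tool
    return ["Red Breast at Dawn", "Night Song"]

def airtable_update(bird, poem):  # stand-in for an Airtable write tool
    bird["poem"] = poem

birds = airtable_list_birds()     # 1. read  (Airtable MCP)
poems = read_poems_folder()       # 2. read  (filesystem MCP)
for bird in birds:
    if bird["poem"] is None:      # 3. decide: which birds are missing a poem?
        airtable_update(bird, poems[0])  # 4. act: write the field back

print(birds)
```

Swap the stubs for real MCP tools and the hard-coded `if` for a model's judgment, and you have the agent from the example above.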

And the same protocol keeps working when *you're* not the one at the keyboard — a background AI watching a Slack channel and filing bug reports is using MCP the same way Claude Code does when you ask it to edit a file.

## Scope and Authorization: Keeping MCP on a Leash

MCP gives Claude Code real power — it can read your files, edit your spreadsheets, post to your channels. That's the point. But "real power" means you want real guardrails. The good news: MCP is designed so *you* decide what each server is allowed to touch. Two words to know:

- **Scope** — how much the server can see. When you connect a filesystem server, you point it at one folder. It can't see the rest of your computer. When you connect an Airtable server, you give it an API key that only works on certain bases. Nothing outside the scope exists, as far as the server is concerned.
- **Authorization** — how the server proves it's allowed to act on your behalf. Most MCP servers need a credential of some kind: an API key, a personal access token, or an OAuth login. That credential is what Airtable (or Slack, or GitHub) checks before letting the server do anything. No credential → no access.

### Things to Be Mindful Of

- **Only give a server what it needs.** If a server asks for an API key, check what permissions that key has. An Airtable token that can read *and delete* is more dangerous than one that only reads. Most services let you make scoped tokens — use them.
- **Narrow the folder / base / workspace.** A filesystem server pointed at `/Users/you/projects/birds` is safe. One pointed at `/Users/you` can read your Desktop, Documents, and Downloads. Always scope to the smallest folder that still does the job.
- **Treat API keys like passwords.** They sit in your `~/.claude.json` or `.mcp.json` in plain text. Don't commit those files to GitHub. Don't paste them in Slack. If one leaks, revoke it in the service that issued it and make a new one.
- **Claude Code asks before acting.** When an MCP tool tries to do something, Claude Code shows you what it's about to do and waits for you to approve. Read the prompt before you click yes — especially for anything that writes, deletes, or sends.
- **Revoke what you no longer use.** If you installed an MCP server for a one-off experiment, remove it from your config afterwards, and revoke the API key in whichever service issued it. Stale credentials lying around are the #1 cause of "oops."
- **Personal vs. shared accounts.** If you're logged into a shared Slack workspace or a class Airtable, think carefully before connecting it. A Slack MCP server logged in as *you* can post as *you* — in any channel you have access to.

The core mental model: **MCP servers have exactly the powers you gave them — no more, no less.** If you scoped a server tightly and gave it a read-only key, there's a hard ceiling on what it can do, even if the LLM decides to go off the rails.

## Phase 7: Where MCP Fits in Your Workflow

Almost everything you'll do with MCP in this unit is **using servers that already exist**. The [MCP for Airtable](mcp-airtable) and [MCP for Slack](mcp-slack) quests walk you through it — grab a ready-made server, add a few lines to your config, and Claude Code can suddenly reach into those systems. There's already a big catalog: Notion, Google Drive, GitHub, Postgres, Linear, filesystems, browsers, and more.

A few things to keep in mind:

- **Claude Code has a registry** of servers you can browse and install with one command. Other MCP clients (Cursor, Claude Desktop) have their own, and the same server usually works across all of them.
- **Most servers are tiny to install.** Usually a few lines in the config file you saw above, pointing at a pre-built package someone else published. You install them, you don't build them.

### You Can Also Build Your Own

When the server you want doesn't exist yet, **writing one is often ~100 lines of code** — and Claude Code is great at helping you build it. You'd build your own when:

- **You have data nobody's packaged yet** — your class's shared spreadsheet, a collaborator's API, the `bird_encyclopedia.json` from Unit 1. Wrap it once and any MCP client can query it.
- **You want to share a capability.** A server you write can be installed by your classmates so *their* Claude Code talks to your data too. This is how MCP ecosystems grow.
- **You want to see how agents actually work.** Building a small server demystifies the whole thing — you write a function, the LLM calls it, you watch it happen.
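To see why "often ~100 lines" is plausible, here's the heart of a custom server with the MCP wiring stripped away: plain functions, plus a dispatch table mapping tool names to them. An MCP SDK (such as the official Python `mcp` package) builds the table and the protocol plumbing for you from decorators; the functions are the part you write. The `lookup_bird` tool and its data are made up for illustration:

```python
import json

# The part of a custom MCP server you actually write: a plain function.
def lookup_bird(name: str) -> dict:
    """Look up one bird in a (stand-in) encyclopedia."""
    encyclopedia = {"robin": {"name": "Robin", "diet": "worms, berries"}}
    return encyclopedia.get(name.lower(), {"error": f"no bird named {name!r}"})

# The dispatch table an SDK would normally assemble for you.
TOOLS = {"lookup_bird": lookup_bird}

def handle_tool_call(tool_name: str, arguments: dict) -> str:
    """Roughly what the server does when a client sends a tools/call request."""
    result = TOOLS[tool_name](**arguments)
    return json.dumps(result)

print(handle_tool_call("lookup_bird", {"name": "Robin"}))
```

Once a function sits behind that dispatch, any MCP client — your Claude Code, a classmate's Cursor — can call it the same way.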

### The Takeaway

| Era | What You Got | Key Innovation |
|-----|-------------|---------------|
| Chatbot era | LLMs that could talk but not act | Natural language in a box |
| Copy-paste era | You as the integration layer | Any data could reach the LLM — by hand |
| Custom plugins | LLMs reaching into specific apps | Useful, but N×M integration chaos |
| MCP | A shared protocol for every model × every tool | Install once, works everywhere |
| Agents | LLMs that chain tools together | "Do this whole workflow" becomes one sentence |

You don't need to master any of this to use MCP. But knowing that it sits at the end of this progression — and that each layer exists to fix a real pain from the previous one — will help you understand what's happening when you install an MCP server and suddenly Claude Code can "see" a new corner of your world.

## Key Concepts

| Term | What It Means |
|------|--------------|
| Agent | An LLM that can take actions in the world, not just produce text |
| Plugin | An older, model-specific way to give an LLM access to an outside service |
| N×M problem | When N tools each need custom integrations for M models — a maintenance nightmare |
| MCP client | The LLM-powered app that consumes MCP servers (e.g., Claude Code) |
| MCP server | A small program that exposes a capability (Airtable, Slack, files, etc.) over MCP |
| Tool | An action the LLM can invoke via MCP — has side effects in the world |
| Resource | A piece of data the LLM can read via MCP — identified by a URI |
| Prompt (MCP) | A pre-written prompt template that an MCP server offers the client |