# The MCP Workshop: Turning APIs into AI-Ready Tools

> 📌 Slido 📌
> https://app.sli.do/event/fpTwqeYSmDzmRExJqe6yxp
> Collaborative notes start here

[toc]

## What LLMs can and cannot do

LLMs can:

- Generate text
- Write code
- Author poems

The assumption is that you can use an LLM through an API/SDK.

- https://trygcp.dev/claim/df25-tp-nak
- https://github.com/tpiros/devfest-2025

## Function/Tool Calling

- Augment knowledge by pulling information from external sources like databases, APIs, and knowledge bases
- Extend capabilities using external tools for tasks
- Take action through APIs to interact with external systems

[The Importance of Precise Function Declarations for LLMs](https://tpiros.dev/blog/precise-function-declaration-llm/)

## Model Context Protocol

- Open standard for connecting AI models to tools
- MCP Server: exposes tools (+ resources, + prompts)
- MCP Client: discovers tools (+ resources, + prompts)
  - e.g. Gemini CLI, Cursor, Windsurf, and other tools

### MCP Examples

- Chrome DevTools
- Playwright
- Stripe
- Cloudinary
- And more: https://github.com/mcp

## Demo

Demo repo: https://github.com/tpiros/devfest-2025
https://tpiros.dev/blog/precise-function-declaration-llm/

```
npm run demo-1
```

---

## What if we'd like to have 10+ tools?

Some issues arise:

- Defining functions (and function schemas) separately for OpenAI, Anthropic, and Google

Example MCP servers are listed above (Chrome DevTools, Playwright, Stripe, Cloudinary, and more: https://github.com/mcp).

- Tip: ask the model for its source and the reason behind an answer

> But please remember: no AI tool is perfect

### Let's create an MCP server!

[MCP Inspector](https://www.npmjs.com/package/@modelcontextprotocol/inspector)

### Context Rot

- Context "pollution" by MCP servers needs to be accounted for
- A single MCP server with a large number of tools increases token usage
- Token (and cost) efficiency is hampered
- Too many tokens can lead to "context rot"

### Solution

- Use code execution instead of direct tool calls
- Treat MCP tools as code APIs the agent can import and call

### WebMCP

https://youtu.be/p1l8nkQAoUw?si=BTlFWXeGyT_IF7RT
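To make the "precise function declarations" point above concrete, here is a minimal sketch of a JSON-schema-style function declaration. The tool name (`get_weather`), its parameters, and the descriptions are invented for illustration; the exact wrapper shape varies between the OpenAI, Anthropic, and Google SDKs.

```typescript
// Hypothetical function declaration, JSON-schema style.
// Precise descriptions and constraints help the model decide
// when to call the tool and how to fill its arguments.
const getWeatherDeclaration = {
  name: "get_weather",
  description:
    "Returns the current weather for a city. Use only when the user explicitly asks about weather.",
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "City name, e.g. 'Taipei'. Do not pass country codes.",
      },
      unit: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        description: "Temperature unit; defaults to celsius if omitted.",
      },
    },
    required: ["city"],
  },
};
```

Note how each description states not just what the parameter is, but what the model should and should not put into it; that is the kind of precision the linked blog post argues for.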
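The "code execution instead of direct tool calls" idea above can be sketched as follows. `callMcpTool`, the tool names, and the wrapper shape are assumptions for illustration, not a real MCP client API: the point is that agent-generated code imports thin typed wrappers and composes them locally, so intermediate results never pass through the model's context.

```typescript
type ToolResult = { text: string };

// Stand-in for a real MCP client invocation (hypothetical).
// A real implementation would send the call over an MCP transport.
async function callMcpTool(
  name: string,
  args: Record<string, unknown>
): Promise<ToolResult> {
  return { text: `${name}(${JSON.stringify(args)})` };
}

// Thin typed wrappers over MCP tools. Generated code calls these
// like ordinary functions instead of emitting tool-call messages.
const tools = {
  searchProducts: (query: string) => callMcpTool("search_products", { query }),
  getPrice: (id: string) => callMcpTool("get_price", { id }),
};

// Example composition: intermediate rows are filtered locally,
// and only the final answer returns to the model's context.
async function cheapestMatch(query: string): Promise<string> {
  const results = await tools.searchProducts(query);
  // ...filter/sort locally instead of streaming every row through the LLM...
  return results.text;
}
```

Compared with direct tool calls, this keeps token usage flat as the tool count grows: the model sees one short result instead of every tool schema and every intermediate payload, which is exactly the context-rot mitigation described above.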