# Semantic Chunking In The Cloud With Chonkie RAG is becoming ubiquitous, but the dark side of RAG is that behind every single good RAG pipeline is an underlayer of chunking, embedding, and reranking. Often, each of these becomes a separate microservice to deploy. ## The Frontend Developer's Dilemma Frontend developers often want to process documents without having to deploy and scale a dedicated chunking microservice. There are a few tools for that, but chunking libraries in the JavaScript ecosystem aren't very sophisticated and don't support semantic chunking. Even managing chunking services in the Python world is far from trivial. Especially when you start using embedding models for semantic chunking, things become quite complex. We should be able to deploy a top tier RAG pipeline without DevOps expertise or managing infrastructure! ## Existing Solutions Jina AI has a set of excellent microservices you can use for free (in a limited capacity) that can chunk, scrape, or even search. But their library focuses on simple Regex chunking and doesn't have advanced semantic chunking features, which is now state-of-the-art. ## Enter Chonkie That's why I was so excited when my favorite chunking library, Chonkie, released a cloud offering. I was the first open source contributor to Chonkie because I was looking to build something exactly like it—but when I discovered it, I found it already solved many of my problems around advanced chunking. When it was released, it quickly racked up thousands of stars, and for good reason--it's much faster, more straightforward, and robust than bloated libraries like Llamaindex and Langchain. # Advanced Chunking Techniques in Chonkie Here are some advanced chunking techniques that Chonkie supports: ## Semantic Chunking ![image](https://hackmd.io/_uploads/r1bLMKIJxg.png) Semantic chunking is an extremely performant chunking method that looks at changes in semantic similarity in sentences when segmenting text. This means sections will have sentences relevant to each other. Often, this is about the closest thing you can get to human or LLM quality segmentation. There are different algorithms that can be used to decide exactly how the boundaries are determined using semantic vectors of individual sentences, and Chonkie gives you full control of hyperparameters to play around with. In practice, this is best for essays, unstructured text, blogs, and articles. ## Recursive Chunking ![image](https://hackmd.io/_uploads/SJaEQYU1ex.png) This takes into account the hierarchical structure of documents. This is best when your document has clear boundaries—if you can split by paragraph, for example. It's also excellent for PDF parsing in many cases. Often PDFs already have a lot of structure, so we don't have to use a complex chunking method and can just use the layout of the document to chunk. Since Chonkie supports many file formats including PDFs, you can potentially use it instead of Unstructured and other PDF parsing services. ## Late Chunking ![image](https://hackmd.io/_uploads/HkhDNFLJle.png) Late chunking is something I haven't seen outside of Chonkie, and it's a very clever semantic segmentation strategy. Semantic chunking works by taking embeddings of each sentence and comparing similarities between consecutive sentences to chunk. However, embedding individual sentences can create embeddings that don't have the context of the entire document. Late chunking is an optimization that first processes the entire document/page, then creates embeddings for individual chunks based on individual token embeddings. This is just a semantic chunking optimization, but if you are interested in learning more check out my article on this strategy: https://medium.com/towards-artificial-intelligence/easy-late-chunking-with-chonkie-7f05e5916997 # Practical Implementation Now let's look at an example architecture that processes a PDF with Chonkie: ![image](https://hackmd.io/_uploads/rJwI8K8kxg.png) Notice how we don't use Python at all, and don't have to deploy a dedicated microservice? Every step of our RAG pipeline is deployed for us by a separate provider, allowing us to move extremely fast while producing extremely high quality chunks and embeddings. # Let's look at some example code. We'll be making a simple chunk preview web app with Next.js and Shadcn/UI. ![image](https://hackmd.io/_uploads/SkbZ9KUJll.png) Here's the code for the frontend. To get started, we'll need to get an API key from Chonkie at chonkie.ai. Again, this can be done with the Chonkie library as well, which also has useful utilities for visualization and data processing! The most important for me is the visualization utility: ![image](https://hackmd.io/_uploads/HynO5YIyex.png) ## Example Implementation Let's look at how to implement a simple chunk visualizer app using Next.js and Shadcn/UI. First, we'll need to set up our server action to call the Chonkie API: ```typescript // app/actions.ts "use server" interface ChunkTextParams { text: string embeddingModel: string chunkSize: string recipe: string lang: string minCharsPerChunk: string } export async function chunkText(params: ChunkTextParams) { try { const form = new FormData() form.append("text", params.text) form.append("embedding_model", params.embeddingModel) form.append("chunk_size", params.chunkSize) form.append("recipe", params.recipe) form.append("lang", params.lang) form.append("min_characters_per_chunk", params.minCharsPerChunk) // Using node-fetch in server components const response = await fetch("https://api.chonkie.ai/v1/chunk/late", { method: "POST", headers: { Authorization: `Bearer YOUR_API_KEY_HERE`, }, // Let fetch set the correct Content-Type with boundary body: form, }) if (!response.ok) { const errorText = await response.text() console.error("API error:", response.status, errorText) return { error: `API error: ${response.status} - ${errorText || "Unknown error"}`, } } const data = await response.json() if (Array.isArray(data)) { // The API returns an array of chunk objects return { chunks: data.map((chunk) => chunk.text), } } else { console.error("Unexpected response format:", data) return { error: "Unexpected response format from API" } } } catch (error) { console.error("Server action error:", error) return { error: error instanceof Error ? error.message : "Unknown error occurred", } } } ``` Next, let's create our main page component that will allow users to input text and see how it gets chunked: ```typescript // app/page.tsx "use client" import { useState } from "react" import { Button } from "@/components/ui/button" import { Textarea } from "@/components/ui/textarea" import { Card, CardContent } from "@/components/ui/card" import { Slider } from "@/components/ui/slider" import { Label } from "@/components/ui/label" import { Input } from "@/components/ui/input" import { ChevronRight, Loader2 } from "lucide-react" import { chunkText } from "@/app/actions" interface Chunk { text: string index: number } export default function ChunkVisualizer() { const [text, setText] = useState("") const [chunks, setChunks] = useState<Chunk[]>([]) const [loading, setLoading] = useState(false) const [error, setError] = useState<string | null>(null) const [chunkSize, setChunkSize] = useState(512) const [minChars, setMinChars] = useState(24) const [embeddingModel, setEmbeddingModel] = useState("sentence-transformers/all-minilm-l6-v2") const [recipe, setRecipe] = useState("default") const [lang, setLang] = useState("en") const handleChunk = async () => { if (!text.trim()) return setLoading(true) setError(null) try { const result = await chunkText({ text, embeddingModel, chunkSize: chunkSize.toString(), recipe, lang, minCharsPerChunk: minChars.toString(), }) if (result.error) { setError(result.error) setChunks([]) } else if (result.chunks) { setChunks( result.chunks.map((chunk: string, index: number) => ({ text: chunk, index, })), ) } } catch (err) { console.error("Error in client:", err) setError("Failed to process request. Please try again.") setChunks([]) } finally { setLoading(false) } } return ( <div className="container mx-auto py-8 max-w-4xl"> <h1 className="text-3xl font-bold mb-6">Text Chunk Visualizer</h1> {/* Parameters UI */} <div className="grid grid-cols-1 md:grid-cols-2 gap-6 mb-6"> <div> <Label htmlFor="chunkSize">Chunk Size: {chunkSize}</Label> <Slider id="chunkSize" value={[chunkSize]} min={100} max={1000} step={1} onValueChange={(value) => setChunkSize(value[0])} className="my-2" /> </div> <div> <Label htmlFor="minChars">Min Characters Per Chunk: {minChars}</Label> <Slider id="minChars" value={[minChars]} min={10} max={100} step={1} onValueChange={(value) => setMinChars(value[0])} className="my-2" /> </div> </div> {/* More parameters */} <div className="grid grid-cols-1 md:grid-cols-3 gap-6 mb-6"> <div> <Label htmlFor="embeddingModel">Embedding Model</Label> <Input id="embeddingModel" value={embeddingModel} onChange={(e) => setEmbeddingModel(e.target.value)} className="mt-1" /> </div> <div> <Label htmlFor="recipe">Recipe</Label> <Input id="recipe" value={recipe} onChange={(e) => setRecipe(e.target.value)} className="mt-1" /> </div> <div> <Label htmlFor="lang">Language</Label> <Input id="lang" value={lang} onChange={(e) => setLang(e.target.value)} className="mt-1" /> </div> </div> {/* Text input */} <div className="mb-6"> <Label htmlFor="text">Text to Chunk</Label> <Textarea id="text" value={text} onChange={(e) => setText(e.target.value)} placeholder="Enter text to chunk..." className="h-40 mt-1" /> </div> {/* Action button */} <Button onClick={handleChunk} disabled={loading || !text.trim()} className="mb-8"> {loading ? ( <> <Loader2 className="mr-2 h-4 w-4 animate-spin" /> Processing... </> ) : ( <> Chunk Text <ChevronRight className="ml-2 h-4 w-4" /> </> )} </Button> {/* Error display */} {error && ( <div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded mb-6"> <p className="font-medium">Error:</p> <p>{error}</p> </div> )} {/* Results display */} {chunks.length > 0 && ( <div> <h2 className="text-2xl font-semibold mb-4">Chunks ({chunks.length})</h2> <div className="space-y-4"> {chunks.map((chunk, index) => ( <Card key={index} className="overflow-hidden"> <CardContent className="p-4"> <div className="flex items-center justify-between mb-2"> <span className="text-sm font-medium text-gray-500">Chunk #{index + 1}</span> <span className="text-xs bg-gray-100 px-2 py-1 rounded-full">{chunk.text.length} characters</span> </div> <p className="text-sm whitespace-pre-wrap">{chunk.text}</p> </CardContent> </Card> ))} </div> </div> )} </div> ) } ``` With this implementation, users can input text and adjust chunking parameters like chunk size, minimum characters per chunk, embedding model, and recipe. The application then sends the text to the Chonkie API, which processes it using the specified parameters and returns the chunked text. The result is displayed in a visually pleasing card-based interface. ## Configuring Chunking Parameters The Chonkie API offers several parameters to customize your chunking strategy: 1. **Embedding Model**: Choose from various embedding models like `sentence-transformers/all-minilm-l6-v2` (default), which affects the semantic understanding of your text. 2. **Chunk Size**: Controls the target size of each chunk in characters. 3. **Min Characters Per Chunk**: Sets a minimum threshold for chunk size to avoid tiny, meaningless chunks. 4. **Recipe**: Specifies the chunking algorithm to use: - `default`: Standard semantic chunking - `recursive`: Hierarchical chunking that respects document structure - `late`: Advanced semantic chunking with improved context awareness 5. **Language**: Optimizes chunking for specific languages (default is English). ## Beyond Chunking: Building a Complete RAG Pipeline This example shows how easy it is to create a chunking interface for your RAG pipeline. For a complete solution, you would: 1. Store the chunks in a vector database like KDB.AI, who's cloud offering has an excellent free tier! 2. Create embeddings for each chunk using an embedding provider like OpenAI/Cohere (or a dedicated optimized deployment/microservice if you want to squeeze out all possible performance) 3. Build a retrieval mechanism that finds the most relevant chunks for a given query 4. Pass these chunks to your LLM along with your prompt The beauty of using a service like Chonkie is that you don't need to maintain a separate microservice for chunking - it's all handled for you. ## Conclusion Building a RAG system is something every AI engineer will go through, yet it's not currently a simple task to create a high quality pipeline. The quality of chunks is often ignored, resulting in low quality retrievals. An well thought out open-source tool like Chonkie is a great way to make sure you are making good chunking decisions without having to be a chunking expert. For more information on Chonkie and to get your API key, visit [chonkie.ai](https://chonkie.ai).