# How to choose an LLM inference provider in 2025

I've been building with LLMs in production for over five years now, ever since GPT-3's beta release. The landscape has evolved from "just use OpenAI" to an abundant array of providers, each with their own trade‑offs. Here's what I've learned about picking the right one for your use case.

## The simple truth about LLM providers

Every single provider has advantages and disadvantages. There's no universal "best" – just different tools optimized for different jobs. After years of building in this space, I've developed some heuristics that actually work.

To choose the right provider, **optimize for your constraint**. It might be latency, cost, reliability, or model quality. Pick one, maybe two. You can't have all four.

The good news: vendor lock‑in is dead. Everyone supports OpenAI‑compatible APIs now, so switching providers is usually just changing a base URL.
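To make that concrete, here's a minimal sketch using the official OpenAI Python SDK pointed at a few providers' OpenAI‑compatible endpoints. The model names are illustrative placeholders – check each provider's current catalog before copying this.

```python
from openai import OpenAI

# Same SDK, same call shape – only the base URL, key, and model name change.
PROVIDERS = {
    "openai":     {"base_url": "https://api.openai.com/v1",      "model": "gpt-4o-mini"},
    "groq":       {"base_url": "https://api.groq.com/openai/v1", "model": "llama-3.3-70b-versatile"},
    "openrouter": {"base_url": "https://openrouter.ai/api/v1",   "model": "deepseek/deepseek-chat"},
}

def complete(provider: str, api_key: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Switching providers is a config change, not a rewrite – which is exactly why staying loosely coupled costs so little.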
## A tour of the providers

### OpenAI: The obvious starting point

OpenAI remains the only provider for their GPT models, and they've made it remarkably easy to get started. Their free tier is generous – around \$500/month [if you're willing to let them use your data for training](https://platform.openai.com/settings/organization/data-controls/sharing) (which many prototypes can live with).

The developer experience is unmatched. Good documentation, stable APIs, and every tutorial assumes you're using them. You're paying a premium, but switching away later is trivial thanks to API compatibility.

### Anthropic: Where the coders live

Claude has become the model of choice for coding tasks. There's something about how these models were trained that makes them particularly effective at understanding and generating code. If you're not building coding features, you can usually find something cheaper elsewhere.

Their Opus model is genuinely state‑of‑the‑art for complex reasoning tasks. When you need the absolute best performance on sophisticated problems, Claude Opus and OpenAI's latest are your only real options. Here are the [LiveBench](https://livebench.ai) benchmark results showing Anthropic models at the top (if we exclude reasoning models). The dominance of Anthropic models for coding cannot be overstated: developers prefer these models not just for general coding ability but also for frontend style and taste.

![image](https://hackmd.io/_uploads/rkGxahBIlg.png)

### Inference.net: Cheap as dirt, with fine‑tuning experts

![image](https://hackmd.io/_uploads/S1b1ipBLxl.png)

Inference.net has a clever angle: they arbitrage compute prices, buying unused capacity when it's cheap. GPU farms and clouds don't always have their compute fully utilized, and when capacity sits idle, Inference swoops in and uses it for LLM inference. This lets them offer dirt‑cheap inference prices.

The real advantage of Inference is for time‑insensitive requests. If a request doesn't need an immediate response, Inference can wait until cheap compute is available, then return the response through a webhook or their highly scalable batch API. For bulk classification at internet scale, they're the clear winner.

They'll help you train smaller, task‑specific models to cut costs, or run stock small models when those are sufficient. Their in‑house team for training custom models also sets them apart: if open models are too expensive or slow for your use case, reach out and they'll [train you a better model and host it for you](https://docs.inference.net/fine-tuning/introduction).

### Groq, Cerebras, and SambaNova: Speed demons

These providers use custom silicon (Groq's LPUs, Cerebras' wafer‑scale chips) to achieve genuinely incredible latency: consistently sub‑200 ms time‑to‑first‑token and top‑tier tokens‑per‑second. Nothing else comes close.

But here's the reality: they host very few large models. Yes, you can run DeepSeek or Kimi on them, but they're plagued by capacity issues and their pricing makes them completely unsuitable if you're optimizing for cost.

My approach: use their free tiers for latency‑critical paths (voice interfaces, real‑time features), then fall back to conventional providers – see the sketch after this section. They're burst providers, not bulk providers.

Groq's Whisper (transcription) models are great though. Super fast and reasonably cheap.
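Here's a minimal sketch of that burst‑then‑fallback pattern, assuming OpenAI‑compatible endpoints on both sides. The model names and the choice of fallback provider are illustrative assumptions, not recommendations.

```python
from openai import OpenAI, APIStatusError, RateLimitError

# Fast path: a speed-demon provider for latency-critical requests.
# Fallback: any conventional OpenAI-compatible provider.
fast = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="GROQ_KEY")
bulk = OpenAI(base_url="https://api.openai.com/v1", api_key="OPENAI_KEY")

def low_latency_complete(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    try:
        resp = fast.chat.completions.create(
            model="llama-3.3-70b-versatile",  # illustrative model name
            messages=messages,
            timeout=5,  # fail fast if the burst provider is congested
        )
    except (RateLimitError, APIStatusError):
        # Free-tier quota or capacity exhausted – fall back to the bulk provider.
        resp = bulk.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content
```

The same shape works for Cerebras and SambaNova: treat them as burst capacity, not as your primary.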
### Google: Pretty, pretty, pretty good.

Google's position is fascinating. They're competitive across both small and large reasoning models, with excellent latency and aggressive pricing. Gemini Flash and Flash‑Lite are genuinely fast and cheap. Their free tiers are generous, and if you're a startup or enterprise, you might be able to negotiate credits. They are fast, cheap, and scale well.

The trade‑off: closed‑source models. Many enterprises prefer open weights they can audit and deploy on‑premise. Teams that just want inference that works increasingly turn to Google. It's genuinely hard to think of something negative to say about Google as an LLM provider, other than that open‑source models occasionally outperform theirs. Still, the big labs and the open‑source world are locked in a constant battle, and Google is consistently a major player. One caveat: their offerings are extremely confusing, split between Vertex AI and the Gemini API.

Here's when I'd use Google: you're very sensitive to price. Google's Flash is also the clear current market leader in small‑model use cases like translation and classification.

![image](https://hackmd.io/_uploads/HJBIy6SUxe.png)

### Together and Fireworks: The GPU warehouses

Together has built what amounts to a giant GPU cluster that can run a diverse array of open‑source models. They're reliable, enterprise‑grade, and prices reflect that – they've been creeping up compared to Inference, Novita, and DeepInfra, a privilege Together gets for being one of the earliest enterprise‑grade inference providers.

Fireworks started as inference‑optimization specialists but have similarly moved up‑market. They're still known for very fast, reliable inference. Both are solid choices when you need to run specific open models at scale with real SLAs. Pick whichever is faster/cheaper for your particular model.

### DeepInfra and Novita: Racing to the bottom

DeepInfra and Novita are run by super‑cracked engineers who are among the best in the world at optimized model serving. They exist to win the OpenRouter price leaderboards and will do anything to serve models as cheaply as possible. What's interesting is that Novita has recently shifted towards enterprise in the style of Together and Fireworks, while maintaining bottom‑of‑the‑barrel pricing. DeepInfra goes even further and serves models at approximately the price of electricity.

### Mistral: The enterprise play

Mistral has figured out enterprise sales. They're French, they have open‑source models, and they're exceptionally good at navigating procurement departments. If you're an enterprise that needs on‑premise deployment or specific compliance guarantees, Mistral will work with you. If you're a European company and want to make sure you're on the right side of regulations, they're a great choice.

Individual developers will rarely choose Mistral. Their document‑parsing API exists but isn't competitive – specialized providers like Reducto and Chunkr do that job far better. This is a company optimized for selling to large organizations, not indie hackers.

## LLM Provider Cheat Sheet (2025 Edition)

Here's my simple cheat sheet if you don't want to read all of the above.

- Need prototyping speed? → OpenAI
- Coding? → Anthropic Claude
- Real‑time voice? → Groq (until quota runs out)
- Batch/async jobs? → Inference.net, Google
- Complex PDFs? → Reducto, Chunkr (I'd avoid pure LLM providers, which aren't great at document parsing)
- Compliance‑heavy enterprise? → Mistral, Azure
- Open model variety at scale? → DeepInfra, Fireworks
- Dirt‑cheap inference? → Inference.net, DeepInfra
- Model distillation/fine‑tuning without in‑house expertise? → Inference.net

## What I actually do

I just use OpenRouter. Every single provider has reliability issues, and OpenRouter routes around them. But OpenRouter isn't a great choice if you need enterprise SLAs or want to save on their 5.5% fee. Many users start with OpenRouter, then migrate to a single provider that serves their needs.

## The 'Best' Provider

The "best" provider changes monthly. New models launch, prices shift, and free tiers evaporate. The only winning strategy is to stay loosely coupled and make switching cheap. With OpenAI‑compatible APIs everywhere, portability is easier than ever. Build with that in mind from day one – or just use OpenRouter – and you can chase the best price/performance ratio as the market evolves.

I am always interested in hearing how others navigate this landscape; reach me on Twitter at [@michael_chomsky](https://twitter.com/michael_chomsky).