# Jan Server
This is a list of Problem Statements that we will be working towards solving.
Long-term, Jan Server is competing with:
- "Self-Hosted AI"
- ChatGPT Enterprise
- Cohere, Mistral etc
- OpenWebUI, AnythingLLM Server
- LLM Gateway/Router
- LiteLLM, Portkey, etc
## User 1: Jan Cloud
### Jan's Cloud
```
- Jan Cloud version: https://ask.jan.ai
- Jan Desktop can connect to Jan Cloud
```
### Jan Cloud is open source and self-hostable
- Jan Server powers Jan Cloud
- Apache 2.0 licensed outside of `/ee` folder
```
- /ee is Enterprise Licensed
  - e.g. Billing
```
### Jan Cloud to collect Model Training data
```
- LMArena-style side-by-side comparison of 2 models or agents (similar to Yupp or LMArena; data shapes sketched below)
- RLHF signals (vote up/down)
```
Note: Code for Data Collection should be Enterprise Licensed (i.e. `/ee`)
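A rough sketch of the records this feature could produce, in TypeScript for illustration. The type and field names are assumptions, not an existing Jan Server schema.
```
// Hypothetical shapes for the training data Jan Cloud could collect.
// Type and field names are illustrative, not an existing Jan Server schema.

// RLHF-style signal on a single response.
interface VoteRecord {
  conversationId: string;
  messageId: string;
  model: string;           // e.g. "kimi-k2" or "gpt-5"
  vote: "up" | "down";
  votedAt: string;         // ISO 8601 timestamp
}

// LMArena-style side-by-side comparison of two models or agents.
interface ComparisonRecord {
  prompt: string;
  left: { model: string; response: string };
  right: { model: string; response: string };
  winner: "left" | "right" | "tie";
  comparedAt: string;      // ISO 8601 timestamp
}
```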
### Jan Cloud to support Teams
```
- Jan for Teams
- Signing up lets a team create its own subdomain
  - e.g. https://menlo.jan.ai
- SaaS
  - Easier than self-hosting
```
Note: Code for Teams should be Enterprise Licensed (i.e. `/ee`)
### Jan Cloud to do billing
- Dan's note: Enterprise Licensed feature
```
- Consumer subscription plan
- Pay-as-you-go
  - Similar to OpenRouter
```
Note: Billing Code should be Enterprise Licensed (i.e. `/ee`)
### Jan Cloud to offer MCP-as-a-Service
- Offer MCP Servers as APIs
  - Run by Jan Cloud (e.g. we self-host Firecrawl)
- Possible: route to existing MCP Providers
  - "OpenRouter for MCPs?"
  - e.g. Exa (passthrough billing)
```
- Run popular MCP servers as-a-Service
  - e.g. Firecrawl or Opencrawl
  - e.g. Browser Use
  - e.g. Search
- Route to popular MCP services
  - e.g. Exa
- Focus on powering https://ask.jan.ai first
```
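A minimal sketch of how this routing could be represented, assuming a simple registry that maps an MCP server name either to a Jan Cloud-hosted instance or to a passthrough external provider. All names and URLs below are placeholders.
```
// Hypothetical MCP routing table: self-hosted servers run by Jan Cloud vs.
// passthrough routes to external providers. Names and URLs are placeholders.

type McpRoute =
  | { kind: "self-hosted"; name: string; url: string }
  | { kind: "passthrough"; name: string; url: string; billing: "passthrough" };

const routes: McpRoute[] = [
  { kind: "self-hosted", name: "firecrawl", url: "http://firecrawl.internal:3002" },
  { kind: "self-hosted", name: "browser-use", url: "http://browser-use.internal:3003" },
  { kind: "passthrough", name: "exa", url: "https://exa.example/mcp", billing: "passthrough" },
];

// Resolve a requested MCP server name to the backend that should serve it.
function resolveMcp(name: string): McpRoute {
  const route = routes.find((r) => r.name === name);
  if (!route) throw new Error(`Unknown MCP server: ${name}`);
  return route;
}
```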
## User 2: Enterprise Self-Host
- Primary: AWS/GCP/Azure/Oracle
- Secondary: On-Premise
### "I want to give my staff ChatGPT"
```
- 50-person org that wants to share access to Gemini, GPT-5, and Local Models
  - GPT-5
  - Local Kimi K2
- Provision accounts for your staff, without paying per-seat licensing
  - ChatGPT for Teams: $20/head
  - Claude for Teams
  - Typing Mind
- Dogfood this internally within Menlo
- Enterprise-friendly Features
  - Audit trails
  - PII Detection
  - Smart Routing
    - If PII is detected, route to a local model (see the sketch below)
```
Customer: Kuok Group
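A minimal sketch of the Smart Routing idea above: if PII is detected in a prompt, send it to a local model instead of a hosted provider. The regexes, model names, and URLs are illustrative placeholders, not a production PII detector.
```
// Sketch of PII-aware routing. Patterns, model names, and URLs are placeholders.

const PII_PATTERNS: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/,         // US SSN-like pattern
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/,   // email address
  /\b(?:\d[ -]?){13,16}\b/,        // possible credit card number
];

function containsPii(text: string): boolean {
  return PII_PATTERNS.some((p) => p.test(text));
}

// Pick a backend for the request: local model if PII is present, hosted otherwise.
function routeModel(prompt: string): { model: string; baseUrl: string } {
  if (containsPii(prompt)) {
    return { model: "kimi-k2", baseUrl: "http://localhost:8000/v1" };   // local vLLM
  }
  return { model: "gpt-5", baseUrl: "https://api.openai.com/v1" };      // hosted provider
}
```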
### "I want to deploy my own custom Agent"
```
- A startup might choose to deploy its model on Jan Server
  - Local or Remote model
  - Custom MCP or Tools
  - Collect Evals
- Similar to how people use OpenWebUI
```
- Dan's instinct: the Agent composes on top of Jan Server APIs (see the sketch below)
  - Jan Server provides Models and MCP Servers
  - i.e. the Agent is a Repo
- Dify-style Visual Designers have not worked well
  - Agents move to code after the MVP
- n8n etc. are great, but fragile
  - Better for Jan Server to complement n8n than to compete with it
  - i.e. Jan Server focuses on MCP, Models, and Evals
Customer: Kuok Group
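A minimal sketch of an agent composing on top of Jan Server, assuming Jan Server exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the base URL, port, model name, and prompt are assumptions for illustration.
```
// Sketch of an agent built as plain code on top of Jan Server's APIs.
// Assumes an OpenAI-compatible endpoint; URL, port, and model are placeholders.

const JAN_SERVER = "http://localhost:1337/v1";

async function askAgent(question: string): Promise<string> {
  const res = await fetch(`${JAN_SERVER}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "kimi-k2",
      messages: [
        { role: "system", content: "You are a customer-support agent." },
        { role: "user", content: question },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Jan Server returned ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```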
### "I want a scalable Ollama"
```
- Basic MLOps
  - vLLM-based (vs. llama.cpp)
- Model Management (client sketch below)
  - `/models/pull` (pull from HF)
  - `/models/delete`
  - `/models/load`
- Autoscaling of models on GPUs
  - "Serverless"-style scaling
```
Customer: Kuok Group
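A minimal client sketch for the model-management endpoints listed above. Only the paths come from this doc; the base URL and request bodies are assumptions.
```
// Sketch of a model-management client. Only the endpoint paths are from this
// doc; base URL and request/response shapes are assumptions.

const BASE = "http://localhost:1337";

async function callModels(path: string, model: string): Promise<void> {
  const res = await fetch(`${BASE}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model }),
  });
  if (!res.ok) throw new Error(`${path} failed with ${res.status}`);
}

// Pull a model from Hugging Face, load it onto a GPU, or delete it.
const pullModel   = (repo: string)  => callModels("/models/pull", repo);   // e.g. "Qwen/Qwen2.5-7B-Instruct"
const loadModel   = (model: string) => callModels("/models/load", model);
const deleteModel = (model: string) => callModels("/models/delete", model);
```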
