# Jan's Roadmap Blogpost
- Goal 1: Partners should understand where we are headed
  - Nvidia
  - Microsoft
  - ARM
  - Hugging Face
- Goal 2: Community should understand where we are going
  - Shared on r/LocalLLaMA
  - Amplified by our partners
## Jan's Status Quo
> I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.
- So many models, but none provide a great UX
  - Deep Research
  - Tool use
  - MCP
- Future of AI is integrated
  - Application
  - Model
  - Tooling & Infra
Where we are now: we're not solving user needs or problems.
"Experimentation with AI" is a very niche and non-paying market, and we contribute upwards to llama.cpp for this
Our DNA is a product company, not a developer tool
## Jan's New Goals
### Making it simple
- Simplest, easiest
  - "It just works"
  - We choose the defaults
  - We make everything cohere across a fragmented ecosystem
- Feature complete
  - A full alternative to the closed ecosystem
  - An actually viable replacement
- We are not optimizing for developers
  - You shouldn't need a CS degree and an r/LocalLLaMA account to use open-source AI
- Users should not have to know what these things are:
  - MCP servers
  - top_k, temperature
  - llama.cpp
### Simple, but Prosumer-friendly
> All of the powerful stuff is still here.
> We're going to make it simple,
> and we're going to make it beautiful
- Prosumers
  - It's like a Mac
  - All the advanced stuff is in settings (MCP, etc.)
  - But out of the box, "it just works"
- Personal AI
  - Self-host
  - It's all private
  - Connect it to your Gmail and everything else you don't want Sam Altman to know about
- Small and medium enterprises (SMEs) that want to self-host
  - 2-50 employees
  - Buy an Nvidia DGX Spark, plug it into the office router, and you have a full replacement for OpenAI (see the sketch below)
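
A minimal sketch of what "full replacement for OpenAI" could look like in practice, assuming the box exposes an OpenAI-compatible endpoint on the office LAN; the hostname, port, API key, and model name are illustrative placeholders, not actual Jan defaults. The point is that anything already speaking the OpenAI API only needs a base URL change.

```python
# Hypothetical: existing OpenAI-SDK code keeps working once it points at the
# self-hosted box on the office LAN instead of api.openai.com.
from openai import OpenAI

client = OpenAI(
    base_url="http://dgx-spark.office.local:1337/v1",  # placeholder LAN address
    api_key="not-needed-for-local",                    # local servers typically ignore this
)

reply = client.chat.completions.create(
    model="jan-nano",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this quarter's sales notes."}],
)
print(reply.choices[0].message.content)
```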
## What we're going to do
### Jan is focused on solving problems
- An agent that solves a problem
- A personal AI that you can trust
  - Integrates your email and calendar, but runs locally
- Languages that we care about (reflecting the geographies our team is based in)

> In the future, Jan for Teams will focus on small and medium enterprises

- Easy connectors to critical enterprise systems
### "Model First" Approach
```
App + Agentic Model = ChatGPT (OpenAI)
```
- Agentic Model
  - Jan will have our own model
  - Fine-tuned on our application stack
  - Built to deliver user value
- Jan's first model
  - Jan Nano (local)
  - Search capability
  - Open-sourced, just like everything else
- What we're going to do next
  - Agentic tool calling (see the sketch below)
  - Language and translation
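
To make "agentic tool calling" concrete, here is a rough sketch of a single tool-calling turn against an OpenAI-compatible chat endpoint. The endpoint, the `jan-nano` model name, and the `web_search` tool are assumptions for illustration only, not Jan's actual API surface.

```python
# Hypothetical sketch of one agentic tool-calling turn (placeholder URL, model,
# and tool). The model decides to call a tool; the app runs it and returns the result.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="local")

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",  # illustrative tool, not a real Jan tool name
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What changed in llama.cpp this week?"}]
first = client.chat.completions.create(model="jan-nano", messages=messages, tools=tools)
assistant_msg = first.choices[0].message

if assistant_msg.tool_calls:  # the model asked the app to run a tool
    call = assistant_msg.tool_calls[0]
    fake_result = {"snippets": ["(search results would go here)"]}  # app executes the tool
    messages.append(assistant_msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(fake_result)})
    final = client.chat.completions.create(model="jan-nano", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(assistant_msg.content)
```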
### Hybrid AI: Local & Cloud
- Jan will always be local-first
- However, we see a need to provide cloud AI for the sake of user experience (see the routing sketch below)
  - For most users on low-spec devices, local AI is not viable
    - Some of our team are in regions where a typical phone costs $100; local AI simply isn't an option there
- Jan Self-Hosted
  - Menlo API Platform (that you can self-host)
  - Supports a team using Jan Self-Hosted
- Related: https://www.reddit.com/r/LocalLLaMA/comments/1lk0cjv/jan_nano_deepseek_r1_combining_remote_reasoning/
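
As one way to picture the hybrid approach, here is a sketch that prefers a local OpenAI-compatible server and falls back to a cloud endpoint when the local one is unreachable. All URLs, keys, model names, and the fallback rule itself are assumptions for illustration, not a description of Jan's actual routing.

```python
# Hypothetical hybrid routing: local first, cloud only as a fallback.
# Every URL, key, and model name below is a placeholder.
import httpx
from openai import OpenAI

LOCAL = {"base_url": "http://localhost:1337/v1", "api_key": "local", "model": "jan-nano"}
CLOUD = {"base_url": "https://api.example-cloud.ai/v1", "api_key": "sk-placeholder", "model": "cloud-large"}

def pick_backend() -> dict:
    """Use the local server if it responds quickly; otherwise fall back to cloud."""
    try:
        httpx.get(LOCAL["base_url"] + "/models", timeout=1.0)
        return LOCAL
    except httpx.HTTPError:
        return CLOUD

backend = pick_backend()
client = OpenAI(base_url=backend["base_url"], api_key=backend["api_key"])
reply = client.chat.completions.create(
    model=backend["model"],
    messages=[{"role": "user", "content": "Hello from a hybrid setup."}],
)
print(f"[{backend['model']}] {reply.choices[0].message.content}")
```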
## User Needs
### Consumer
- I want to compare models
  - Gemini Deep Research
  - OpenAI Deep Research
  - e.g. Msty
- I want to bring my own memory (with switchable models)
  - Bring your own memory
  - It provides the context
  - Switch between models underneath it
- I want consumer use cases covered
  - Wispr Flow
  - Just replicate the features
- I want to have a personal AI
  - Accesses my computer's context
  - Searches my computer
  - Has all of my context
    - Gmail, files, etc.
### Enterprise
- I am a business that wants to self-host AI
  - ICPs (ideal customer profiles)
    - 20-50 employees
    - $50 per month
  - Be able to use AI with 100% guarantees
    - Files will not end up in a competitor's hands
  - Use Nvidia's distribution network
    - DGX Spark
- Features
  - Analytics: what is the team asking?
  - Compliance, Audit
  - Building AI Avatars
  - Building knowledge bases
### Services
- Nvidia has been advising us to start a services wing
  - Custom Agents
  - Digital Twins
  - Virtual Avatars (e.g. HeyGen)
- Features
  - White-label, custom domains, etc.
- For Enterprises
  - Have an AI CEO you can ask questions
  - AI SDR
  - AI Sales Rep
- Working with partners?
  - Thinking Machines