# Jan: Personal AI
> This is where I see Jan in Dec 2024 (10-month roadmap)
Jan is a Personal AI that you own, forever.
- Can learn knowledge, connect to your systems, do work for you
- Runs local-first for privacy, but can also connect to APIs (e.g. GPT-4)
- Customization-friendly, with a powerful Extensions API
Jan is open-source, has been downloaded 250,000+ times, and has a vibrant developer and user community.
### Who we are
Jan is created and maintained by Jan Labs, a robotics company.
Jan Labs' mission is to advance human-machine collaboration. We will achieve that by building a cognitive framework for future machines that live in harmony with humanity.
# Products
## Jan
Jan is a Personal AI that can be summoned via a "Quick Ask" hotkey or used as a desktop or mobile app.
Jan is teachable:
- Can learn PDFs, documents, webpages
- Remembers all previous conversations (with an incognito mode for off-the-record conversations)
- You can "correct" Jan's answers (which fine-tunes the model under the hood)
Jan can use tools:
- Jan can browse the web, use the computer, do work for you
- Can connect to your tools via Tool Extensions (e.g. Gmail, Calendar)
- Tools can retrieve information (e.g. Gmail search, Web search, Notion search)
- Tools can perform actions (e.g. use the computer, draft an email)
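As a rough illustration of how a Tool Extension could surface a tool to the model, here is a minimal sketch using the OpenAI function-calling schema. All names (`gmail_search`, `searchGmail`, `handleToolCall`) are hypothetical and not Jan's actual Extensions API.
```ts
// Hypothetical sketch of a Tool Extension. The tool is described with the
// OpenAI function-calling schema so the model can decide when to invoke it.
// All names here are illustrative, not Jan's actual API.
const gmailSearchTool = {
  type: "function",
  function: {
    name: "gmail_search",
    description: "Search the user's Gmail inbox and return matching snippets",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "Gmail search query, e.g. 'from:alice is:unread'",
        },
      },
      required: ["query"],
    },
  },
};

// Placeholder retrieval tool: a real extension would call the Gmail API here.
async function searchGmail(query: string): Promise<string[]> {
  return [`(stub) results for: ${query}`];
}

// Jan would route a model-emitted tool call to the matching handler,
// then feed the result back into the conversation.
async function handleToolCall(name: string, args: { query: string }) {
  if (name === "gmail_search") return searchGmail(args.query);
  throw new Error(`Unknown tool: ${name}`);
}
```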
Jan uses a switchable default model:
- Jan supports both local LLMs and cloud APIs via Provider Extensions
- You can swap out the default model for the latest LLM, without losing knowledge
- You can configure Jan with default online and offline models
- Local LLMs are run using Cortex, which also handles model installation
- Cloud APIs are available as Provider extensions (e.g. GPT-4, Gemini, DeepInfra, Predibase)
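To make the "switchable default model" idea concrete, a minimal sketch follows. The config shape and provider ids ("cortex", "openai") are assumptions for illustration, not Jan's actual schema.
```ts
// Hypothetical sketch of default-model configuration; shape and ids are
// illustrative, not Jan's actual schema.
interface ModelConfig {
  provider: string; // which Provider Extension serves this model
  model: string;    // model identifier understood by that provider
}

interface JanDefaults {
  offline: ModelConfig; // fully local inference (via Cortex)
  online: ModelConfig;  // cloud API inference, when allowed
}

const defaults: JanDefaults = {
  offline: { provider: "cortex", model: "llama3-8b-instruct-q4" },
  online: { provider: "openai", model: "gpt-4" },
};

// Swapping the default model is just a config change; per the list above,
// Jan's knowledge is preserved across swaps.
defaults.offline = { provider: "cortex", model: "newer-local-model" };
```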

## Cortex
> - I propose renaming Nitro -> Cortex
> - The value proposition is "coordination" (vs. "fast"), similar to the [prefrontal cortex](https://en.wikipedia.org/wiki/Prefrontal_cortex)
> - Long-term, Cortex will be an embeddable AI engine for robotics
Cortex is a production-grade, multi-modal local AI engine designed to run on consumer hardware.
Cortex emulates multi-modal AI by wrapping several libraries in a single binary:
- llama.cpp
- whisper.cpp
- TBD: stablediffusion.cpp
- Accessible via [OpenAI-compatible API](https://platform.openai.com/docs/api-reference)
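For instance, a client could reach a locally running Cortex server with a standard OpenAI-style chat completion request. This is a minimal sketch; the port (1337) and model name are assumptions for illustration.
```ts
// Minimal sketch: calling a local Cortex server through its
// OpenAI-compatible API. Port and model name are assumptions.
async function chat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:1337/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3-8b-instruct-q4",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const json = await res.json();
  // OpenAI-style response shape: choices[0].message.content
  return json.choices[0].message.content;
}

chat("Summarize my last meeting notes.").then(console.log);
```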
Cortex implements production-grade AI engine features:
- Queue system and error recovery
- On-demand model loading and unloading, with support for multiple active models
- Resource monitoring and observability
- GPU acceleration, but runs fine in CPU-only environments
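To make the on-demand loading point concrete, here is a conceptual sketch (not Cortex's actual implementation): models are loaded on first use, time-stamped on each request, and unloaded after an idle period to free RAM/VRAM.
```ts
// Conceptual sketch of on-demand model loading/unloading; not Cortex's
// actual code. Multiple models can be resident at once; idle ones are evicted.
class ModelManager {
  private loaded = new Map<string, { lastUsed: number }>();

  async ensureLoaded(model: string): Promise<void> {
    const entry = this.loaded.get(model);
    if (entry) {
      entry.lastUsed = Date.now(); // model already resident, refresh timestamp
      return;
    }
    // ...load weights into RAM/VRAM here...
    this.loaded.set(model, { lastUsed: Date.now() });
  }

  unloadIdle(maxIdleMs: number): void {
    const now = Date.now();
    for (const [model, state] of this.loaded) {
      if (now - state.lastUsed > maxIdleMs) {
        // ...free the model's memory here...
        this.loaded.delete(model);
      }
    }
  }
}
```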
Cortex handles local fine-tuning and training:
- Embedding generation
- Fine-tuning runs and jobs (TBD)
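Embedding generation would presumably be reachable through the same OpenAI-compatible surface. A hedged sketch follows; the endpoint path, port, and model name are assumptions.
```ts
// Sketch of local embedding generation via an OpenAI-style /v1/embeddings
// endpoint. Port, path availability, and model name are assumptions.
async function embed(texts: string[]): Promise<number[][]> {
  const res = await fetch("http://localhost:1337/v1/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "local-embedding-model", input: texts }),
  });
  const json = await res.json();
  return json.data.map((d: { embedding: number[] }) => d.embedding);
}
```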
# Appendix
## Jan's Architecture
Jan has a modular architecture built on Extensions:
- Tool Extensions: can be function-called by LLMs (e.g. external APIs)
- Provider Extensions: handle inference and training
- Themes: customize UI
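As a rough sketch of how these extension types might look in code (interface names and shapes are hypothetical, not Jan's actual Extensions API):
```ts
// Illustrative extension interfaces; hypothetical, not Jan's actual API.

// Tool Extensions expose functions the LLM can call.
interface ToolExtension {
  name: string;
  schema: object; // JSON schema for the tool's parameters (function calling)
  run(args: Record<string, unknown>): Promise<unknown>;
}

// Provider Extensions handle inference (local via Cortex, or cloud APIs).
interface ProviderExtension {
  id: string; // e.g. "cortex", "openai"
  chat(messages: { role: string; content: string }[]): Promise<string>;
}

// Themes customize the UI.
interface ThemeExtension {
  id: string;
  stylesheet: string; // CSS payload or path applied to the app shell
}
```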

## Jan's Data Feedback Loop
