# Jan: A Robotics Company
![](https://hackmd.io/_uploads/B14bGyCTn.png)
_Joi from Blade Runner 2049_
Jan is an AI company that builds the cognitive framework for future robots that augment humans and organizations.
Jan can be used to create:
- "Personal Assistants" like [Jarvis](https://www.youtube.com/watch?v=EfmVRQjoNcY) and [Cortana](https://www.youtube.com/watch?v=SuaCdq3Yi9E)
- "Personal Companions" like [Joi](https://www.youtube.com/watch?v=HXQmaObZrFM) and [C3PO](https://youtube.com/watch?v=eUH2_n8jE70)
- AI Employees that take on grunt work
## Plans
We plan across three time horizons:
- 1-month sprint
- 3-month hypothesis
- 10-year vision
As a bootstrapped company, we earn the right to work towards the 10-year vision:
- Listen to user problems and solve them to reach product-market fit
- Maintain positive cash flow by prioritizing problems that customers will pay us to solve
## 1-Month Sprint
### Strategy
- Jan focuses on infrastructure for open-source AI that can be run privately, offline, and on-device.
- This counter-positions us against the strategies of large industry players:
| Jan's Counterpositioning | Current Trends |
| ------------------------- | ------------------------------------------ |
| Open Source (e.g. Llama2) | Closed-source (e.g. ChatGPT, Bard, Claude) |
| Personal AI | AI monopolies owned by Big Tech |
| Pay for Privacy | Monetize your users |
| Edge Computing | Cloud |
### Problem
- It is difficult to run open-source AI models locally without coding knowledge
- There is little established hardware know-how for building machines that run AI locally
### Ideal Customer
- Enthusiasts who want to run open-source AI models locally, either as an AI companion or for productivity
- Businesses that want to run AI locally and cannot use ChatGPT due to data-security concerns
### Wedge Product
#### Desktop App
- Desktop app that runs open-source AI models performantly on Windows, Mac, and Linux
- Automatically detects hardware and intelligently selects the inference engine and model size
- Goal: "It just works" experience for consumers
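The hardware-sensing behavior described above could be sketched roughly as follows. This is an illustrative decision table, not Jan's actual logic; the function name, model names, and memory thresholds are all assumptions (real detection would query platform APIs for VRAM and system RAM):

```python
# Sketch: choose an inference engine and model size from detected hardware.
# Thresholds and model names are illustrative assumptions, not Jan's real logic.

def pick_configuration(vram_gb: float, ram_gb: float) -> dict:
    """Choose engine, model size, and quantization from available memory."""
    if vram_gb >= 24:
        # Enough VRAM for a large quantized model on GPU.
        return {"engine": "gpu", "model": "llama-2-70b", "quant": "q4"}
    if vram_gb >= 8:
        return {"engine": "gpu", "model": "llama-2-13b", "quant": "q4"}
    if ram_gb >= 16:
        # Fall back to CPU inference with a quantized GGUF model.
        return {"engine": "cpu", "model": "llama-2-7b", "quant": "q4"}
    # Low-memory machines get a more aggressive quantization.
    return {"engine": "cpu", "model": "llama-2-7b", "quant": "q2"}

print(pick_configuration(vram_gb=4, ram_gb=32))
```

The "it just works" goal amounts to running a table like this at startup so the user never chooses an engine or quantization by hand.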
#### Hardware Guide
- A guide to building consumer-grade hardware setups for running AI locally
- e.g. [Multi-GPU setups](https://pay.reddit.com/r/LocalLLaMA/comments/16lxt6a/case_for_dual_4090s/) for running Llama2 70b
- e.g. RAM-heavy machines for CPU inference using quantized GGUF models in Llama.cpp
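A rough rule of thumb when sizing such RAM-heavy machines: a quantized model's weights occupy roughly parameter count × bits per weight / 8 bytes, plus overhead for the KV cache and runtime. A back-of-the-envelope sketch (the 20% overhead factor is an assumption, not a measured figure):

```python
# Back-of-the-envelope RAM estimate for a quantized model.
# The 1.2x overhead factor (KV cache, runtime) is an assumption.

def approx_model_ram_gb(params_b: float, bits_per_weight: float,
                        overhead: float = 1.2) -> float:
    """Approximate RAM (GB) to hold a quantized model plus overhead."""
    weight_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weight_gb * overhead

# Llama-2 70B at 4-bit quantization:
print(round(approx_model_ram_gb(70, 4), 1))  # → 42.0
```

This is why 70B-class models at 4-bit land in the range where dual 24 GB GPUs or a 64 GB+ RAM machine become relevant.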
## 3-Month Hypothesis
### "Build your own AI"
- Jan's wedge "Desktop App" will grow into a full OS for building and running AI
- Expose APIs that allow users to "build AIs"
- Targeted at semi-technical users (NOT low-code)
- Jan enables users to build their own human-like "AIs" or "Robots"
- A simpler, less-opinionated framework than Agents
- A Robot is a ["Mixture of Experts"](https://medium.com/@seanbetts/peering-inside-gpt-4-understanding-its-mixture-of-experts-moe-a)
- Can learn new skills (downloads models from HuggingFace, calls external AI APIs)
- Can "think" (LLMs)
- Can talk and "hear" (Whisper, Bark/Tortoise)
- Can "learn" new information (RAG, embeddings)
- Can learn processes (Selenium)
- Can "see" and understand images, PDFs (Tesseract)
- Can imagine and dream (StableDiffusion)
- Can talk to all of your services (ActivePieces)
- Can send messages (Telegram, Email)
- Can pay (crypto)
- Enabled by key primitives in initial Desktop App
- "JIT" model loading/unloading from VRAM
- Plugin-based architecture similar to VSCode or Obsidian
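One way to picture the "JIT" loading primitive above: keep loaded models within a VRAM budget and evict the least-recently-used model when a new one must load. A minimal sketch under those assumptions (the class name, model names, and sizes are hypothetical, not Jan's API):

```python
# Sketch of "JIT" model loading: evict least-recently-used models
# to stay within a VRAM budget. Names and sizes are hypothetical.
from collections import OrderedDict

class JitModelCache:
    """Track loaded models and evict LRU entries to fit a VRAM budget (GB)."""

    def __init__(self, budget_gb: float):
        self.budget_gb = budget_gb
        self.loaded: "OrderedDict[str, float]" = OrderedDict()  # name -> size_gb

    def ensure_loaded(self, name: str, size_gb: float) -> None:
        if name in self.loaded:
            self.loaded.move_to_end(name)  # mark as most recently used
            return
        # Unload LRU models until the new one fits in the budget.
        while self.loaded and sum(self.loaded.values()) + size_gb > self.budget_gb:
            evicted, _ = self.loaded.popitem(last=False)
            print(f"unloading {evicted}")
        self.loaded[name] = size_gb

cache = JitModelCache(budget_gb=24)
cache.ensure_loaded("llama-2-13b", 8)
cache.ensure_loaded("whisper-large", 4)
cache.ensure_loaded("stable-diffusion", 16)  # evicts llama-2-13b
print(list(cache.loaded))  # → ['whisper-large', 'stable-diffusion']
```

Combined with a plugin-based architecture (as in VSCode or Obsidian), each skill can declare the model it needs and let the runtime swap models in and out transparently.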
### Hardware Peripherals for Local AI
- Local AI will continue to be hardware-driven, as inference is FLOP-heavy
- Computation payloads are shifting to GPU/NPU, with the CPU becoming commoditized
- Jan will build a hardware division
- We are based in Asia and bilingual in Chinese, giving us a comparative advantage in doing R&D on solutions for OEMs to manufacture
- Strong pools of talent in Taiwan (incumbent), Penang (challenger), Hanoi (up-and-coming)
- Possible hardware hypotheses to test
- e.g. Reliable PCIe4.0 x16 risers
- e.g. Portable chassis that support 2x 3090s with cooling
- e.g. Motherboards with high bus speeds between CPU-RAM and GPU-GPU (vs. NVLink, SLI)
- System-on-Chip with highly integrated RAM and CPU, ala Apple Silicon (for Linux and Windows)
## 10-Year Vision
### Humans and Thinking Machines
![](https://hackmd.io/_uploads/HyBIP6ZeT.png)
_Cortana in Halo_
- We are building towards a 10-year horizon where future robots augment humans and organizations
- C3PO from Star Wars
- Cortana from Halo
- We will explore different form factors
- Cybernetic: augmented reality, exoskeletons
- Robotic: C3PO-like robots
- This will require us to build up hardware and software expertise
- Achieving the highest-quality reasoning on hardware-constrained edge computers
- Build know-how for chip miniaturization, FPGAs and ASICs
- Align with r/localllama, r/self-hosted, r/homelab, r/sffpc communities
- We will need to take an open-source approach and embrace community extensions and plugins
- Ecosystems vs. Products