# Jan: Strategy (Feb 2024)

## Summary

We're going to make the following changes:

**Product Direction**
- Jan is a privacy-first desktop Copilot that you can customize
- Help AI enthusiasts create, customize and use Copilots in natural UX flows
- Focus on personal everyday use
- Integrate both local and Cloud AIs

**We will focus on**
- Building a beautifully-designed, award-winning product
- Building a tool we use every day (as AI enthusiasts)
- Delighting and loving our community and users
- Researching "small AI" that is practical for everyday problems

**We will not focus on**
- Enterprise Sales (let it come naturally)
- "Big" AI

## Our Product Direction

![image](https://hackmd.io/_uploads/SytDflqsT.png)
*Inspiration: Grammarly*

### Be an AI Expert's favourite Personal AI tool

We are going to focus on enabling our existing user community to:
- Create and customize their own AI (i.e. "Copilots")
- Use AI in natural flows to solve everyday problems

This means focusing on:
- Copilot builder to customize AI (old name: Assistant Framework)
- Jan Desktop: an [Alfred-like launcher](https://www.alfredapp.com/) to summon Copilots
- Jan Home Server: "self-host" your customized AI
- Jan Mobile: connect to Home Server or Desktop, with a [Keyboard Accessory View](https://reactnative.dev/docs/inputaccessoryview) to use AI in natural flows
- Jan Chrome Extension: automatically use AI on websites

In short, we will close the gap between a developer and an AI enthusiast when it comes to creating and building something.
### Where we are NOT going (for now): Enterprise AI

We have spent 20 days investigating the Enterprise:
- Enterprise AI is an incredibly noisy market full of talkers and grifters
- Enterprise AI does not seem like a game that we can win, as it rewards bullshit
- A partner network will take time to identify, educate, and build trust with
- There is a lot of stupid work involved in closing an Enterprise deal
- There is a lot of low-leverage work involved in educating Partners and Customers
- That said, we have been lucky to find some Enterprise BD people, and will work with them slowly to open up the market

Our highest-leverage strategy is to focus on dominating the personal AI category, and let our community bring us into the enterprise naturally. This is a long-term, patient strategy that other companies have taken:
- [Bitwarden](https://bitwarden.com/products/business/)
- [Notion](https://www.notion.so/teams)
- [Raycast](https://www.raycast.com/teams)

In every company, there is somebody who will be the "AI guy". That person is our target market. If we are every enterprise AI guy's favourite personal AI tool, we're going to be just fine.

## Our Objectives

### Objective 1: Beautifully Designed, Award-winning Product

![image](https://hackmd.io/_uploads/Sy8yBgcsa.png)
*Things for Mac*

We are going to double down on design and UX, with the goal of being an Editor's Choice app. We will need to deeply understand AI to build simple human-computer interfaces that abstract technical complexity, are bug-free, and just "work".

Jan's early culture needs to be a craftsman culture that rewards "build quality". I believe that only the best-crafted products that users love and invest time in customizing will survive this AI bubble.
Inspirations:
- [Things](https://culturedcode.com/things/)
- [Alfred](https://www.alfredapp.com/)

**Goals**
- [Apple Editor's Choice Award](https://developer.apple.com/app-store/app-store-awards-2023/)
- [Windows Store Award](https://blogs.windows.com/windowsdeveloper/2023/05/23/announcing-the-microsoft-store-app-awards-2023-winners/)
- [Product Hunt Golden Kitty Awards](https://www.producthunt.com/golden-kitty-awards)

### Objective 2: Build a tool we use every day

![image](https://hackmd.io/_uploads/S1AHXxqs6.png)
*Apple's Spotlight Search*

We are going to focus on building a tool that we use ourselves every day. Product-wise, we are going to solve a human-machine interaction problem:
- We need to bridge the UX gap between powerful AI APIs and day-to-day life
- How do I connect these powerful AIs to my everyday tools and systems?
- What are the technical challenges we need to solve to make it seamless?

We believe that AI is still in its early days and most people don't really use it. Solving how we apply it in everyday life is a difficult design and technology problem.
- ChatGPT's retention rate is only 14%
- Most businesses are intrigued but unimpressed by AI

**Goals**
- 100k DAU with 20% 30-day retention rate by end-2024
- That's it

### Objective 3: Delight and love our users = our community

![image](https://hackmd.io/_uploads/Hkr-dg5ia.png)
*Our Discord Stats*

Jan now has 3,300 people in our Discord. This is our community and our tribe: AI enthusiasts, explorers, tinkerers. We have the opportunity to build tools for ourselves and them.

We need to focus on delighting and loving our users with our product, community support, and community. If we succeed in building a beautiful product, Jan will naturally expand to the Enterprise as our community brings our tool into the workplace.
**Goals**
- 30,000 users in our Discord

### Objective 4: Enable "Small" AI for personal use-cases

![image](https://hackmd.io/_uploads/B1BE9gqop.png)
*Jan's Copilot Framework*

Moving forward, Foundry will focus on pushing the envelope on "smol" models behind a coherent roadmap of Personal AI, instead of running scattered, small-scale experiments. We are going to hyper-focus on personal use-cases: Personal Finance, Personal Health, Personal Productivity, and be a problem-driven research organization.

By the end of 2024, our goal is to develop a leading "small" model:
- Small enough to run at acceptable standards on 8 GB of RAM
- Good at forming calls to APIs, Data Connectors, and OS-level APIs
- Good at interpreting content responses from APIs with guided output
- Able to interpret voice and text (or route to various models)
- Handles personal use-cases well: Personal Assistant, Personal Finance, Personal Health
- Can be a fine-tune of an existing model, or we can embark on a foundational model

We will also need to develop tooling around "local-first" AI:
- sqlite-like Vector DBs that don't suck
- Routers that work on top of multiple "expert" small models
- Integration with the growing open source AI stack: e.g. OpenInterpreter

We will need to build inference tooling that allows "local-first" AI to be used in daily life, concurrently with other programs and applications on the computer:
- An inference server that only loads and unloads models when absolutely needed
- A user should be able to go on a Zoom call and run an LLM at the same time
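To make the "sqlite-like Vector DBs" idea concrete, here is a minimal, hedged sketch of what that single-file, zero-dependency shape could look like. It is not Jan's actual design: embeddings are stored as JSON text in a plain SQLite table and searched by brute-force cosine similarity, whereas real tooling would want a proper on-disk ANN index. The class and column names are illustrative only.

```python
import json
import math
import sqlite3


def cosine(a, b):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class LocalVectorDB:
    """Illustrative local-first vector store: one SQLite file, no server."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS docs "
            "(id INTEGER PRIMARY KEY, text TEXT, emb TEXT)"
        )

    def add(self, text, embedding):
        # Embeddings are serialized as JSON; fine for a sketch, slow at scale.
        self.conn.execute(
            "INSERT INTO docs (text, emb) VALUES (?, ?)",
            (text, json.dumps(embedding)),
        )
        self.conn.commit()

    def search(self, query_emb, k=3):
        # Brute-force scan; a real implementation would use an ANN index.
        rows = self.conn.execute("SELECT text, emb FROM docs").fetchall()
        scored = [(cosine(query_emb, json.loads(e)), t) for t, e in rows]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [t for _, t in scored[:k]]
```

The appeal of this shape is the same as SQLite's: no daemon, one portable file, and it works offline alongside everything else on the machine.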
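The "loads and unloads models when absolutely needed" requirement can be sketched as a small lazy-loading manager: the model is loaded on the first request and dropped after an idle timeout so RAM is free for other applications (the Zoom-call scenario). This is an assumption about one possible design, not Jan's implementation; `load_fn` is a hypothetical stand-in for whatever actually loads model weights.

```python
import threading
import time


class OnDemandModelManager:
    """Keeps a model in memory only while it is actively being used."""

    def __init__(self, load_fn, idle_timeout=30.0):
        self._load_fn = load_fn          # callable that loads and returns a model
        self._idle_timeout = idle_timeout  # seconds of inactivity before unload
        self._model = None
        self._last_used = 0.0
        self._lock = threading.Lock()

    def infer(self, prompt):
        with self._lock:
            if self._model is None:
                # Lazy-load: pay the load cost only when a request arrives.
                self._model = self._load_fn()
            self._last_used = time.monotonic()
            return self._model(prompt)

    def maybe_unload(self):
        """Call periodically (e.g. from a timer thread); returns True if unloaded."""
        with self._lock:
            idle = time.monotonic() - self._last_used
            if self._model is not None and idle > self._idle_timeout:
                self._model = None  # drop the reference so memory can be reclaimed
                return True
            return False
```

A background thread calling `maybe_unload()` every few seconds gives the desired behaviour: the LLM occupies memory during a burst of use, then yields it back to the rest of the system.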