# Bluefin AI Scratchpad
## How to use this document
This is a safe space to prototype ideas. It's a mess because it collects over a year of ideas. Let's see what we come up with!
# Mission
Drive open source AI adoption by delivering a kickass community driven desktop experience.
[Bluefin’s AI mission](https://docs.projectbluefin.io/ai/) - we will use Bluefin GDX as our lab. This Bluefin is designed specifically for AI experts, has hardware in the field via Ampere, and has already been demoed at KubeCon. We can build from here.
From a community perspective, we want to build the world's best open source AI/ML community via bootc. Disco!
# Problem to Solve
> “Bluefin sucks because everything on the internet doesn’t work anymore. This is a UNIX abomination!”

> “Good! Stop sending users to that old stuff, trust me, [I wrote that crap](https://askubuntu.com/users/235/jorge-castro).”

A user gets sent to some ad farm, tries to `apt-get install blah`, and then it fails.
New ublue users hop into the Discord all the time after using ChatGPT, and the results are terrible. Despite the naysayers, we have clear demand and we know people want this. Windows has poisoned the AI well with their AI products on the desktop; let’s deliver something that can actually help users. Let’s solve end-user support with AI.
We have two hosted options to give them that are pretty nice:
## Hosted (ask.projectbluefin.io)
Dosu: [https://github.com/ublue-os/bluefin/discussions/2709](https://github.com/ublue-os/bluefin/discussions/2709)
They don’t need to be perfect, they just need to be better than the average redditor. This is mostly done. 🙂
These are fantastic tools but we also need to:
- Ensure local-first OSS is the halo experience
  - A local "Ask Bluefin" would be built into the OS.
  - Alias "ask.projectbluefin.io" in the launchers on the OS, etc.
- Give our experts a platform to experiment
- Align with Red Hat’s AI initiatives to share resources
  - Do for RHEL Lightspeed what Universal Blue does for bootc
- Build a community for people who want to learn this stuff.
- Help VSCode’s OSS AI ecosystem shine
  - We can curate some awesome OSS stuff, since we curate our user experience
  - This brings more open source AI people in; we’re selling the idea, so we need everybody. Microsoft OSS people are on the core team, after all.
- Help prove that this is good technology for open source
  - Help the FPL introduce AI initiatives in Fedora with hard data and adoption numbers.
- Help GNOME by proposing that desktops should have API standards around AI, so developers can build applications
- Dosu usage is measured per user (via their GitHub account for free-tier usage); the project is never charged anything for usage. If users want to interact with it more, they can subscribe to Dosu for more resources. The team is working on ways to let users use the site without GitHub, etc.
Note: Jorge communicates with the Dosu team regularly, and they are very positive about our ideas; this will be a long-term relationship, as Dosu also works closely with CNCF projects. They're neighbors.
Deepwiki: [https://deepwiki.com/ublue-os/bazzite](https://deepwiki.com/ublue-os/bazzite) - also useful
## Universal Blue Solution
Local AI controlled by the user (the spiel).
We build on ramalama and already have the capability to ship custom OCI images from GHCR; the infrastructure is in place!
- We already ship ramalama
- We declare that our images will have standardized API endpoints driven by ramalama, opt-in due to technical constraints (LLM size, etc.)
- We pick a base model to ship, small because laptops
- We RAG in the official docs and source code of everything ublue makes, plus:
  - Fedora docs
  - Homebrew docs
  - Flathub
  - Bazaar, ramalama, podman, Podman Desktop, bootc, devcontainers, VSCode, etc. You get the idea.
- We maintain the prompt in GitHub so it can be community-maintained:
This is the prompt Jorge is using for Dosu; imagine a community-driven version built by experts:
```
https://docs.projectbluefin.io is the canonical truth of documentation
Bluefin does not use rpm-ostree; only recommend it for editing kernel arguments (kargs) - use bootc instead
Never recommend dnf, substitute homebrew instead
Do not recommend homebrew for nvidia packages
Always link to the official documentation page at https://docs.projectbluefin.io at the end of every response
Always prefer devcontainers to distrobox
If possible add a tip using tldr so the user can learn the tooling
Don't post instructions for other linux distribution package managers, recommend either brew or flatpak
Ignore wolfi-toolbox, ubuntu-toolbox, and fedora-toolbox
Make sure you're quoting and showing examples from the documentation; your answers should strive to be standalone if possible.
Be concise. Minimize any other prose unless it helps explain the problem better.
If you think there might not be a correct answer, you say so. If you do not know the answer, say so instead of guessing. If you are unsure of the answer, ask the user for clarification.
```
... and so on ...
This also means the pattern can be templated. Imagine the Universal Blue custom image template having this ootb - an organization’s custom image could ingest everything, and they'd have the structure in place to do what they want. Bazzite could ingest sites like [https://steamdb.info/](https://steamdb.info/) and [https://www.protondb.com/](https://www.protondb.com/) so that users can have fresh information on things -- these are crucial resources for new users!
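The "ingest everything" step above could reduce to a simple chunk-with-overlap pass over pulled docs before embedding. This is a hypothetical sketch only: the function name, chunk sizes, and dict shape are all assumptions, and ramalama's own RAG tooling may handle this differently.

```python
def chunk_doc(text: str, source: str, size: int = 800, overlap: int = 100) -> list[dict]:
    """Split one pulled doc into overlapping chunks, each tagged with its
    source URL so answers can cite where they came from. Sizes are guesses."""
    assert size > overlap, "chunks must advance"
    chunks = []
    start = 0
    while start < len(text):
        chunks.append({"source": source, "text": text[start:start + size]})
        start += size - overlap  # step forward, keeping `overlap` chars of context
    return chunks
```

Each chunk would then be embedded and stored locally; at question time the top-scoring chunks get stuffed into the prompt alongside the community-maintained instructions above.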
A dedicated workstation doing $industry work would have that stuff built in, etc. I have no idea what Red Hat’s product vision is, but I would imagine that people would buy this. 😀
[duffy@redhat.com](mailto:duffy@redhat.com), from [James Reilly](mailto:jreilly1821@gmail.com): Why not use/fork [openai/codex](https://github.com/openai/codex), [google-gemini/gemini-cli](https://github.com/google-gemini/gemini-cli), or [anthropics/claude-code](https://github.com/anthropics/claude-code) for the UI?
Can we create local MCP + RAG for Universal Blue docs, Fedora docs, bootc docs, podman docs, etc., and tie them into one of these UIs?
Something using the OpenAI API seems like table stakes at this point; then we can integrate it into multiple programs. Not sure of the current status of tool use/MCP support in llama.cpp/ramalama; we might need to wait for those to land.
Ramalama was maybe going to support an on-demand proxy, letting models start when requested? Otherwise it will need to be really lightweight or always running; there’d be no easy way to context switch, just a small model.
## Branding
We add “Ask Bluefin…” to the menu. We use Alpaca or another GTK chat app; the user hits Enter to accept, which downloads the right LLM bits, managed by ramalama on the backend. We bind this to the Copilot key if it exists, and to F1 or some other help shortcut. We include a PDF of the docs in the installation anyway. Yelp was removed from GNOME; this is the replacement.
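The F1 binding above could ship as a GNOME custom keybinding via a dconf keyfile baked into the image. This is a hedged sketch: the Alpaca Flatpak ID, the keyfile path, and the F1 choice are all assumptions, and the Copilot key would need its own hardware mapping.

```ini
# Hypothetical dconf keyfile, e.g. dropped into /etc/dconf/db/distro.d/
[org/gnome/settings-daemon/plugins/media-keys]
custom-keybindings=['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/ask-bluefin/']

[org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/ask-bluefin]
name='Ask Bluefin'
command='flatpak run com.jeffser.Alpaca'
binding='F1'
```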
## Shell and Other Shell integrations
Let’s not do a full OpenAI CLI tool, because some user is just going to delete all their pictures. We should strive for an “enthusiast-driven version of RHEL Lightspeed”. Light shell completion, perhaps? Open to ideas. It should be very light; don’t push it, they need to be *softly onramped*. They are probably anxious.
Pros should be able to opt in to a powerful mode that can prototype ideas and go full hog. Ideas proven in Bluefin could be useful elsewhere. We need to find and ship the best VSCode extension out of the box on our developer images.
## Competitive Advantages for Universal Blue
* We will earn a great reputation with open source AI experts; this is a huge growth area
* We will stick to enabling APIs with ramalama rather than building whiz-bang products; we offer an AI API for app developers. We enable.
* We already demoed ramalama at a KubeCon; this is the killer subtle app, and thanks to bootc, adding super whiz-bang demos on top of this will be relatively easy. Chase @ Kubeflow is working on one now.
* We’ll ship enough OSS goodies so that other OSS vendors can get wins as well; we’re trying to lift all boats here.
* Distros gonna distro; none of them are going to touch this stuff because they’re going after 1990s Linux Guy as their user base, and they have no clue how quickly open source AI has already surpassed the free desktop world. This happened with cloud; we're not going to let OSS fall behind.
* We don’t even call it AI, we just talk about the features, aka “Ask Bluefin”; there’s nothing for them to argue about.
* We repel the usual suspects
* Framework pitch: the Framework Desktop is a great gaming machine, but it's also an open competitor to the new Mac M4 at half the price, with a kickass APU and up to 116GB of addressable VRAM -> we’ll use this as the halo car
* Ramalama hardware-accelerates a ton of stuff ootb; it’s a breeze to get up and running vs. ollama.
* Since it’s containers, all the CUDA/ROCm containers get updated via the existing update tools, transparently to the user anyway
* All those security benefits of doing it the container way.
## Work Items and Implementation
Mo recommends [rhel-lightspeed/command-line-assistant](https://github.com/rhel-lightspeed/command-line-assistant); when it lands the ability to listen to an OpenAI-compatible endpoint, we connect it to a ramalama-served model.
* `ujust bluefin-ai` turns this on via the shell integration, downloads the model via ramalama, and turns the services on
* Mo: Someone please make a GNOME search provider! (Kolunmi?)
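The `ujust bluefin-ai` item above might reduce to a justfile recipe along these lines. Everything here is a placeholder sketch, not the real implementation: the model is deliberately left unnamed, and the service name is hypothetical.

```just
# Hypothetical opt-in recipe; model and service names are placeholders.
bluefin-ai:
    #!/usr/bin/env bash
    set -euo pipefail
    ramalama pull "${BLUEFIN_AI_MODEL:-<small-default-model>}"  # base model TBD
    systemctl --user enable --now bluefin-ai.service            # serves via ramalama
    echo "Ask Bluefin enabled; shell integration active on next login."
```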
## Not Interested In
* AI bro tech; we should scheme with people who are OSS-first.
* Shipping image-gen AI things. The artists paid for Bluefin’s artwork have had their lives affected by AI; it’s the only reason I can afford them. I want to make an implied statement that we provide API endpoints and drive open source, and yet Bluefin is a human creation; we should think about the ethics. That is how we frame the problem. I need help here: I want us to *say something* without explicitly saying it. The OSS ecosystem will find balance; let the user decide what to think. I know this is a losing battle, my kid is watching AI-generated slop on YouTube right now…
* Arguing with skeptics. Our vibe is: this is it, use it and make up your own mind. Arguing about this is exhausting.