# embedded app deploy

## boundary of portability

* [Oasis – a small, statically-linked Linux system](https://news.ycombinator.com/item?id=39142748)
* Half a century of debate: shared vs. static linking, package managers, and so on. TL;DR: this can be seen as an optimization problem. Systems at different scales have different loss functions over CPU speed, memory, storage, etc., so they naturally reach different conclusions.

For an optimization problem, the best result only exists when the environment is well defined. That implies only a meta build system could produce the exact case-by-case answer. However, if there is a heuristic that is usually close enough to the best answer, we may adopt it --> nowadays the heuristic is to statically link as much as possible.

### FHS

> move this discussion to another linux file

The Filesystem Hierarchy Standard (FHS) puts all packages' binaries together, e.g. `/bin`, `/usr/bin`, `/usr/local/bin`, `~/.local/bin`, on the basic assumption that the system has a pkg manager. Keeping them together helps the pkg manager find files, and the pkg manager can trace each pkg's manifest well.

If you install something yourself, `/opt` is the only convention, but you have to handle every directory yourself, e.g. add `/opt/xxx/bin` to `$PATH` (typically from a tarball).

### shared link

rpath hard-codes paths, which makes it awkward to install into a user's home. Even for root system paths, everything still has to be moved to the same place.

[Can I change 'rpath' in an already compiled binary?](https://stackoverflow.com/questions/13769141/can-i-change-rpath-in-an-already-compiled-binary)

### non-portable cause from config file

The config file path is the main reason a program cannot be relocated, and config files are used heavily on Linux. The program needs to know how to find its config file --> the path of the config file. If it is hard-coded at compile time (pre-install), the location obviously cannot change. If it is set post-install by a shell script, the path inside the script is still fixed once installed. Most people turn to a sys pkg such as deb precisely because the deb "framework" enforces a post-install stage; at least paths are not hard-coded at compile time, and the portability is good enough for ordinary installs.

#### new spec to follow

Follow the XDG Base Directory spec: a shared convention lets the app find or create its config by itself, without being told the path.

* [What would the typical Linux user prefer? To use /home for config, or not to?](https://www.reddit.com/r/linux/comments/xfwdgp/what_would_the_typical_linux_user_prefer_to_use/)
* [XDG Basedir directories for Windows [closed]](https://stackoverflow.com/questions/43853548/xdg-basedir-directories-for-windows)

It is good if the app itself handles config well, but the deploy system's job is to consider the worst case.

---

* appimage
  * [How does an appimage persist settings between launches?](https://askubuntu.com/questions/1009888/how-does-an-appimage-persist-settings-between-launches)
  * [Using portable mode](https://docs.appimage.org/user-guide/portable-mode.html)

### glibc version

[Can two C libraries (glibc & musl or) coexist & function properly on an operating system?](https://www.reddit.com/r/linux/comments/8mxnlb/comment/dzrh68r/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)

With a musl single binary, the answer is closer to yes.

### tools required during cross compile

During cross compilation, if the code requires extra tools, those tools must match the host system's architecture, not the target's. Similar issues appear throughout cross compilation, so the build environment needs a clear abstraction for the host/target split.

### offline embedded sys scenario

* Even with a pkg manager, dependency conflicts may only be solvable by downgrading.
* In an offline or compiler-less environment this is essentially unsolvable.
* Shared libs are in decline anyway; the premise that flash is expensive no longer holds.
* Every distro's pkg manager is different, so portability is poor.

### kernel module

Kernel modules, sadly, usually cannot be deployed with self-contained deps.

## goal

* one method fits all (offline embedded sys)
* one and only one (one best method per corresponding problem)

## practice type

Today the best standard is appimage, snap, flatpak: the containerized (overlay filesystems) kind.
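Before reaching for a containerized format, an app can already make itself relocatable. Below is a minimal sketch (bash; the app name `my_app` and file names are hypothetical) of the config lookup order such an app can use: XDG Base Directory first, then a config shipped next to the executable, similar in spirit to AppImage portable mode.

```bash
#!/usr/bin/env bash
# Sketch: relocatable config lookup for a hypothetical app "my_app".
# Order: XDG Base Directory first, then a config shipped next to the
# executable, so the whole tree can live under /opt/my_app or on a USB stick.
set -eu

app=my_app
self_dir="$(cd "$(dirname "$(readlink -f "$0")")" && pwd)"

xdg_cfg="${XDG_CONFIG_HOME:-$HOME/.config}/$app/config.toml"
portable_cfg="$self_dir/../etc/$app/config.toml"

if [ -f "$xdg_cfg" ]; then
    cfg="$xdg_cfg"
elif [ -f "$portable_cfg" ]; then
    cfg="$portable_cfg"
else
    # first run: create the XDG directory and let the app write a default there
    mkdir -p "$(dirname "$xdg_cfg")"
    cfg="$xdg_cfg"
fi

echo "using config: $cfg"
```

AppImage portable mode (linked above) takes the outside-in route instead: according to its docs, a `MyApp.AppImage.home` or `MyApp.AppImage.config` directory next to the AppImage redirects `$HOME` / `$XDG_CONFIG_HOME` into it.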
> Are there apps that do not fit this type?

A container like docker (by porter) works for servers, but not for realtime systems; otherwise fall back to the opt pkg type. Kernel modules must be in binary form, and only the opt pkg type can handle that.

### sys pkg

The essence of a sys pkg is to define a post-install stage and, via shell script, ask the target OS to finish the final config in that stage. ref: [2. Linux制作deb包的方法](https://doc.embedfire.com/linux/rk356x/build_and_deploy/zh/latest/building_image/make_pakage_of_deb/make_pakage_of_deb.html)

> Introducing shell scripts buys flexibility, but inherently introduces uncertainty...

As a second choice, having a .deb is of course nice, and .deb is the only option for kernel modules, or when a system config file is needed. In practice, however, very few projects implement a deb in the first place, and it breaks very easily under cross compilation; insisting on it basically means staying in an emulated build. And, per the same criticism above, dependency collisions remain unsolved. For offline use, [apt-offline](https://gist.github.com/ycku/fa3110bce427cf2c1f1ca4a680d594e8) is worth considering, though whether it has a multi-arch counterpart is unclear.

Dedicated OSes and tools, e.g. Torizon, need $$$.

> NixOS: more research needed.
> Its selling points only really matter in production.
> [Nix 和 NixOS:你们安利方法错了](https://nyk.ma/posts/nix-and-nixos/)
> The peak of Infrastructure as Code.
> Is it overkill?

Actually deb's ceiling is also high: a deb can perfectly well install into /opt, just like a pure install.

### pure install

`make install` finishes the post-install work at the same instant, so paths get baked in (config file & shell script); that is the biggest problem.

> On the other hand, you do get a deterministic result.

`make install` is much more reliable under cross compilation, and combined with `/opt` it covers a much wider range of cases (ubuntu also has `$HOME/opt`).

Most projects with this problem are hard-to-change autoconf or pure-Makefile projects. If they cannot be changed, supporting deb is even less realistic.

> Possibly worth developing an opt tarball manager.

## practice impl

1. autoconf built-in feature

   ```bash
   ./configure \
       --prefix=/opt/my_app \
       --host=arm-linux-gnueabihf
   make install DESTDIR=/path/to/staging_area/rootfs
   ```

   1. This is the original design, but every tool has a different usage, too much to remember; hard to call it a good method.
2. docker cross-compiler container
   1. Currently this actually looks like the most viable option, and it has really worked (see the sketch below).
3. chroot
   1. Needs a pile of setup, e.g. mounts; needs a tool that simplifies it well.
   2. `schroot`, but still complex.
4. Build Systems (buildroot/yocto)
   1. Supporting a 3rd-party lib works like conan: you need a recipe (`BitBake`), e.g. [firejail recipe](https://layers.openembedded.org/layerindex/recipe/313894/), [igh-ethercat](https://layers.openembedded.org/layerindex/recipe/166028/)
   2. The build system has to support the corresponding method, e.g. make, autoconf, ...
   3. It knows the kernel config, arch, kernel version, glibc version, ..., which is effectively copying the whole system. Once a recipe exists, any binary can be produced against that system at any path, including `deb`.
   4. [Meta Conan: A Yocto layer for Conan client](https://github.com/conan-io/meta-conan)
   5. nix probably belongs to this category
      1. [KDAB Using Nix as a Yocto Alternative](https://www.kdab.com/using-nix-as-a-yocto-alternative/)
      2. [Yocto vs NixOS](https://www.reddit.com/r/NixOS/comments/zlo8lr/yocto_vs_nixos/), [Comparision to yocto](https://discourse.nixos.org/t/comparision-to-yocto/47551)
      3. [IgH EtherCAT Master for Linux](https://github.com/NixOS/nixpkgs/blob/nixos-25.05/pkgs/by-name/et/ethercat/package.nix#L35)
      4. [Automatically generating deb packages from nixpkgs](https://discourse.nixos.org/t/automatically-generating-deb-packages-from-nixpkgs/25420)
         1. `nix bundle`, fpm, nfpm
   6. Picking one distro is also this category, except the result is not universally applicable.
5. gnu stow, chezmoi, yadm
   1. ~~I don't really understand these~~
   2. Symlink tools with no fixed usage.
   3. Most commonly used to manage dotfiles.
   4. Can expose things installed under /opt into `usr/bin`, etc.

Tarballs work in CI/CD and docker builds. The original problems are also easier to overcome under an ansible push model. Infrastructure as Code.
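For practice impl item 2, here is a minimal sketch of the docker cross-compiler approach. The image name `my-cross-toolchain:armhf` is hypothetical and is assumed to ship `arm-linux-gnueabihf-gcc`; projects such as dockcross publish ready-made images of this kind.

```bash
# Sketch: build inside a cross-toolchain container (image name is hypothetical).
# Only the toolchain lives in the image; the source tree is bind-mounted,
# so the host stays clean and the build is reproducible from the image tag.
docker run --rm \
    -v "$PWD":/work -w /work \
    -u "$(id -u)":"$(id -g)" \
    my-cross-toolchain:armhf \
    bash -c '
        ./configure --host=arm-linux-gnueabihf --prefix=/opt/my_app &&
        make -j"$(nproc)" &&
        make install DESTDIR=/work/staging/rootfs
    '
```

The staging rootfs can then be tarred up and dropped under `/opt` on the target, which matches the opt pkg type discussed above.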
### suggestion of meta build system

> A forest is better than one big/deep tree.

If the pkgs form a huge dependency graph, then eventually touching one thing inevitably moves a gigantic body, until nothing can move at all, and it becomes hard to keep up to date. Defining dependencies is good, but the graph should be cut at appropriate points, handing the rest over to an inner dependency sub-system, e.g. docker, appimage, or a language's own pkg manager such as uv, conan, cargo, ... A meta build system can reuse these tools, but should not define them all over again; that is the "rewrite everything" mistake. Many sys pkg managers are also written as pure code, e.g. `alpine aports`, `windows scoop`. Rewrite only when something has a fundamental error, not just to repeat the same thing a different way.

In principle, a user needs a PC for the apps' functionality; if every app carries its own complete dependencies and forms its own tree, then no lib ever needs to be installed.

> Even when an executable depends on a sub-executable, the sub-executable can be packed along with it.

In other words, to avoid a deep tree, splitting into small pieces is unavoidable, but once split down to the smallest pieces, there is hardly any need for a pkg manager to resolve dependencies. The single-executable case is of course the exception. A meta build sys is still useful, it just needs more plugins, e.g. docker, uv, ...

## declarative and procedural

docker is closer to procedural; nix/BitBake are closer to declarative. They necessarily overlap and must help each other, translating the declarative layer into the procedural steps underneath.

## Vision

Simply put, if nix fails (cannot cover every build case), it means the complexity a meta build system must absorb to wrap whole sys pkgs/apps may already exceed what a programmer can handle (impractical), or cost > gain (not beneficial). It would also prove that deps-independent apps are the necessary cut point. deps independent != static link; static link is just one method.

> See [Oasis – a small, statically-linked Linux system](https://news.ycombinator.com/item?id=39142748), in particular the people doing static linking on alpine.

The build/deploy idea below is structured around self deps, but within its scope either meta build or self deps would work; it depends on which direction gets researched first.

self deps also faces a combination explosion: 1. pkg type: {appimage, docker, static link bin, flathub, snap, opt only, general distro pkg type}; 2. build tool type: {container cross build, container emulated build, cmake, conan, autotools, ...}. They all differ, and they interweave. The opt pkg type always works, at the cost of a fixed location; the build tool type cannot be controlled. In many cases pkg type and build tool type cannot be swapped independently.

Basically, among pkg types, appimage adds the least on top and imposes the fewest restrictions. An opt pkg can always be converted into an app pkg, and appimage + firejail == flathub/snap.

There is also the question of running as a service: the dep is then usually the system itself (systemd). But not every OS has the same init system. Honestly, I think self deps has to give up here and let a separate tool handle it. In contrast, meta build does not have to deal with any of this, if it works. Put differently, the two emphasize different things: meta build tries to solve it from the sys user/consumer side (assuming the source code cannot be changed); self deps tries to solve it from the developer's side (the consumer is uncontrollable), making the same artifact work on more distros by choosing a best practice.

# deploy idea

> Does any tool work like a pkg manager for managed nodes based on `ansible`?
> I am working with lots of offline single board computers (managed nodes) which only connect to the control node.
> The tool (if it exists) is expected to automatically compile/download executables based on the architecture, kernel version and glibc version of the managed node and then copy them to it.

```mermaid
flowchart
A[gather managed node info]
B[control node create/select suitable build env]
C["control create/download single executable applications(SEA)"]
D[copy to managed node then install]
E[pkg manager of managed node record what installed]
A-->B-->C
A-->C
C-->D-->E
```

At step C, a CI/CD dockerfile may be used (a best practice is needed). To support as many distros as possible with the same libs, step D should usually have its own pkg management method, or a union of appimage, flathub, snap, ...
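A minimal sketch of step A in the flowchart, run on (or against) a managed node to collect the facts the build step needs. `getconf GNU_LIBC_VERSION` is a glibc-specific assumption and will fail on musl targets.

```bash
#!/usr/bin/env bash
# Sketch: collect the target facts that steps B/C need to pick a build env.
set -eu

arch="$(uname -m)"                              # e.g. aarch64, armv7l, x86_64
kernel="$(uname -r)"                            # e.g. 5.15.0-xx
libc="$(getconf GNU_LIBC_VERSION 2>/dev/null \
        || echo 'non-glibc (musl?)')"           # glibc only; musl has no stable query

printf '%s\n' "arch=$arch" "kernel=$kernel" "libc=$libc"
```

ansible's `setup` module gathers the same information as facts (e.g. `ansible_architecture`, `ansible_kernel`), which is what the answers below build on.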
## claude sonnet 4

> Looking at your question about package management tools that work with Ansible for offline single board computers, I can see you're looking for a solution that can:
>
> 1. Gather information about managed nodes (architecture, kernel, glibc versions)
> 2. Build or download appropriate executables on the control node
> 3. Deploy these to offline managed nodes
> 4. Track installations on the managed nodes
>
> While there isn't a single tool that does exactly what you've described, there are several approaches and tools that can be combined to achieve this workflow:
>
> ## Existing Solutions
>
> ### 1. **Ansible + Custom Modules**
> You can create custom Ansible modules that:
> - Gather system information (`ansible_facts`)
> - Build/cross-compile applications based on target specs
> - Transfer and install binaries
> - Maintain a local package database
>
> ### 2. **Nix with deploy-rs**
> - Nix can build packages for different architectures
> - `deploy-rs` can deploy to remote systems
> - Works well for reproducible builds across different targets
>
> ### 3. **Buildroot + Ansible**
> - Buildroot creates custom Linux distributions with cross-compiled packages
> - Ansible can deploy the built packages to target systems
>
> ### 4. **Yocto Project + Ansible**
> - Similar to Buildroot but more enterprise-focused
> - Can generate packages for specific hardware configurations
>
> ### 5. **Custom Solution with Ansible**
> Here's a conceptual approach you could implement:
>
> ```yaml
> # ansible playbook structure
> - name: Gather target info
>   setup:
>   register: target_facts
>
> - name: Build/select appropriate binary
>   delegate_to: localhost
>   shell: |
>     ./build_for_target.sh {{ target_facts.ansible_architecture }}
>     {{ target_facts.ansible_kernel }}
>     {{ target_facts.ansible_libc_version }}
>
> - name: Deploy binary
>   copy:
>     src: "/tmp/built/{{ app_name }}-{{ target_facts.ansible_architecture }}"
>     dest: "/usr/local/bin/{{ app_name }}"
>     mode: '0755'
>
> - name: Update local package registry
>   lineinfile:
>     path: /var/lib/local-packages/installed
>     line: "{{ app_name }}-{{ version }}-{{ target_facts.ansible_architecture }}"
> ```
>
> ## Closest Existing Tools
>
> 1. **Habitat** - Can build and deploy applications but may be overkill
> 2. **Docker + Ansible** - Use multi-arch containers, extract binaries
> 3. **Conan** - C++ package manager with cross-compilation support
> 4. **Bazel** - Can cross-compile for different targets
>
> ## Recommendation
>
> For your specific use case with offline SBCs, I'd suggest creating a custom Ansible-based solution that:
>
> 1. Uses Ansible facts to gather target information
> 2. Maintains a build farm or cross-compilation environment on the control node
> 3. Uses a simple file-based package registry on managed nodes
> 4. Leverages existing cross-compilation tools (like buildroot, crosstool-ng, or docker buildx)
>
> Would you like me to help you design or implement any specific part of this solution? I can help create Ansible playbooks, build scripts, or package management utilities based on your specific requirements.

## gpt-4.1

> There isn’t a single tool that does exactly what you describe out-of-the-box, but you can combine existing tools and workflows to achieve this. Here’s a breakdown:
>
> ### Existing Tools & Approaches
>
> 1. **Ansible + Custom Roles/Modules**
>    - Ansible can gather facts (`setup` module) about managed nodes (arch, kernel, glibc).
>    - You can write custom roles to select/build/download the correct binaries.
>    - Use `copy` or `synchronize` modules to transfer executables.
>    - Maintain a local record (e.g., a YAML/JSON file) as a simple package database.
>
> 2. **Nix or Guix**
>    - Both are functional package managers that can cross-compile and produce portable binaries.
>    - Nix can build static or semi-static binaries for different architectures.
>    - You can use Ansible to deploy these binaries.
>
> 3. **Bazel or Buildroot**
>    - For more complex builds, Bazel or Buildroot can generate binaries for specific targets.
>    - Use Ansible to deploy.
>
> 4. **Custom Ansible Plugin**
>    - You could write a plugin that automates the process: gather facts, build/download, deploy, and record.
>
> ### Example Workflow
>
> 1. **Gather Node Info**
>    - Use Ansible’s `setup` module to get architecture, kernel, glibc.
>
> 2. **Build/Select Binary**
>    - On the control node, use a script or tool (Nix, Buildroot, custom Makefile) to build or select the right binary.
>
> 3. **Deploy**
>    - Use Ansible’s `copy` or `synchronize` to transfer the binary.
>
> 4. **Record**
>    - Maintain a simple package database on the managed node (e.g., `/var/lib/my-pkgmgr/installed.json`).
>
> ### No Out-of-the-Box Solution
>
> There is **no single tool** that does all of this automatically as a “package manager for managed nodes via Ansible.”
> However, **Ansible + Nix** or **custom Ansible roles** are the closest practical solutions.
>
> ---
>
> **Summary Table:**
>
> | Tool          | Cross-compile | Deploy | Record Installs | Offline Support |
> |---------------|---------------|--------|-----------------|-----------------|
> | Ansible       | No            | Yes    | Custom          | Yes             |
> | Nix/Guix      | Yes           | Yes    | Yes             | Yes             |
> | Buildroot     | Yes           | No     | No              | Yes             |
> | Custom Script | Yes           | Yes    | Custom          | Yes             |
>
> ---
>
> **Recommendation:**
> Use **Ansible** to orchestrate, and **Nix** or a custom build system to produce binaries. Write a simple Ansible role to handle deployment and record-keeping.
>
> If you want a sample Ansible role or workflow, let me know!

## summary direction

ansible is just a remote, OS-independent advanced shell script; it has no rollback or similar features, it is one-shot, same as a dockerfile. Or just treat it as a set of python libs.

1. define the combinational/orchestrated managed node env
2. build this env on the control node (doppelganger)
3. sync the doppelganger and the managed node (mechanism implementation required)

ansible is only suited to implementing the mechanism, or calling the process above; it is not suited to direct use. rsync, scp?

1. First try using nix to create a pkg
2. Turn the pkg into a standalone deb pkg in a pool
3. ansible installs all pkgs in the pool

## stage 1

Actually a nix-created OS takes a long time and a lot of building, and its configuration goes all the way down to drivers; it is probably not that universal. From a dependency point of view, should every program depend on the system, kernel, ...? I don't think so. If everything really has to depend on the sys, of course it takes long.

> For the ansible part, assume for now that it is already installed.

## log 7/14

The igh kernel module seems to have run into trouble: the pkg apparently only provides userspace, so ec_master has to be compiled by ourselves. If nix does not work, consider yocto or similar. Kernel modules are the things that truly need a meta build sys.
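As a reference point for the 7/14 entry, here is a minimal, hedged sketch of cross-building an out-of-tree kernel module with kbuild. `KDIR` and `my_module/` are hypothetical; `KDIR` must point at kernel sources/headers configured exactly like the managed node's kernel (same version and .config). Note that IgH EtherCAT ships its own build system, so this only illustrates the generic kbuild pattern, not its exact procedure.

```bash
# Sketch: generic out-of-tree kernel module cross-build (kbuild).
# KDIR must hold the *target's* configured kernel source/headers;
# my_module/ is a hypothetical module directory with its own Kbuild/Makefile.
KDIR=/path/to/target-kernel-build    # hypothetical path; must match the target kernel
make -C "$KDIR" \
     M="$PWD/my_module" \
     ARCH=arm64 \
     CROSS_COMPILE=aarch64-linux-gnu- \
     modules

# The resulting .ko is then copied to the managed node and loaded there, e.g.:
#   scp my_module/my_module.ko target:/tmp/ && ssh target insmod /tmp/my_module.ko
```

This is exactly why a module build has to know the kernel config, arch and kernel version, which is the meta-build-system territory (yocto/nix) discussed above.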