# ceph stabilisation
## objectives
* going back to mainstream deployment
  * either systemd
  * or kubernetes -> let's do this
* being on latest version
* stable monitors
## hardware requirements
* kubernetes requires 3 stateful machines
  * We'll use APUs for that
  * They will store the etcd data
* We might even need 6 (?)
  * For storing the ceph monitor data
  * Maybe they can be the same machines that host the etcd cluster
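One way to realise the "stateful" APU role could be labels and taints, so that only etcd and ceph monitor workloads land on those machines. This is a sketch: the node names (`apu1`..`apu3`), the label key, and the taint key are all assumptions, not decided yet.

```shell
# Mark the APUs as stateful nodes (node names and label key are assumptions).
kubectl label nodes apu1 apu2 apu3 node-role.ungleich.ch/stateful=true

# Optionally taint them so that only pods with a matching toleration
# (etcd, ceph monitors) get scheduled there.
kubectl taint nodes apu1 apu2 apu3 stateful=true:NoSchedule
```

The stateful pods would then select these nodes via a `nodeSelector` on the label and a toleration for the taint.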
## networking
* The existing ceph cluster lives in 2a0a:e5c0:2:1::/64
* If we want to merge them, we probably need to move pods into this network
* We can use a different network while testing
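For the testing phase, the pod network could simply be a neighbouring /64 passed to kubeadm at init time; the concrete test prefixes below are assumptions, only the existing cluster network 2a0a:e5c0:2:1::/64 is fixed.

```shell
# Bootstrap the test cluster with its own IPv6 prefixes
# (both prefixes are assumptions; kubernetes requires the IPv6
# service CIDR to be /108 or smaller).
kubeadm init \
  --pod-network-cidr=2a0a:e5c0:2:2::/64 \
  --service-cidr=2a0a:e5c0:2:3::/108
```

Merging into 2a0a:e5c0:2:1::/64 later would then be a renumbering step rather than part of the initial setup.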
## strategy / steps
* Set up rook standalone using the disks in server22...server27
* Phase in the APUs as "stateful" servers
  * Need to find out how to place ceph monitors and etcd data on these nodes
* Create a plan for merging the rook cluster into the existing ceph cluster
  * Maybe start with the ceph monitors (?)
  * Then add a first node
  * Then convert the remaining nodes into k8s node-by-node
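The standalone rook setup in the first step roughly follows the upstream quickstart; the release branch below is an example, and restricting rook to the disks in server22...server27 would be done in `cluster.yaml` (its `CephCluster` spec supports per-node device lists via `storage.nodes` / `deviceFilter`).

```shell
# Fetch the rook manifests (branch/version is an example, pick the current release).
git clone --single-branch --branch v1.13.1 https://github.com/rook/rook.git
cd rook/deploy/examples

# Install CRDs, common resources and the operator.
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Edit cluster.yaml first: limit storage.nodes / deviceFilter to the
# disks in server22...server27, then create the cluster.
kubectl create -f cluster.yaml
```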
## configuring etcd cluster
* Guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
  * This approach uses an existing (external) etcd cluster
* Maybe we can tell kubeadm to distribute etcd onto specific machines instead
  * kubeadm init documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
  * phases: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/
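If the etcd cluster ends up on the APUs as in the guide above, kubeadm would be pointed at it via an external etcd stanza in its `ClusterConfiguration`. A minimal sketch, assuming etcd listens on three addresses in the existing network (the addresses and cert paths are assumptions):

```shell
# Write a kubeadm config that uses the external etcd cluster on the APUs.
# Endpoints and certificate paths are assumptions -- adjust to the real setup.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://[2a0a:e5c0:2:1::10]:2379
      - https://[2a0a:e5c0:2:1::11]:2379
      - https://[2a0a:e5c0:2:1::12]:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF

kubeadm init --config kubeadm-config.yaml
```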