# Rook Notes
- Using "provider: host" for networking. No separate network namespace for Ceph pods.
https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md#host-networking
- No separate Ceph "Cluster Network" (OSD only network). If necessary we would need to use the "multus" network provider:
https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md#multus-experimental
- No requirement for this; just use host networking (fragment below).
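A minimal sketch of the relevant CephCluster fragment (names assume the `rook-ceph` namespace/cluster used elsewhere in these notes):

```yaml
# CephCluster fragment: host networking, no separate cluster/public network split
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    provider: host
```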
- Ceph OSD host level CRUSH failure domain. Not "rack-aware". Node "Topology Labels" would be required for rack level failure domains:
https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md#osd-topology
- it's expected that the team will add these labels
- example label: kubectl label node mynode topology.rook.io/rack=rack1
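With only host-level topology, pools stay at `failureDomain: host`. A hedged pool sketch (the replica count is an assumption, not stated in these notes):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: rook-ceph-block
  namespace: rook-ceph
spec:
  failureDomain: host   # could become "rack" once topology.rook.io/rack labels exist
  replicated:
    size: 3             # assumed replica count, not specified in these notes
```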
- Mon pod storage will not be PVC-based; HostPath storage is used instead. For production I would recommend local SSD PVCs. <- why PVC rather than HostPath? We can do this, but it adds some complexity.
- Decided to stay with HostPath for mon storage (as long as the underlying storage is SSD); see the fragment below.
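Rough CephCluster fragment for the mon/HostPath decision (mon count and the `/var/lib/rook` path are the usual defaults, assumed here):

```yaml
spec:
  dataDirHostPath: /var/lib/rook     # mon stores land here on the host; should sit on SSD
  mon:
    count: 3                         # assumed; typical production mon count
    allowMultiplePerNode: false
    # volumeClaimTemplate:           # would switch mons to PVC-backed storage; deliberately omitted
```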
- Ceph mon/osd/mgr/mds pods will not have any placement settings; adding them would be a good idea for production (example below):
https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md#placement-configuration-settings
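If placement settings are added later, they would look roughly like this (the `role=storage-node` label and taint are purely hypothetical):

```yaml
spec:
  placement:
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role              # hypothetical node label
              operator: In
              values:
              - storage-node
      tolerations:
      - key: storage-node            # hypothetical taint
        operator: Exists
```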
- For OSD devices I will be specifying which nodes and drive paths to use individually.
- ++ Though we can do this less verbosely, I'd like to see it as it was in rdm9, as that allows more explicit control over device names (sketch below).
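Sketch of the explicit per-node device list (node names and device paths below are placeholders, not the real rdm9 values):

```yaml
spec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: "node-a"                                   # placeholder node name
      devices:
      - name: "/dev/disk/by-id/ata-EXAMPLE-SERIAL"     # placeholder by-id path
    - name: "node-b"
      devices:
      - name: "sdb"                                    # short device names also work
```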
- No resource limits on Ceph pods.
- No dedicated db/wal devices.
- No Ceph device classes.
- PG Autoscaler enabled.
- Pools initially created with Ceph's default PG number.
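For reference, the autoscaler can be pinned on explicitly via the CephCluster mgr module list (Ceph Octopus already enables it by default); a sketch:

```yaml
spec:
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
```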
- Use ext4 for filesystems on rbd devices
https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml#L64
- The existing template at https://github.com/project-azorian/rook-ceph-aio/blob/master/rook-ceph-aio/templates/StorageClass-default.yaml should be good to go.
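Sketch of the RBD StorageClass with ext4, following the upstream example linked above (parameter set as shipped with Rook 1.5; names match the "names of things" list below):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: rook-ceph-block
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4    # ext4 filesystem on rbd devices, per the note above
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
```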
- Rook toolbox pod deployed.
- Not setting up Prometheus/Grafana monitoring:
https://github.com/rook/rook/blob/master/Documentation/ceph-monitoring.md#prometheus-alerts
- Crash/log collectors disabled (fragment below).
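The corresponding CephCluster fragment, roughly (field names as in the 1.5 CRD; a sketch, not the actual manifest):

```yaml
spec:
  monitoring:
    enabled: false     # no ServiceMonitor/PrometheusRule objects created
  crashCollector:
    disable: true      # no crash collector pods alongside the Ceph daemons
  # the log collector sidecar is likewise left off
```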
- No custom annotations or labels added to pods
- Default health checks
- Default namespace rook-ceph
- Use krbd rather than rbd-nbd for Ceph-CSI so the CSI pods can be reconfigured/restarted without detaching mapped devices (see below).
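In the RBD StorageClass sketched above this just means leaving the ceph-csi `mounter` parameter unset:

```yaml
parameters:
  # mounter: rbd-nbd   # intentionally omitted; ceph-csi then maps images with the kernel rbd (krbd) client
```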
- No custom ceph.conf settings; they can be added later via the rook-config-override ConfigMap (sketch below):
https://github.com/rook/rook/blob/master/Documentation/ceph-advanced-configuration.md#custom-cephconf-settings
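Sketch of the override ConfigMap (the setting shown is an example only; no overrides are applied in this deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    osd_pool_default_size = 3   # example entry only
```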
- Default set of ceph mgr modules enabled
- Changes to the rook-ceph chart values.yaml (sketch after this list):
- rook/ceph image
- cephcsi image
- registrar image
- provisioner image
- snapshotter image
- attacher image
- resizer image
- cephfsGrpcMetricsPort (default port conflicts with Calico)
- cephfsLivenessMetricsPort (default port conflicts with Calico)
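Roughly, the overridden values (key layout per the upstream rook-ceph chart; the image references shown are the upstream defaults and the ports are placeholders, not the actual values used here):

```yaml
image:
  repository: rook/ceph
  tag: v1.5.4
csi:
  cephcsi:
    image: quay.io/cephcsi/cephcsi:v3.2.0
  registrar:
    image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
  provisioner:
    image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.0
  snapshotter:
    image: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.0
  attacher:
    image: k8s.gcr.io/sig-storage/csi-attacher:v3.0.0
  resizer:
    image: k8s.gcr.io/sig-storage/csi-resizer:v1.0.0
  cephfsGrpcMetricsPort: 9092       # placeholder; moved off the default to avoid the Calico clash
  cephfsLivenessMetricsPort: 9082   # placeholder; moved off the default
```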
- names of things:
- namespace: rook-ceph
- toolbox pod: rook-ceph-tools
- ceph block pool: rook-ceph-block
- ceph block storageclass: rook-ceph-block
- ceph filesystem: rook-cephfs
- ceph filesystem storageclass: rook-ceph-filesystem
- CephCluster: rook-ceph
- versions:
- rook: 1.5.4
- ceph: 15.2.8
- cephcsi: v3.2.0
- csi-node-driver-registrar: v2.0.1
- csi-provisioner: v2.0.0
- csi-snapshotter: v3.0.0
- csi-attacher: v3.0.0
- csi-resizer: v1.0.0
- ceph pools and associated storage classes (File-side sketch below)
  - **Block:**
    - **rados pools:**
      - rook-ceph-block
    - **storageclass:**
      - rook-ceph-block
  - **File:**
    - **rados pools:**
      - rook-cephfs-metadata
      - rook-cephfs-data0
    - **cephfs volume:**
      - rook-cephfs
    - **cephfs subvolumegroup:**
      - csi
    - **storageclass:**
      - rook-ceph-filesystem
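A hedged sketch of the File side to go with the names above (replica sizes, MDS counts, and the reclaim/expansion settings are assumptions; the generated rados pools are `<fsName>-metadata` and `<fsName>-data0`, matching the list):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: rook-cephfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3              # assumed replica count -> rook-cephfs-metadata
  dataPools:
  - replicated:
      size: 3              # assumed replica count -> rook-cephfs-data0
  metadataServer:
    activeCount: 1         # assumed MDS layout
    activeStandby: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-filesystem
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: rook-cephfs
  pool: rook-cephfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
```

The `csi` subvolumegroup listed above is the ceph-csi default; it is created by the driver rather than declared in these manifests.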