Hello, I want to share my documentation of the cluster volume management installation using Longhorn here.
## Table of Contents
1. Reference
2. Prerequisites
3. Installation
4. Access with Ingress
---
### Reference
The installation is based on the reference below:
https://longhorn.io/docs/1.5.3
---
### Prerequisites
Longhorn has requirements that must be fulfilled on every worker node. The requirements can be checked automatically by running the bash script below on every worker node:
```
$ curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/v1.5.3/scripts/environment_check.sh | bash
```
An example of the output when the script finds all requirements met:
> [INFO] Required dependencies 'kubectl jq mktemp sort printf' are installed.
> [INFO] All nodes have unique hostnames.
> [INFO] Waiting for longhorn-environment-check pods to become ready (0/3)...
> [INFO] All longhorn-environment-check pods are ready (3/3).
> [INFO] MountPropagation is enabled
> [INFO] Checking kernel release...
> [INFO] Checking iscsid...
> [INFO] Checking multipathd...
> [INFO] Checking packages...
> [INFO] Checking nfs client...
> [INFO] Cleaning up longhorn-environment-check pods...
> [INFO] Cleanup completed.
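If the check reports missing dependencies, the usual culprits are the iSCSI initiator and the NFS client. A minimal sketch of installing them, assuming apt-based worker nodes such as Debian/Ubuntu (use the equivalent packages on other distros):
```
# Install the iSCSI initiator (provides iscsid) and the NFS client
$ sudo apt-get update
$ sudo apt-get install -y open-iscsi nfs-common
# Make sure iscsid is running so Longhorn can attach volumes
$ sudo systemctl enable --now iscsid
```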
---
### Installation
Longhorn can be installed on a Kubernetes cluster in several ways:
* **Kubectl** (using a manifest)
* **Helm** (using chart values)

I used the kubectl method to install Longhorn because some values need to be edited in the manifest file.
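For completeness, the Helm route would look roughly like this, a sketch based on the chart repository from the Longhorn docs (I did not use this path):
```
$ helm repo add longhorn https://charts.longhorn.io
$ helm repo update
$ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.5.3
```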
The manifest can be downloaded from [this link](https://raw.githubusercontent.com/longhorn/longhorn/v1.5.3/deploy/longhorn.yaml), and the output must be redirected into a file:
```
$ curl https://raw.githubusercontent.com/longhorn/longhorn/v1.5.3/deploy/longhorn.yaml > longhorn.yaml
```
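Before editing, we can quickly locate the setting to change (optional, just a convenience):
```
# Show the line numbers where the replica setting appears
$ grep -n numberOfReplicas longhorn.yaml
```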
Edit the longhorn.yaml file and change ***numberOfReplicas*** to the actual number of worker nodes used:
```
$ sudo nano longhorn.yaml
```
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-storageclass
  namespace: longhorn-system
  labels:
    app.kubernetes.io/name: longhorn
    app.kubernetes.io/instance: longhorn
    app.kubernetes.io/version: v1.5.3
data:
  storageclass.yaml: |
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: longhorn
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    reclaimPolicy: "Delete"
    volumeBindingMode: Immediate
    parameters:
      numberOfReplicas: "2" # <===== change to the number of worker nodes or replicas we want
      staleReplicaTimeout: "30"
      fromBackup: ""
      fsType: "ext4"
      dataLocality: "disabled"
```
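After saving the changes, apply the manifest and wait for everything in the longhorn-system namespace to come up. These are the standard kubectl steps; how long the pods take to become ready depends on the cluster:
```
# Install Longhorn from the edited manifest
$ kubectl apply -f longhorn.yaml
# Watch the pods until they are all Running
$ kubectl -n longhorn-system get pods --watch
```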
For example, if we create a **volume with 2Gi** of storage, Longhorn will create a replica of the volume on each worker node.
The replicas are created in the **/var/lib/longhorn/replicas** directory.
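For illustration, a minimal PersistentVolumeClaim sketch that would request such a 2Gi volume from the longhorn storage class (the claim name is hypothetical):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc   # hypothetical name, pick your own
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # the class installed above
  resources:
    requests:
      storage: 2Gi
```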

We can see the volume size matches our request, but the actual storage used by the volume is shown at the bottom of the picture.
#### Based on the picture, there are 2 types of volumes in Longhorn:
1. Volume head
2. Volume snap
The volume head holds only the latest data of the volume, while each snapshot may store historical data as well as active data, consuming at most the volume's nominal size in space. Therefore, the actual size of the volume, which is the sum of the sizes of the volume head and all snapshots, can be bigger than the size specified by the user.
The workflow is **User storing data => Write into Volume Head => Volume Snapshot**
The volume head size and the volume snapshot size are usually different. When a snapshot is created, all the data is moved from the head into the snapshot, and the head is recreated from 0 bytes.
So, the volume head and the last volume snapshot can't be deleted, because **there must be 1 volume head and at least 1 volume snapshot in the Longhorn system**.
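We can see this layout on a worker node by listing a replica directory. A sketch, assuming the default data path (the pvc directory name differs per volume); the volume-head-*.img and volume-snap-*.img files there correspond to the two types above:
```
# List the on-disk files of one replica (directory name varies per volume)
$ sudo ls /var/lib/longhorn/replicas/pvc-<volume-id>/
```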
---
### Access with Ingress
We can access the dashboard through the Ingress that we configured before:
> [https://longhorn.openetra.net/](https://longhorn.openetra.net/)
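For reference, a minimal sketch of what that Ingress could look like, assuming an NGINX ingress controller and Longhorn's default longhorn-frontend UI service (the host is the one above; adjust to your environment):
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress   # hypothetical name
  namespace: longhorn-system
spec:
  ingressClassName: nginx   # assumes an NGINX ingress controller
  rules:
    - host: longhorn.openetra.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend   # Longhorn's UI service
                port:
                  number: 80
```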
Here is the dashboard view of the Longhorn UI:
