# OpenZFS
OpenZFS is an open-source storage platform that combines the functionality of a traditional filesystem and a volume manager.
It includes:
- Protection against data corruption, with integrity checking for both data and metadata
- Continuous integrity verification and automatic “self-healing” repair
- Hardware-accelerated native encryption
- Support for high storage capacities of up to 256 trillion yobibytes
[For most users the kABI-tracking kmod packages are recommended in order to avoid needing to rebuild OpenZFS for every kernel update](https://openzfs.github.io/openzfs-docs/Getting%20Started/RHEL-based%20distro/index.html).
These packages are "kABI-tracking": the drivers they provide keep working across Enterprise Linux (EL) kernel releases, so there is no need to rebuild or reinstall them after each kernel update.
We will use OpenZFS for:
- Configuring RAID6, which is called RAIDZ2 in ZFS terms.
- Enabling encryption.
Links:
- [_A good starting point might be this introduction - here_](https://www.udemy.com/share/101G6O3@79SCD81zWaWDW8U5d2w_zGcO37cwPrDKSguLinhhBkMOoG738LIkWJJAicPvW3Gm/)
- [A quick-start guide to OpenZFS native encryption](https://arstechnica.com/gadgets/2021/06/a-quick-start-guide-to-openzfs-native-encryption/)
- [Creating fully encrypted ZFS pool](https://timor.site/2021/11/creating-fully-encrypted-zfs-pool/#encryption-key)
- [ZFS](https://wiki.archlinux.org/title/ZFS)
- [ZFS / RAIDZ Capacity Calculator](https://wintelguy.com/zfs-calc.pl)
### OpenZFS without encryption
No encryption is used here. If you want to encrypt the OpenZFS filesystem, see the section "OpenZFS native encryption configuration" below.
```
##openZFS
# install the OpenZFS repo RPM and enable the kmod repository:
sudo dnf install -y https://zfsonlinux.org/epel/zfs-release-2-2$(rpm --eval "%{dist}").noarch.rpm
sudo dnf config-manager --disable zfs
sudo dnf config-manager --enable zfs-kmod
# install the ZFS kmod packages
sudo dnf install -y zfs
# check disk serial numbers to identify the drives
lsblk --nodeps -o name,serial
# load the zfs kernel module
sudo /sbin/modprobe zfs
# create a pool named datapool with RAIDZ2 (RAID6) across 9 disks
sudo zpool create -f datapool raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi
sudo zpool status -x
# create a filesystem (dataset) on the pool
sudo zfs create datapool/fs
```
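To sanity-check the new pool, a quick look at pool health and datasets (the names match the commands above):
```
# list pools with capacity and health
zpool list
# list datasets and their mountpoints (datapool/fs mounts at /datapool/fs by default)
zfs list -r datapool
zpool status datapool
```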
### To add a spare disk to the pool
```
# add a hot-spare disk (here: sdx) to the pool
sudo zpool add -f datapool spare sdx
```
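To confirm the spare shows up (and to remove it again while it is unused), a minimal sketch:
```
# the spare should be listed under the "spares" section
zpool status datapool
# an unused spare can be removed again
sudo zpool remove datapool sdx
```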
### Replace a disk
```
#Replace a failed disk
sudo zpool offline datapool sda
sudo zpool replace datapool sda sdx
# enable autoreplace, so a failed disk is automatically replaced by a hot spare from the pool
sudo zpool get autoreplace datapool
sudo zpool set autoreplace=on datapool
# check again
sudo zpool get autoreplace datapool
```
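Replacing a disk kicks off a resilver; progress can be watched with `zpool status` (a minimal sketch):
```
# shows resilver progress and estimated time remaining
zpool status -v datapool
# optionally re-check every minute until the resilver completes
watch -n 60 zpool status datapool
```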
### OpenZFS native encryption configuration
OpenZFS native encryption operates atop the normal ZFS storage layers, so it doesn't weaken ZFS's own integrity guarantees.
OpenZFS native encryption isn't a full-disk encryption scheme—it's enabled or disabled on a per-dataset/per-zvol basis, and it cannot be turned on for entire pools as a whole. The contents of encrypted datasets or zvols are protected from at-rest spying—but the metadata describing the datasets/zvols themselves is not.
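Because encryption is per-dataset, encrypted and plain datasets can live side by side. A minimal sketch on a hypothetical pool named `tank` created without pool-root encryption (the `storage` pool below instead enables encryption at the root, so all of its children inherit it):
```
# encrypted dataset: prompts interactively for a passphrase
sudo zfs create -o encryption=on -o keyformat=passphrase tank/secret
# unencrypted sibling dataset in the same pool
sudo zfs create tank/plain
# show which datasets are encrypted
zfs get -r encryption tank
```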
**Important Notes**:
- You can't encrypt a pre-existing dataset or zvol—it needs to be created that way from the start.
- You cannot easily switch from the interactive `prompt` to a keyfile once the dataset or zvol is created, so some planning ahead is called for here.
- ZFS doesn't actually encrypt your data directly with the supplied passphrase; it encrypts the data with a pseudo-randomly generated master key. Your passphrase unlocks that master key, which is then used to encrypt and decrypt the volume itself.
- It's worth noting that trying to `ls` an encrypted dataset that doesn't have its key loaded won't necessarily produce an error.
- Reloading the key doesn't automatically remount the dataset, either
- `keyformat` can be either `passphrase`, `hex`, or `raw`. Passphrases must be between 8 and 512 bytes long, while both `hex` and `raw` keys must be precisely 32 bytes long (64 hex characters for `hex`). You can generate a raw key with `dd if=/dev/urandom bs=32 count=1 of=/path/to/keyfile`, or a hex key with `openssl rand -hex 32 | sudo tee /path/to/keyfile`.
- Remember: volumes without their keys loaded cannot be read from or written to, but they _can_ be replicated using the `-w` flag on `zfs send` (see the sketch below). Maintenance-level ZFS operations such as scrubbing, resilvering, and even (dataset- or zvol-level) renaming also work fine on locked volumes.
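A minimal raw-send sketch (the snapshot name, target host, and target pool are hypothetical), showing that a locked dataset can still be replicated:
```
# snapshots work even while the key is unloaded
sudo zfs snapshot storage/vault@backup1
# -w (--raw) sends the encrypted blocks as-is; no key needs to be loaded on either side
sudo zfs send -w storage/vault@backup1 | ssh backuphost sudo zfs receive backuppool/vault
```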
```
# check `ls -la /dev/disk/by-id/` to make sure which disks to use
# reference drives by ID (drive model plus serial number) instead of sdX names
# the hex keyfile must exist before pool creation, e.g.: openssl rand -hex 32 | sudo tee /etc/zfs/.zfs.hex
# use -f (force) if needed
sudo zpool create \
-o ashift=12 \
-o feature@encryption=enabled \
-O encryption=on \
-O keylocation=file:///etc/zfs/.zfs.hex \
-O keyformat=hex \
storage raidz2 \
ata-ST16000NM003G-2KH113_ZL2CM3TX \
ata-ST16000NM003G-2KH113_ZL2FGEDF \
ata-ST16000NM003G-2KH113_ZL2FD5WN \
ata-ST16000NM003G-2KH113_ZL2F43ZH \
ata-ST16000NM003G-2KH113_ZL2FLCXT \
ata-ST16000NM003G-2KH113_ZL2FJ4G4 \
ata-ST16000NM003G-2KH113_ZL2FC62D \
ata-ST16000NM003G-2KH113_ZL2FKXG3 \
ata-ST16000NM003G-2KH113_ZL2F59GD \
ata-ST16000NM003G-2KH113_ZL2FK7A7
# automatically expand pool when new disk is added
sudo zpool set autoexpand=on storage
zpool get autoexpand storage
# automatically replace failed disk with hot spare
sudo zpool set autoreplace=on storage
zpool get autoreplace storage
# enable LZ4 compression
sudo zfs set compression=lz4 storage
# disable access time - for better performance
sudo zfs set atime=off storage
# check keystatus
sudo zfs get keystatus storage
# create dataset (filesystem)
sudo zfs create storage/vault
# default mountpoint is '/pool-name/dataset-name', i.e. /storage/vault here
sudo zfs get mountpoint storage/vault
# set a different mountpoint if desired, e.g.:
#sudo zfs set mountpoint=/media/zfs storage/vault
sudo zfs mount storage/vault
#list
zfs list -r storage/vault
zpool status storage
zpool get all storage
#get the history of all commands run on the pool
zpool history storage
```
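The pool unlocks from `/etc/zfs/.zfs.hex`, so that file is as sensitive as the data itself. A minimal hardening sketch (the backup path is just an example):
```
# keyfile readable by root only
sudo chown root:root /etc/zfs/.zfs.hex
sudo chmod 600 /etc/zfs/.zfs.hex
# keep an offline copy; without the key the data is unrecoverable
sudo cp /etc/zfs/.zfs.hex /root/zfs-key-backup.hex
```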
### Unload encryption key
At the moment, our encrypted datasets are all mounted. But even after we unmount them and unload the encryption key, making them inaccessible, we can still see that they _exist_, along with their properties.
```
# if unmounting fails, check for open files with: lsof +D <mountpoint>
sudo zfs unmount storage/vault
sudo zfs unmount storage
sudo zfs unload-key -r storage
# will return `unavailable`
sudo zfs get keystatus storage
#still metadata available
zfs list -r storage
#zfs load-key
sudo zfs load-key -r storage
```
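As noted above, loading the key does not remount anything by itself; a minimal sketch to bring the dataset back:
```
# keystatus should now report 'available'
sudo zfs get keystatus storage
# remount all mountable datasets (or just: sudo zfs mount storage/vault)
sudo zfs mount -a
# `zfs mount` without arguments lists currently mounted ZFS filesystems
zfs mount
```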
### Setting TLER on boot
With disks in a RAID array, it's good to have TLER (Time-Limited Error Recovery) enabled so that error recovery is handled by the RAID layer instead of the drive's slow internal recovery logic.
```
# check TLER setting
for i in /dev/sd[a-z]; do
sudo smartctl -l scterc $i
done
# enable TLER (scterc values are in tenths of a second: 100,100 = 10 s for read and write)
sudo smartctl -l scterc,100,100 /dev/sdd
# set TLER on boot
sudo chmod +x /etc/rc.d/rc.local
sudo tee -a /etc/rc.local << EOF
#TLER on boot
for i in ata-ST16000NM003G-2KH113_ZL2CM3TX \
ata-ST16000NM003G-2KH113_ZL2FGEDF \
ata-ST16000NM003G-2KH113_ZL2FD5WN \
ata-ST16000NM003G-2KH113_ZL2F43ZH \
ata-ST16000NM003G-2KH113_ZL2FLCXT \
ata-ST16000NM003G-2KH113_ZL2FJ4G4 \
ata-ST16000NM003G-2KH113_ZL2FC62D \
ata-ST16000NM003G-2KH113_ZL2FKXG3 \
ata-ST16000NM003G-2KH113_ZL2F59GD \
ata-ST16000NM003G-2KH113_ZL2FK7A7; do
smartctl -l scterc,100,100 /dev/disk/by-id/\$i > /dev/null;
done
EOF
sudo systemctl enable rc-local.service
```
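After the next reboot, it's worth confirming that rc-local ran and the setting stuck (one drive shown as an example):
```
systemctl status rc-local.service
# SCT Error Recovery Control should report the configured values
sudo smartctl -l scterc /dev/disk/by-id/ata-ST16000NM003G-2KH113_ZL2CM3TX
```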
### OpenZFS decrypt-on-boot service
```
sudo tee /etc/systemd/system/zfs-load-key.service <<EOF
[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zfs load-key -a
StandardInput=tty-force
[Install]
WantedBy=zfs-mount.service
EOF
sudo systemctl daemon-reload
sudo systemctl enable zfs-load-key
```
### Notes
```
[1]: partition tables are necessary for all devices
sudo fdisk -l | grep /dev/sdX
parted /dev/sdX
print
mklabel gpt #a GPT partition table will be created
quit
#wipefs - erase filesystem/RAID signatures
sudo wipefs -a /dev/sdi
[2]: RAID configuration examples
#RAID0 - stripe, no redundancy
zpool create raid0pool /dev/sda /dev/sdb #add -f / --force if needed
#RAID1 - mirror
zpool create raid1pool mirror /dev/sda /dev/sdb
#RAID5
zpool create -f raid5pool raidz /dev/sda /dev/sdb /dev/sdg
[3]: zfs commands
zpool list
zpool list raid1pool
zpool status raid1pool
[4]: To destroy a pool
#Unmount
sudo umount -l /datapool
sudo umount -f /datapool
sudo zpool destroy datapool
[5]: if disk were used in a raid setup upfront
sudo cat /proc/mdstat
sudo mdadm --stop /dev/md/ddf0
sudo mdadm --remove /dev/md/ddf0
[6]: use stronger zstd compression if needed
sudo zfs set compression=zstd datapool/fs
```