---
tags: research
---
# Open-Channel SSD Overview
An Open-Channel SSD is accessible in two ways:
* **via a user-space library**: `liblightnvm`
* **via a kernel software FTL**: `pblk`, which exposes a regular block device conforming to the existing block-device interface
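Either way, the device first appears as an NVMe namespace. Assuming `nvme-cli` with the LightNVM plugin is available (built later in this note), a quick way to see what the kernel detects:
```bash
sudo nvme lnvm list                # enumerate LightNVM (Open-Channel) devices
sudo nvme lnvm id-ns /dev/nvme0n1  # dump the geometry of one device
```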

# Open-Channel SSD simulator
GitHub: [OpenChannel SSD (QEMU-based)](https://github.com/OpenChannelSSD/qemu-nvme)
## Build Buildroot VM Image
Build a custom VM image: [Buildroot](https://www.csie.ntu.edu.tw/~yunchih/docs/linux/virtualization/#highly-customized-qemu-vm-with-buildroot)
* Because we need to add [extra kernel configuration](https://github.com/OpenChannelSSD/qemu-nvme#guest-kernel), we must build the kernel ourselves. You need to do the following three things in Buildroot's menu (see the sketch below):
  * Set the target architecture: `x86_64`
  * Set the kernel version: `4.17`
  * Provide the kernel config from [here](https://raw.githubusercontent.com/OpenChannelSSD/qemu-nvme/master/kernel.config)
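A minimal sketch of the Buildroot flow, assuming the 2020.02 release used later in this note:
```bash
wget https://buildroot.org/downloads/buildroot-2020.02.tar.gz
tar zxf buildroot-2020.02.tar.gz && cd buildroot-2020.02
make qemu_x86_64_defconfig  # x86_64 target preconfigured for QEMU
make menuconfig             # Kernel -> custom version 4.17, point to the kernel.config above
make                        # images land in output/images/ (bzImage, rootfs)
```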
## ~~Build QEMU~~ (see update)
```bash
git clone https://github.com/OpenChannelSSD/qemu-nvme.git --depth 1
cd qemu-nvme
mkdir build
# sudo apt-get install git libglib2.0-dev libfdt-dev libpixman-1-dev zlib1g-dev
./configure --target-list=x86_64-softmmu --prefix=$(pwd)/build --extra-cflags=-w --python=$(which python2) --disable-docs --disable-linux-user --disable-user --disable-bsd-user --disable-spice --disable-vnc --disable-opengl --disable-vte --disable-gtk --disable-sdl --disable-virglrenderer --disable-glusterfs
make
make qemu-img
make install
```
Create an Open-Channel 2.0 SSD device image:
```bash
./build/bin/qemu-img create -f ocssd -o num_grp=2,num_pu=4,num_chk=20 ocssd.img
```
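A quick sanity check of the result (`ocssd` is a format only this QEMU fork understands, so use the fork's `qemu-img`):
```bash
./build/bin/qemu-img info ocssd.img
```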
~~Boot QEMU:~~ (see update)
```bash
# HINT:
# output/images/bzImage and output/images/rootfs.ext4
# both come from Buildroot's output directory.
# Try to locate them ...
../tree/bin/qemu-system-x86_64 -enable-kvm \
-kernel ../buildroot-2020.02/output/images/bzImage \
-hda ../buildroot-2020.02/output/images/rootfs.ext4 \
-m 1G \
-append "root=/dev/sda rw console=ttyS0" \
-serial stdio \
-device virtio-rng-pci \
-device e1000,netdev=net0 \
-netdev user,id=net0,hostfwd=tcp::5555-:22 \
-blockdev ocssd,node-name=nvme01,file.driver=file,file.filename=ocssd.img \
-device nvme,drive=nvme01,serial=deadbeef,id=lnvm
# the last two lines are OCSSD-specific
```
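The `hostfwd` rule maps host port 5555 to the guest's port 22, so once an SSH daemon is running inside the guest you can log in from the host:
```bash
ssh -p 5555 root@localhost
```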
## `pblk` testing commands
If the VM boots successfully, try these commands inside it:
```bash
sudo nvme lnvm id-ns /dev/nvme0n1
sudo nvme lnvm init -d /dev/nvme0n1
# initialize a host-managed FTL block device
sudo nvme lnvm create -d nvme0n1 -n mydevice -t pblk -b 0 -e 7
# create a file system on top of the block device
sudo mkfs.ext4 /dev/mydevice
# test the file-system
sudo mount /dev/mydevice /mnt
```
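When done, unmount and remove the pblk target; per the 3/27 log below, removing the target seems necessary to persist writes. A sketch, assuming the flag names of nvme-cli's LightNVM plugin:
```bash
sudo umount /mnt
sudo nvme lnvm remove -n mydevice
```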
## Build Debian VM to test user-space library `liblightnvm`
* [GitHub repo](https://github.com/OpenChannelSSD/liblightnvm/)
* [pytest test suites](https://github.com/OpenChannelSSD/liblightnvm/tree/master/tests/scripts)
The test suites depend on `libcunit1` and `python-pytest`, which are easily available in Debian but not in Buildroot. Because building those packages ourselves can be daunting due to recursive dependencies, we build a Debian VM instead and install them with `apt-get`. We will demonstrate the following steps in detail:
1. Build custom kernel for Debian
2. Build Debian VM
3. Boot Debian VM
4. Install kernel inside the Debian VM
5. Build `nvme-cli` inside the Debian VM
6. Build `liblightnvm` inside the Debian VM
**Build custom kernel for Debian**:
Add these options to `.config`:
```
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_NVM=y
CONFIG_NVM_PBLK=y
CONFIG_NVME_CORE=y
CONFIG_BLK_DEV_NVME=y
```
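Instead of editing `.config` by hand, the kernel tree's `scripts/config` helper can enable these options non-interactively (run from the kernel source directory, after the `make defconfig` below):
```bash
for opt in BLK_DEV_INTEGRITY HOTPLUG_PCI_PCIE HOTPLUG_PCI_ACPI \
           NVM NVM_PBLK NVME_CORE BLK_DEV_NVME; do
  ./scripts/config --enable "$opt"  # prepends CONFIG_ automatically
done
```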
Build Debian package:
```bash
cd /tmp # build the kernel in tmpfs to speed up
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.5.13.tar.xz
tar Jxf linux-5.5.13.tar.xz
cd linux-5.5.13
make defconfig
vim .config # ... append the options listed above
make olddefconfig
make deb-pkg LOCALVERSION=-iis -j20
ls ../linux-image-5.5.13-iis_5.5.13-iis-1_amd64.deb # the Debian package is built
```
**Build Debian VM**:
```bash
virt-builder debian-10 -o /tmp/debian-10.img --format raw \
--root-password password:root \
--hostname iis-oc
# copy the kernel package into VM:
virt-copy-in -a /tmp/debian-10.img linux-image-5.5.13-iis_5.5.13-iis-1_amd64.deb /root
```
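`virt-ls` (shipped with libguestfs, like `virt-builder`) can confirm the package actually landed inside the image:
```bash
virt-ls -a /tmp/debian-10.img /root
# expect: linux-image-5.5.13-iis_5.5.13-iis-1_amd64.deb
```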
**Boot Debian VM**:
```bash
../tree/bin/qemu-system-x86_64 -enable-kvm \
-hda /tmp/debian-10.img \
-m 2G \
-nographic \
-device virtio-rng-pci \
-device e1000,netdev=net0 \
-netdev user,id=net0,hostfwd=tcp::5557-:22 \
-blockdev ocssd,node-name=nvme01,file.driver=file,file.filename=ocssd.img \
-device nvme,drive=nvme01,serial=deadbeef,id=lnvm
```
**Install kernel inside the Debian VM**:
The initramfs and GRUB configuration are updated automatically when `dpkg` runs:
```bash
dpkg -i /root/linux-image-5.5.13-iis_5.5.13-iis-1_amd64.deb
```
Reboot into the new kernel.
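After the reboot, confirm the guest is running the custom kernel (the `-iis` suffix comes from `LOCALVERSION` above):
```bash
uname -r  # expect 5.5.13-iis
```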
**Build `nvme-cli` inside the Debian VM**:
We need the command `nvme` to test the functionality of OCSSD:
```bash
sudo apt install build-essential
wget https://github.com/linux-nvme/nvme-cli/archive/v1.9.tar.gz
tar zxf v1.9.tar.gz
cd nvme-cli-1.9 && make
./nvme lnvm id-ns /dev/nvme0n1
```
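Optionally install it system-wide so the remaining steps can invoke `nvme` without the `./` prefix:
```bash
sudo make install
nvme version  # should report 1.9
```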
**Build `liblightnvm` inside the Debian VM**:
```bash
sudo apt install libcunit1 libcunit1-dev python-pytest
wget https://github.com/OpenChannelSSD/liblightnvm/archive/v0.1.8.tar.gz
tar zxf v0.1.8.tar.gz
cd liblightnvm-0.1.8
make
```
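liblightnvm also builds small CLI tools alongside the library. Before running the pytest suites, a quick smoke test against the emulated device (a sketch; `nvm_dev` is the tool name in the v0.1.x docs, but names and paths may differ between releases):
```bash
sudo make install               # install the library and CLI tools
sudo nvm_dev info /dev/nvme0n1  # query device geometry via liblightnvm
```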
Follow the instructions in the [pytest test suites](https://github.com/OpenChannelSSD/liblightnvm/tree/master/tests/scripts) to run the tests, then check which test cases pass and which fail.
## Update: QEMU OCSSD Emulator
The QEMU emulator from the OpenChannelSSD GitHub repository is **buggy**. Use [birkelund's development branch](https://github.com/birkelund/qemu/) instead:
```
git clone -b ocssd/v4 https://github.com/birkelund/qemu/
```
The build instructions are the same. Boot QEMU as follows:
```bash
A=$(mktemp -p /tmp "$(date '+%Y%m%d-%H%M')-XXXX.img")
B=$(mktemp -p /tmp "$(date '+%Y%m%d-%H%M')-XXXX.img")
qemu-img create -f raw "$A" 0
qemu-img create -f raw "$B" 0
../tree/qemu-birkelund/bin/qemu-system-x86_64 -enable-kvm \
-hda debian-10.img -m 5G -smp 5 -device virtio-rng-pci \
-device e1000,netdev=net0 \
-nographic -netdev user,id=net0,hostfwd=tcp::5557-:22 \
-drive file=$A,if=none,id=blk0 \
-drive file=$B,if=none,id=blk1 \
-device ocssd,serial=deadbeef,id=nvme0 \
-device ocssd-ns,drive=blk0,bus=nvme0,nsid=1,num_grp=4,num_pu=8,num_chk=60,clba=4096,mccap=0x7 \
-device ocssd-ns,drive=blk1,bus=nvme0,nsid=2,num_grp=4,num_pu=8,num_chk=60,clba=4096,mccap=0x7
# mccap: enable vector copy supported, multiple resets, early resets (non-standard)
```
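Once the guest is up, both emulated namespaces should be visible:
```bash
sudo nvme list  # expect /dev/nvme0n1 and /dev/nvme0n2
```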
## Chih's feedback & log
#### 3/27
* It seems that after writing any data to a pblk device, we must run `nvme lnvm remove mydevice` to persist the changes. However, after a reboot, although the ext4 file system can be mounted, the data is corrupted and nothing is left after `fsck`.
* The `getrandom` syscall is problematic on both Linux kernel 4.17 and 4.19. It might be a QEMU problem, because the `virtio-rng-pci` device doesn't seem to make a difference.
## Platform Info
IP: `140.109.21.29`
Port: `10011`
SSH Command: `ssh -i chih-ting-lo-iis -p 10011 140.109.21.29`
The SSH private key (one-time link): [here](https://send.firefox.com/download/cdc4c832abc6762a/#Wd1WRXoaGPxr3khjRD7wYw)
## Sourcegraph
URL: http://sourcegraph.thycat.com