# Implementing Linux LVM Live Storage Migration
## Goal 1
Migrate an LV's data from old PVs to new PVs without stopping the service and without downtime, and test whether, when two LVs share the same PV, running `lvconvert` twice overwrites the first mirror.
### Walkthrough
```bash!
# 1. Attach six disks of identical 10 G size to a VM, then SSH into it as root
# 2. Turn them into PVs
$ pvcreate /dev/{sdb,sdc,sdd,sde,sdf,sdg}
Physical volume "/dev/sdb" successfully created.
Physical volume "/dev/sdc" successfully created.
Physical volume "/dev/sdd" successfully created.
Physical volume "/dev/sde" successfully created.
Physical volume "/dev/sdf" successfully created.
Physical volume "/dev/sdg" successfully created.
# 3. Create vgdata from three of the 10G disks (sdb, sdc, sdd); sde, sdf, and sdg will later replace them.
$ vgcreate vgdata /dev/sdb /dev/sdc /dev/sdd
Volume group "vgdata" successfully created
# 4. Create two LVs: a 25G "data" LV spanning sdb, sdc, sdd, and a 4G "share" LV on sdd.
$ lvcreate -L 25G -n data vgdata
Logical volume "data" created.
$ lvcreate -L 4G -n share vgdata
Logical volume "share" created.
# 5. Check which devices each LV is using
$ lvs -o lv_name,lv_size,devices
LV LSize Devices
root 99.99g /dev/sda2(0)
data 25.00g /dev/sdb(0)
data 25.00g /dev/sdc(0)
data 25.00g /dev/sdd(0)
share 4.00g /dev/sdd(1282)
# 6. Create the mount points
$ mkdir -p ~/{data,share}
# 7. Format both LVs with XFS
$ mkfs.xfs /dev/vgdata/data
$ mkfs.xfs /dev/vgdata/share
# 8. Mount the LVs
$ mount /dev/vgdata/data ~/data/
$ mount /dev/vgdata/share ~/share/
# 9. Generate a 1 KB file in each directory every 5 seconds
$ cat << EOF > test.sh
#!/bin/bash
# Filename prefix
prefix="file"
# File size (in KB)
size=1
# Infinite loop: create a new file every 5 seconds
while true; do
# Build a timestamp-based filename
filename="\${prefix}_\$(date +%Y%m%d%H%M%S).txt"
# Use dd to create a file of the requested size
dd if=/dev/zero of=/root/data/date-"\$filename" bs=1024 count=\$size &> /dev/null
dd if=/dev/zero of=/root/share/share-"\$filename" bs=1024 count=\$size &> /dev/null
# Wait 5 seconds
sleep 5
done
EOF
# 10. Make it executable
$ chmod +x test.sh
# 11. Run it in the background
$ ./test.sh &
[1] 1813
# 12. Confirm files are being created
$ ls -l data/ share/
data/:
total 4
-rw-r--r-- 1 root root 1024 May 15 15:08 date-file_20240515150800.txt
share/:
total 4
-rw-r--r-- 1 root root 1024 May 15 15:08 share-file_20240515150800.txt
# 13. Now extend vgdata with the new PVs
$ vgextend vgdata /dev/sde /dev/sdf /dev/sdg
Volume group "vgdata" successfully extended
# 14. Create the mirrors on the new PVs
$ lvconvert -m 1 /dev/vgdata/data /dev/sde /dev/sdf /dev/sdg
Are you sure you want to convert linear LV vgdata/data to raid1 with 2 images enhancing resilience? [y/n]: y
Logical volume vgdata/data successfully converted.
$ lvconvert -m 1 /dev/vgdata/share /dev/sdg
Are you sure you want to convert linear LV vgdata/share to raid1 with 2 images enhancing resilience? [y/n]: y
Logical volume vgdata/share successfully converted.
# 15. Once created, wait for the sync to reach 100%
$ watch -n 0.1 lvs -o name,copy_percent
LV Cpy%Sync
root
data 100.00
share 100.00
# 16. Split the mirrors, dropping the legs on the old PVs
$ lvconvert -m 0 /dev/vgdata/share /dev/sdd
Are you sure you want to convert raid1 LV vgdata/share to type linear losing all resilience? [y/n]: y
Logical volume vgdata/share successfully converted.
$ lvconvert -m 0 /dev/vgdata/data /dev/sdb /dev/sdc /dev/sdd
Are you sure you want to convert raid1 LV vgdata/data to type linear losing all resilience? [y/n]: y
Logical volume vgdata/data successfully converted.
# 17. Verify the LVs have moved to the new devices
$ lvs -o lv_name,lv_size,devices
LV LSize Devices
root 99.99g /dev/sda2(0)
data 25.00g /dev/sde(1)
data 25.00g /dev/sdf(0)
data 25.00g /dev/sdg(0)
share 4.00g /dev/sdg(1284)
# Make sure the migration has fully completed before reducing the VG.
# 18. Remove the three no-longer-needed PVs from the VG
$ vgreduce vgdata /dev/sdb /dev/sdc /dev/sdd
Removed "/dev/sdb" from volume group "vgdata"
Removed "/dev/sdc" from volume group "vgdata"
Removed "/dev/sdd" from volume group "vgdata"
# 19. Kill the file-generating process
$ kill -9 1813
# If you forgot the PID:
$ jobs -l
[1]+ 1813 Running ./test.sh &
```
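Step 15 above eyeballs `Cpy%Sync` under `watch`; splitting a mirror before it reaches 100% would abandon data that still lives only on the old leg. That check can be scripted. A minimal sketch, assuming the two-column `lvs --noheadings -o lv_name,copy_percent` layout shown above; the `sample` string is invented for illustration:

```bash
#!/bin/bash
# is_synced LV TABLE: succeed only if LV's Cpy%Sync column reads 100.00.
# TABLE is expected to look like `lvs --noheadings -o lv_name,copy_percent`.
is_synced() {
  local lv="$1" table="$2"
  echo "$table" | awk -v lv="$lv" '$1 == lv && $2 == "100.00" { ok = 1 } END { exit !ok }'
}

# Hypothetical snapshot captured mid-sync:
sample="  data   42.17
  share  100.00"

is_synced share "$sample" && echo "share: safe to split"
is_synced data  "$sample" || echo "data: still syncing"
```

Gating the `lvconvert -m 0` step on this kind of check removes the human `watch` step from the procedure.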
### Clean up the environment
```bash!
$ rm -rf ~/data/* ~/share/* && \
umount ~/data ~/share && \
rmdir ~/{data,share} && \
lvremove -y /dev/vgdata/{data,share} && \
vgremove vgdata && \
pvremove /dev/{sdb,sdc,sdd,sde,sdf,sdg}
```
Output:
```bash!
Logical volume "data" successfully removed.
Logical volume "share" successfully removed.
Volume group "vgdata" successfully removed
Labels on physical volume "/dev/sdb" successfully wiped.
Labels on physical volume "/dev/sdc" successfully wiped.
Labels on physical volume "/dev/sdd" successfully wiped.
Labels on physical volume "/dev/sde" successfully wiped.
Labels on physical volume "/dev/sdf" successfully wiped.
Labels on physical volume "/dev/sdg" successfully wiped.
```
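Steps 13–18 follow one repeatable pattern: add a raid1 leg on the new PVs, wait for the sync, drop the old leg. A hedged sketch of that pattern as a single function; `migrate_lv` and its `DRY_RUN` switch are inventions for illustration, and dry-run mode only echoes the `lvconvert` commands so the logic can be sanity-checked without real block devices:

```bash
#!/bin/bash
# migrate_lv LV "OLD_PVS" NEW_PVS...: mirror LV onto NEW_PVS, wait for the
# sync to finish, then drop the leg on OLD_PVS. With DRY_RUN=1 the lvconvert
# commands are printed instead of executed.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

migrate_lv() {
  local lv="$1" old_pvs="$2"; shift 2
  run lvconvert -y -m 1 "$lv" "$@"      # grow a raid1 leg on the new PVs
  if [ "${DRY_RUN:-0}" != 1 ]; then
    # block until Cpy%Sync reports 100.00 for this LV
    until lvs --noheadings -o copy_percent "$lv" | grep -q '100.00'; do sleep 2; done
  fi
  run lvconvert -y -m 0 "$lv" $old_pvs  # drop the original leg (unquoted: word-split on purpose)
}

# Dry run of the data LV migration from Goal 1:
DRY_RUN=1 migrate_lv /dev/vgdata/data "/dev/sdb /dev/sdc /dev/sdd" /dev/sde /dev/sdf /dev/sdg
```

`lvconvert -y` skips the interactive `[y/n]` prompts seen in the transcript, which is what makes the sequence scriptable in the first place.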
---
## Goal 2
Repeat the migration with data already present on both LVs, and observe how `lvconvert` places the mirror legs once the new PVs run out of free extents.
```bash!
# 1. Simulate data occupying 60% of the data LV's total space
$ dd if=/dev/zero of=/root/data/date-15g bs=1024M count=15 status=progress
16106127360 bytes (16 GB, 15 GiB) copied, 313 s, 51.5 MB/s
15+0 records in
15+0 records out
16106127360 bytes (16 GB, 15 GiB) copied, 312.856 s, 51.5 MB/s
# 2. Simulate data occupying about 75% of the share LV's total space (3G of 4G)
$ dd if=/dev/zero of=/root/share/date-3g bs=1024M count=3 status=progress
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 68 s, 47.5 MB/s
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 67.861 s, 47.5 MB/s
# 3. Check that usage matches expectations
$ du -sh ~/{data,share}
15G /root/data
3.0G /root/share
# 4. Add the new PVs to the VG
$ vgextend vgdata /dev/sde /dev/sdf /dev/sdg
Volume group "vgdata" successfully extended
# 5. Mirror the data LV onto the new PVs
$ lvconvert -m 1 /dev/vgdata/data /dev/sde /dev/sdf /dev/sdg
Are you sure you want to convert linear LV vgdata/data to raid1 with 2 images enhancing resilience? [y/n]: y
Logical volume vgdata/data successfully converted.
# 6. Check whether the new PVs' free PEs are exhausted
$ pvs /dev/{sde,sdf,sdg}
PV VG Fmt Attr PSize PFree
/dev/sde vgdata lvm2 a-- 10.00g 0
/dev/sdf vgdata lvm2 a-- 10.00g 0
/dev/sdg vgdata lvm2 a-- 10.00g 4.98g
# 7. Mirror the share LV
$ lvconvert -m 1 /dev/vgdata/share /dev/sde /dev/sdf /dev/sdg
Are you sure you want to convert linear LV vgdata/share to raid1 with 2 images enhancing resilience? [y/n]: y
Logical volume vgdata/share successfully converted.
# 8. Check the LV devices
$ lvs -o lv_name,lv_size,devices
LV LSize Devices
root 99.99g /dev/sda2(0)
data 25.00g data_rimage_0(0),data_rimage_1(0)
share 4.00g share_rimage_0(0),share_rimage_1(0)
# 9. Check which physical disk share_rimage actually landed on
### Because sde and sdf have no free PEs left, share's mirror leg can only sync to sdg
$ lsblk
...
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdd 8:48 0 10G 0 disk
├─vgdata-data_rmeta_0 254:3 0 4M 0 lvm
│ └─vgdata-data 254:1 0 25G 0 lvm /root/data
├─vgdata-data_rimage_0 254:4 0 25G 0 lvm
│ └─vgdata-data 254:1 0 25G 0 lvm /root/data
├─vgdata-share_rmeta_0 254:7 0 4M 0 lvm
│ └─vgdata-share 254:2 0 4G 0 lvm /root/share
└─vgdata-share_rimage_0 254:8 0 4G 0 lvm
└─vgdata-share 254:2 0 4G 0 lvm /root/share
...
sdg 8:96 0 10G 0 disk
├─vgdata-data_rimage_1 254:6 0 25G 0 lvm
│ └─vgdata-data 254:1 0 25G 0 lvm /root/data
├─vgdata-share_rmeta_1 254:9 0 4M 0 lvm
│ └─vgdata-share 254:2 0 4G 0 lvm /root/share
└─vgdata-share_rimage_1 254:10 0 4G 0 lvm
└─vgdata-share 254:2 0 4G 0 lvm /root/share
# 10. Confirm the sync is complete
$ lvs -o name,copy_percent
LV Cpy%Sync
root
data 100.00
share 100.00
# 11. Split the mirrors
$ lvconvert -m 0 /dev/vgdata/share /dev/sdd
Are you sure you want to convert raid1 LV vgdata/share to type linear losing all resilience? [y/n]: y
Logical volume vgdata/share successfully converted.
$ lvconvert -m 0 /dev/vgdata/data /dev/sdb /dev/sdc /dev/sdd
Are you sure you want to convert raid1 LV vgdata/data to type linear losing all resilience? [y/n]: y
Logical volume vgdata/data successfully converted.
# 12. Verify the LVs have moved to the new devices
$ lvs -o lv_name,lv_size,devices
LV LSize Devices
root 99.99g /dev/sda2(0)
data 25.00g /dev/sde(1)
data 25.00g /dev/sdf(0)
data 25.00g /dev/sdg(0)
share 4.00g /dev/sdg(1284)
```
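Step 6 above shows why the second `lvconvert` could not touch sde or sdf: the data mirror had already consumed their free PEs, so the share leg could only land on sdg, and the first mirror was never overwritten. That remaining room can be pre-checked before converting. A small sketch, assuming the column layout of `pvs --noheadings -o pv_name,pv_free --units g`; `free_ok` is a made-up helper, and the `sample` string mirrors the `pvs` output from step 6:

```bash
#!/bin/bash
# free_ok NEED_G TABLE: succeed if the PVs listed in TABLE have at least
# NEED_G gigabytes of free extents in total.
free_ok() {
  local need_g="$1" table="$2"
  echo "$table" | awk -v need="$need_g" '{ gsub(/g$/, "", $2); sum += $2 }
                                         END { exit !(sum >= need) }'
}

# Free space as reported after mirroring the data LV (see step 6):
sample="  /dev/sde 0
  /dev/sdf 0
  /dev/sdg 4.98g"

free_ok 4 "$sample"  && echo "room for the 4G share mirror leg"
free_ok 25 "$sample" || echo "no room for another 25G leg"
```

Note this only checks totals; LVM still needs the leg's extents plus a small rmeta area to fit on allocatable PVs, so a passing check is necessary but not sufficient.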
## References
[In what case(s) will `--type mirror` continue to be a good choice / is not deprecated?](https://unix.stackexchange.com/questions/697364/in-what-cases-will-type-mirror-continue-to-be-a-good-choice-is-not-depre)