Tags: raid, lvm, pv, storage stack
In this article, we'll focus on RAID.
RAID ("Redundant Array of Inexpensive Disks" or "Redundant Array of Independent Disks") is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both.
LVM, or Logical Volume Management, is a storage device management technology that gives users the power to pool and abstract the physical layout of component storage devices for easier and more flexible administration. Utilizing the device mapper Linux kernel framework, the current iteration, LVM2, can be used to gather existing storage devices into groups and allocate logical units from the combined space as needed.
In short: LVM abstracts the physical layout so storage devices are easier to manage. It builds on the Linux kernel's device mapper framework (see: https://elixir.bootlin.com/linux/latest/source/include/linux/device-mapper.h), gathering storage devices into groups and allocating logical units from them as needed.
pv...
Description: Physical block devices or other disk-like devices (for example, other devices created by device mapper, like RAID arrays) are used by LVM as the raw building material for higher levels of abstraction. Physical volumes are regular storage devices. LVM writes a header to the device to allocate it for management.
pv stands for physical volume. A physical volume maps to a partition on a disk, or a whole disk, and LVM writes a header to the device to allocate it for management.
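As a minimal sketch, assuming a spare partition /dev/sdb1 (a hypothetical device), initializing and inspecting a PV looks like:
$> pvcreate /dev/sdb1     # write the LVM header onto the device
$> pvs                    # list physical volumes and their VG membership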
vg...
Description: LVM combines physical volumes into storage pools known as volume groups. Volume groups abstract the characteristics of the underlying devices and function as a unified logical device with the combined storage capacity of the component physical volumes.
In short: a bunch of physical volumes make up a volume group (a storage pool). The volume group abstracts the devices below it and makes it easy to allocate LVs above it.
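Continuing the sketch, pooling two hypothetical PVs /dev/sdb1 and /dev/sdc1 into one VG:
$> vgcreate vg1 /dev/sdb1 /dev/sdc1   # pool the PVs into a volume group
$> vgs                                # show the VG's size and free space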
lv...
(generic LVM utilities might begin with lvm...)
Description: A volume group can be sliced up into any number of logical volumes. Logical volumes are functionally equivalent to partitions on a physical disk, but with much more flexibility. Logical volumes are the primary component that users and applications will interact with.
In short: once you have a storage pool (volume group), you slice it up into logical volumes, which you can think of as disk partitions (conceptually, anyway). Applications mostly interact with LVs.
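Carving a hypothetical LV out of the vg1 pool from the sketch above:
$> lvcreate -L 10G -n volume_1 vg1    # slice a 10G LV out of the pool
$> lvs                                # list logical volumes
$> mkfs.ext4 /dev/vg1/volume_1        # an LV is used like any other block device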
Volumes in a group are divided into extents: physical volumes into physical extents (PE), logical volumes into logical extents (LE). Within a volume group the extent size is uniform. LVM keeps a mapping from logical extents to physical extents, and the mapped physical extents don't have to be contiguous.
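To see the extent size and the LE-to-PE mapping (vg1 being the hypothetical VG from the sketch above):
$> vgdisplay vg1 | grep "PE Size"     # extent size, uniform within the VG
$> pvdisplay -m                       # which physical extents back which LV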
$> lsblk
...
sata4 8:48 1 465.8G 0 disk
├─sata4p1 8:49 1 2.4G 0 part
│ └─md0 9:0 0 2.4G 0 raid1 /
├─sata4p2 8:50 1 2G 0 part
│ └─md1 9:1 0 2G 0 raid1 [SWAP]
└─sata4p5 8:53 1 461.2G 0 part
└─md3 9:3 0 922.3G 0 raid5
├─vg1-syno_vg_reserved_area 251:2 0 12M 0 lvm
├─vg1-volume_2 251:3 0 10G 0 lvm
│ └─cachedev_1 251:4 0 10G 0 dm /volume2
├─vg1-volume_3 251:5 0 10G 0 lvm
│ └─cachedev_2 251:6 0 10G 0 dm /volume3
└─vg1-volume_4 251:7 0 10G 0 lvm
└─cachedev_3 251:8 0 10G 0 dm /volume4
SHR
Basic
JBOD
RAID 1
RAID 0
RAID 5
RAID 6
RAID 10
RAID F1
The special ones are SHR and RAID F1, two Synology-specific RAID types.

SHR (Synology Hybrid RAID), as the name suggests, is a hybrid RAID. Take RAID 5: anyone familiar with RAID knows that Array size = Min Disk Size * (N-1). So what happens when one disk in your array is way smaller than the rest? You waste a pile of space. Synology's answer: use the leftover space for RAID as well. In the worst case only two disks have space left over, and to keep everything fault-tolerant, those two run RAID 1.

RAID F1, as the name suggests, is for Flash (based on RAID 5). Anyone who knows the physics of SSDs knows they have a life span tied to how many times they can be erased; look up PE (program/erase) cycles. So if your RAID is all SSDs with similar PE counts, they may all wear out around the same time, and a RAID where every member dies together is pointless; at the very least they shouldn't all go at once. So RAID F1 picks one unlucky disk and writes an extra copy of parity to it. Since every write must also write parity (see rcw vs. rmw), that disk should, in theory, wear out first.
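A quick worked example with hypothetical disk sizes of 1 TB, 1 TB, 2 TB, 2 TB: plain RAID 5 yields 1 TB * (4 - 1) = 3 TB, wasting 1 TB on each 2 TB disk. SHR instead runs RAID 5 across a 1 TB slice of all four disks (3 TB usable) plus RAID 1 across the leftover 1 TB on the two 2 TB disks (1 TB usable), for 4 TB total, with every byte still fault-tolerant.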
Also known as: rebuild, resync, data scrubbing, raid scrubbing
see: linux-4.4.x/Documentation/device-mapper/dm-raid.txt
see: linux-4.4.x/drivers/md/dm-raid.c
$> cat /proc/mdstat
$> echo xxx > /sys/block/mdX/md/sync_action
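The accepted sync_action values are documented in the kernel's md documentation; for example, on the hypothetical array md3:
$> cat /sys/block/md3/md/sync_action              # current action, usually "idle"
$> echo check > /sys/block/md3/md/sync_action     # scrub: read everything, count mismatches
$> echo repair > /sys/block/md3/md/sync_action    # scrub and rewrite bad parity/copies
$> echo idle > /sys/block/md3/md/sync_action      # interrupt the running action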
Original parity: P = A ^ B ^ C
Writing A → A’ means we have to write a new parity P’ as well.
rmw (read-modify-write): since B ^ C = P ^ A, we have P’ = A’ ^ B ^ C = A’ ^ P ^ A, so read the original P and A, then compute P’ = A’ ^ P ^ A.
rcw (reconstruct-write): read the original B and C, then compute P’ = A’ ^ B ^ C directly.
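A throwaway shell check, with arbitrary byte values, that both paths produce the same parity:
$> A=0xA5; B=0x3C; C=0x5A; A2=0xFF   # A2 plays the role of A’
$> P=$((A ^ B ^ C))                  # original parity
$> P_rmw=$((A2 ^ P ^ A))             # rmw: needs old A and old P
$> P_rcw=$((A2 ^ B ^ C))             # rcw: needs the other data blocks
$> echo $P_rmw $P_rcw                # prints the same number twice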
drivers/md/raid5.c
see: https://elixir.bootlin.com/linux/latest/source/drivers/md/raid5.h#475
stripe size = page size = 4KB (x86)
Common md operations (mdadm sketches for each follow this list):
cat /proc/mdstat
Create a new array
Stop md
Assemble md
Remove component in md
Add component to md
Show detail
Examine
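Hedged one-liners for each operation above, assuming a hypothetical array /dev/md3 built from /dev/sda5, /dev/sdb5, /dev/sdc5:
$> cat /proc/mdstat                                    # status of all arrays
$> mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/sda5 /dev/sdb5 /dev/sdc5
$> mdadm --stop /dev/md3
$> mdadm --assemble /dev/md3 /dev/sda5 /dev/sdb5 /dev/sdc5
$> mdadm /dev/md3 --fail /dev/sda5 --remove /dev/sda5  # a component must be failed before removal
$> mdadm /dev/md3 --add /dev/sda5
$> mdadm --detail /dev/md3                             # detail: asks the md device
$> mdadm --examine /dev/sda5                           # examine: reads a component's superblock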
Info about md and drive lives under /sys/block/mdX/:
slaves: all the disks in the array
/sys/block/md3/md/array_size: literally the array size
/sys/block/md3/md/layout: see https://elixir.bootlin.com/linux/latest/source/drivers/md/raid5.h#695
/sys/block/md3/md/level: RAID level
/sys/block/mdX/md/dev-X/: per-component device attributes (state, errors, ...)
/sys/block/mdX/md/bitmap/: write-intent bitmap attributes
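For example, on the md3 from the lsblk output above:
$> ls /sys/block/md3/slaves        # the component devices, e.g. sata4p5
$> cat /sys/block/md3/md/level     # prints "raid5" for that array
$> cat /sys/block/md3/md/layout    # numeric layout (parity placement) for raid5/6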
Features:
RAID1
Linear (JBOD)
RAID5
RAID6
/linux-4.4.x/-/blob/master/Documentation/device-mapper/dm-raid.txt