# Spectrum Scale Workshop
We will build a two-node Spectrum Scale cluster and enable the NFS, SMB, and WebGUI services.
[Installation resources](https://ibm.box.com/v/202109ESUN-IBM-SS)
---
# **:computer: Example environment architecture**

---
# 0. Spectrum Scale
## Terminology
- Cluster
- Node
- Quorum
- PagePool
- ...
- Filesystem
- Network Shared Disk(NSD)
- Metadata
- Pool
- ...

# 1. Spectrum Scale environment preparation
## System resources
The VM environment below is built with VirtualBox 6.1:
* CPU: 2
* MEM: 4GB
* OS disk: 30GB
* Network adapters:
  * NIC1 - service: 192.168.56.x
  * NIC2 - intra-comm: 10.10.10.x
* Shared disks: 2 × 5 GB for metadata + 2 × 5 GB for data only
This workshop uses **Spectrum Scale 5.1**; for production requirements, refer to:
* [Hardware requirements](https://www.ibm.com/docs/en/spectrum-scale/5.1.0?topic=gpfs-hardware-requirements#linhard)
* [Software requirements](https://www.ibm.com/docs/en/spectrum-scale/5.1.0?topic=gpfs-software-requirements)
#### VirtualBox Guest Additions installation:
* `yum groupinstall "Development Tools"`
* `yum install kernel-devel elfutils-libelf-devel`
## Install the operating system (Minimal install)
- RHEL 8.2 x86_64 with Minimal install
- Environment setup
- [ ] Configure the OS repository
```bash=
mount /dev/sr0 /mnt
cat > /etc/yum.repos.d/media.repo << EOF
[AppStream]
name=Red Hat Enterprise Linux 8.2.0 - AppStream
mediaid=None
metadata_expire=-1
gpgcheck=0
cost=500
baseurl=file:///mnt/AppStream
[BaseOS]
name=Red Hat Enterprise Linux 8.2.0 - BaseOS
mediaid=None
metadata_expire=-1
gpgcheck=0
cost=500
baseurl=file:///mnt/BaseOS
EOF
```
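Before pointing yum at the repo file, it can help to confirm the mounted media actually contains both repo trees. A minimal sketch, using the paths from the `baseurl` lines above (on a machine without the ISO mounted at `/mnt`, it simply reports them as missing):

```shell
# Sketch: check that both repo trees exist on the mounted media.
# Paths match the baseurl lines in /etc/yum.repos.d/media.repo.
report=""
for dir in /mnt/AppStream /mnt/BaseOS; do
  if [ -d "$dir" ]; then
    report="$report
found: $dir"
  else
    report="$report
missing: $dir (is the install media mounted at /mnt?)"
  fi
done
echo "$report"
```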
- [ ] Install the required RPMs
- `# yum install kernel-devel kernel-headers yum-utils cpp gcc gcc-c++ binutils ethtool elfutils elfutils-devel glibc-devel nfs-utils rpcbind psmisc iputils make python3 python3-distro numactl net-tools`
- [ ] Disable SELinux
- `# sed -i s/enforcing/disabled/ /etc/selinux/config`
- `# systemctl reboot`
Check with the `getenforce` command; the output must be `Disabled`
- `# getenforce`
- [ ] Disable the firewall
- `# systemctl stop firewalld`
- `# systemctl mask firewalld`
- [ ] Configure /etc/hosts
- Define both GPFS servers to match your actual environment
```bash=
# Format: IP <FQDN> <alias>
# boot ip
192.168.56.119 gpfs1.mars.lab gpfs1
192.168.56.120 gpfs2.mars.lab gpfs2
# gpfs intra-comm
10.10.10.1 gpfs1-comm.mars.lab gpfs1-comm
10.10.10.2 gpfs2-comm.mars.lab gpfs2-comm
# service IP - not used yet; predefine it yourself now
192.168.56.121 vip1.mars.lab vip1
```
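After editing /etc/hosts, a quick loop can confirm every name resolves before any cluster tooling is run. This is only a sketch: `localhost` is substituted here so the loop runs on any machine; on the lab VMs, use the workshop hostnames instead.

```shell
# Sketch: verify each cluster hostname resolves via getent.
# On the lab VMs, set: hosts="gpfs1 gpfs2 gpfs1-comm gpfs2-comm"
hosts="localhost"
missing=""
for h in $hosts; do
  if getent hosts "$h" > /dev/null; then
    echo "OK: $h"
  else
    missing="$missing $h"
  fi
done
[ -z "$missing" ] && echo "all names resolve" || echo "unresolved:$missing"
```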
- [ ] Generate an SSH key
```bash=
ssh-keygen -t rsa -N "" -m pem -f /root/.ssh/id_rsa
```
- [ ] Copy the SSH key to the local host and every other FQDN/alias, then verify passwordless login with ssh
* `# ssh-copy-id localhost`
* `# ssh localhost`
* `# ssh gpfs1.mars.lab`
- [ ] From either node, copy the generated SSH key to the other node
* `# scp ~/.ssh/* <other node>:$PWD/.ssh`
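The installation toolkit used later requires passwordless root SSH between all node names. The loop below sketches the check; with `DRY_RUN=echo` it only prints the ssh commands (nothing is contacted here), and the node names are the workshop's.

```shell
# Sketch: check passwordless SSH to every node name used later.
# BatchMode=yes makes ssh fail instead of prompting for a password.
# DRY_RUN=echo only prints the commands; set DRY_RUN="" on the lab VMs.
DRY_RUN=echo
nodes="gpfs1 gpfs2 gpfs1-comm gpfs2-comm"
out=$(for n in $nodes; do
  $DRY_RUN ssh -o BatchMode=yes "root@$n" hostname
done)
echo "$out"
```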
- [ ] Adjust the ssh config (optional)
```bash=
cat > ~/.ssh/config << EOF
Host *
GSSAPIAuthentication no
IdentityFile ~/.ssh/id_rsa
StrictHostKeyChecking no
User root
UserKnownHostsFile /dev/null
LogLevel ERROR
EOF
```
* Restart the sshd service after changing the config
- [ ] Disable IPv6 (optional)
# 2. Install Spectrum Scale
Download the installation package in advance, upload it to a node, then extract and run it:
`# tar -zxvf Scale_DAE_install_5.1.1.0_x86_64.tar.gz`
`# ./Spectrum_Scale_Data_Access-5.1.1.0-x86_64-Linux-install --silent`
The following steps need to be run on **only one** node.
- Installation Toolkit (deploys via Ansible)
`# cd /usr/lpp/mmfs/5.1.1.0/ansible-toolkit`
- Setup
* `# ./spectrumscale setup -s 10.10.10.1`
- Add Node
* `# ./spectrumscale node add gpfs1-comm -q -m -n -p`
* `# ./spectrumscale node add gpfs1-comm -g`
* `# ./spectrumscale node add gpfs2-comm -q -m -n -p`
* `# ./spectrumscale config gpfs -c mycluster`
- Callhome disable
* `# ./spectrumscale callhome disable`
- Add NSD
* `# ./spectrumscale nsd add -p gpfs1-comm -s gpfs2-comm -u`<font color=red> metadataOnly </font> `/dev/nvme0n1 -n metansd1`
* `# ./spectrumscale nsd add -s gpfs1-comm -p gpfs2-comm -u`<font color=red> metadataOnly </font> `/dev/nvme0n2 -n metansd2`
* `# ./spectrumscale nsd add -p gpfs1-comm -s gpfs2-comm -u` <font color=blue> dataOnly</font> `/dev/nvme0n3 -n datansd1`
* `# ./spectrumscale nsd add -s gpfs1-comm -p gpfs2-comm -u` <font color=blue> dataOnly</font> `/dev/nvme0n4 -n datansd2`
- Filesystem Layer & Pool :open_file_folder:
* `# ./spectrumscale nsd modify -fs gpfsvol -po system -fg 1 metansd1`
* `# ./spectrumscale nsd modify -fs gpfsvol -po system -fg 2 metansd2`
* `# ./spectrumscale nsd modify -fs gpfsvol -u dataOnly -po datapool datansd1`
* `# ./spectrumscale nsd modify -fs gpfsvol -u dataOnly -po datapool datansd2`
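The four `nsd add` calls above follow one pattern: alternating primary/secondary server, usage type, device, and NSD name. A sketch making that pattern explicit, using a hypothetical `nsd_add` helper; the commands are only printed here, never executed, and the devices/names are the workshop's.

```shell
# Sketch: generate (not run) the four nsd add commands from their pattern.
# nsd_add is a hypothetical helper: nsd_add <primary> <secondary> <usage> <device> <name>
nsd_add() {
  echo "./spectrumscale nsd add -p $1 -s $2 -u $3 $4 -n $5"
}
cmds=$(
  nsd_add gpfs1-comm gpfs2-comm metadataOnly /dev/nvme0n1 metansd1
  nsd_add gpfs2-comm gpfs1-comm metadataOnly /dev/nvme0n2 metansd2
  nsd_add gpfs1-comm gpfs2-comm dataOnly     /dev/nvme0n3 datansd1
  nsd_add gpfs2-comm gpfs1-comm dataOnly     /dev/nvme0n4 datansd2
)
echo "$cmds"
```

Note how the primary/secondary servers swap between each metadata/data pair, balancing NSD server duty across both nodes.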
- Protocol: NFS, SMB
* `# ./spectrumscale config protocols -f gpfsvol -m /gpfsvol`
* `# ./spectrumscale enable nfs`
* `# ./spectrumscale enable smb`
* `# ./spectrumscale config protocols -e 192.168.56.121`
- Show the current configuration and review the output
* `# ./spectrumscale node list`

- Install Precheck
* `# ./spectrumscale install --precheck`
- Install
* `# ./spectrumscale install`
- Deploy Protocol
* `# ./spectrumscale deploy --precheck`
* `# ./spectrumscale deploy`
- Manual installation
1. Configure GPFS repository
- `# /usr/lpp/mmfs/5.1.1.0/tools/repo/local-repo -r`
2. Install the GPFS packages:
- `# yum install gpfs.base gpfs.docs gpfs.msg.en* gpfs.compression gpfs.ext gpfs.gpl gpfs.gskit`
3. After installation, set the GPFS command path
```bash=
cat > /etc/profile.d/mmfs.sh << EOF
MANPATH=:/usr/lpp/mmfs/man
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/lpp/mmfs/bin:/root/bin
EOF
```
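A quick way to confirm the fragment works is to source it and check that the GPFS bin directory landed on PATH. The sketch below uses an append-style temp copy rather than the real `/etc/profile.d/mmfs.sh`:

```shell
# Sketch: source an append-style copy of the profile fragment and verify
# that the GPFS bin directory is now on PATH. The quoted 'EOF' prevents
# $PATH/$MANPATH from expanding at write time.
frag=$(mktemp)
cat > "$frag" << 'EOF'
MANPATH=$MANPATH:/usr/lpp/mmfs/man
PATH=$PATH:/usr/lpp/mmfs/bin
EOF
. "$frag"
case ":$PATH:" in
  *:/usr/lpp/mmfs/bin:*) ok=1; echo "GPFS bin dir is on PATH" ;;
  *)                     ok=0; echo "PATH not updated" ;;
esac
rm -f "$frag"
```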
4. Build the GPFS portability layer
- `# mmbuildgpl`
5. Create the cluster
- `# mmcrcluster -N "gpfs1:quorum-manager:" -C mysscluster`
- `# mmchlicense server --accept -N all`
- `# mmlscluster`
- Start up GPFS
- `# mmgetstate -a`
- `# mmstartup -a`
- `# mmgetstate -a`

6. Create the Network Shared Disks (NSD)
- `# mmcrnsd -F crnsd`
- `# mmlsnsd -X`

7. Create the filesystem
- `# mmcrfs gpfsvol -F crnsd -A yes -B 256K -T /gpfsvol`
- `# mmmount gpfsvol -a`
8. Configure parameters
    - Tiebreaker disks
    - `# mmchconfig tiebreakerDisks="metansd1"`
- cesSharedRoot
- `# mmshutdown -a`
- `# mmchconfig cesSharedRoot=/gpfsvol`
- `# mmstartup -a`
- Configure File Authentication
- `# mmuserauth service create --data-access-method 'file' --type 'USERDEFINED'`
- `# mmuserauth service list`

9. Enable Cluster Export Services: NFS, SMB
* `# mmchnode --ces-enable -N gpfs1-comm,gpfs2-comm`
* `# mmces service enable NFS`
* `# mmces service enable SMB`

* Configure the service IP
* `# mmces address add --ces-node gpfs1-comm --ces-ip 192.168.56.121`
* `# mmces address list`

# Log
GPFS logs startup, shutdown, mounts, configuration changes, and errors to:
* `/var/adm/ras/mmfs.log.latest` << current log
* `/var/adm/ras/mmfs.log.previous` << log from the previous startup
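Day to day, the current log can be scanned quickly for problems. A minimal sketch using the path listed above; on a machine without GPFS installed, it simply reports the log as absent:

```shell
# Sketch: count error/failure lines in the current GPFS log.
# On a machine without GPFS installed, the log will not exist.
LOG=/var/adm/ras/mmfs.log.latest
if [ -r "$LOG" ]; then
  result="error/fail lines: $(grep -ciE 'error|fail' "$LOG")"
else
  result="no readable log at $LOG"
fi
echo "$result"
```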
# :bulb: About Spectrum Scale commands
* Found under `/usr/lpp/mmfs/bin`
* All administrative commands begin with `mm`, in this form:
* mm{action}{object}
* Actions: startup, shutdown, add, ch (change), del (delete), etc.
* Objects: cluster, node, fs (filesystem), disk, NSD, etc.
Examples:
- `mmlscluster`
- `mmchconfig`
- `mmdelnsd`
- `mmaddnode`
* `-N` targets specific nodes
* `-a` means all nodes
# Spectrum Scale architecture

* The daemon, `mmfsd`, is a multi-threaded process that mainly handles:
* All I/O and buffer management
* Ensuring file consistency
* Caching for sequential reads
* Write-behind for asynchronous I/O
* etc...

## :books: Spectrum Scale (a.k.a. GPFS) references
* https://www.redbooks.ibm.com/redbooks/pdfs/sg248254.pdf
## :memo: Not covered today:
- Fine-grained parameter tuning
- Adding and removing NSDs
- Replication