<style>
html, body, .ui-content {
background-color: #333;
color: #ddd;
}
.markdown-body h1,
.markdown-body h2,
.markdown-body h3,
.markdown-body h4,
.markdown-body h5,
.markdown-body h6 {
color: #ddd;
}
.markdown-body h1,
.markdown-body h2 {
border-bottom-color: #ffffff69;
}
.markdown-body h1 .octicon-link,
.markdown-body h2 .octicon-link,
.markdown-body h3 .octicon-link,
.markdown-body h4 .octicon-link,
.markdown-body h5 .octicon-link,
.markdown-body h6 .octicon-link {
color: #fff;
}
.markdown-body img {
background-color: transparent;
}
.ui-toc-dropdown .nav>.active:focus>a, .ui-toc-dropdown .nav>.active:hover>a, .ui-toc-dropdown .nav>.active>a {
color: white;
border-left: 2px solid white;
}
.expand-toggle:hover,
.expand-toggle:focus,
.back-to-top:hover,
.back-to-top:focus,
.go-to-bottom:hover,
.go-to-bottom:focus {
color: white;
}
.ui-toc-dropdown {
background-color: #333;
}
.ui-toc-label.btn {
background-color: #191919;
color: white;
}
.ui-toc-dropdown .nav>li>a:focus,
.ui-toc-dropdown .nav>li>a:hover {
color: white;
border-left: 1px solid white;
}
.markdown-body blockquote {
color: #bcbcbc;
}
.markdown-body table tr {
background-color: #5f5f5f;
}
.markdown-body table tr:nth-child(2n) {
background-color: #4f4f4f;
}
.markdown-body code,
.markdown-body tt {
color: #eee;
background-color: rgba(230, 230, 230, 0.36);
}
a,
.open-files-container li.selected a {
color: #5EB7E0;
}
</style>
# SUSE SAP HAE
## SUSE SAP Registration Code
```shell=
SUSEConnect -r REGISTRATION_CODE -e EMAIL_ADDRESS
SUSEConnect -r xxxxx -e xxx@suse.com
```
> -r = REGISTRATION_CODE (e.g. xxxxx)
> -e = EMAIL_ADDRESS (e.g. xxx@suse.com)
## Sample VM Information
> NS ip : 10.12.40.10
> hostname1 : peter-ha1
> ip : 192.168.11.40 , VIP-A1: 192.168.11.43
> ip1 : 10.12.40.10 , VIP-B1: 10.12.40.13
> hostname2 : peter-ha2
> ip : 192.168.11.41 , VIP-A2: 192.168.11.44
> ip1 : 10.12.40.11 , VIP-B2: 10.12.40.14
> hostname3 : ha3
> ip : 192.168.11.42 , VIP-A3: 192.168.11.45
> ip1 : 10.12.40.12 , VIP-B3: 10.12.40.15
## DNS Information
> NameServer1: 10.12.0.20
> NameServer2: 8.8.8.8
## SUSE SAP 12SP5 OS Installation Steps
> Initial screen
>
> Modify the network interfaces
>
> Remove the internal-subnet NIC that is not needed during registration
> 
> Enter the data required by SCC and apply the patch update (the registration code is sensitive information, so no screenshot is provided)
> Extension and Module Selection: nothing needs to be checked, just click Next
> 
> The RDP option is enabled by default (it can be deselected)

> Add On Product: none need to be selected

> Suggested Partitioning: can be skipped since the lab hardware is limited

> Time Zone: select Taiwan

> Firewall: disable

> YaST Network Settings (leave the hostname empty on the second NIC)
> 
> Hostname/DNS settings
> NameServer1: 10.12.0.20
> NameServer2: 8.8.8.8
> 
## Stop the HAE-related services
```shell=
peter-ha1:~ # sudo systemctl stop corosync.service
Failed to stop corosync.service: Unit corosync.service not loaded.
peter-ha1:~ # sudo systemctl stop pacemaker.service
Failed to stop pacemaker.service: Unit pacemaker.service not loaded.
```
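> The "not loaded" messages above simply confirm that the cluster stack is not running yet. A quick sanity check (a minimal sketch; these are the standard unit names of the HA stack):
```shell=
# Expect "inactive" or "unknown" before the cluster has been initialized
systemctl is-active corosync.service pacemaker.service
systemctl is-enabled corosync.service pacemaker.service
```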
## Generate ha1's SSH key
> Command
> ```shell=
> ssh-keygen
> ```
```shell=
peter-ha1:~ # ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:aK7U2fvtklz+nrkvLBr6jY/9ME2cc9q18+//+ypwzi8 root@peter-ha1
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . . . |
| o S = o|
| + o . oo =o|
| . + ...Oo.oo.|
| . . o+BEoooo|
| . oo*=*OXB#|
+----[SHA256]-----+
```
## Copy ha1's SSH key to ha2
> Command
> ```shell=
> sudo ssh-copy-id root@ipaddress
> ```
```shell=
peter-ha1:~ # sudo ssh-copy-id root@192.168.11.41
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.11.41 (192.168.11.41)' can't be established.
ECDSA key fingerprint is SHA256:/lhd5cDl0rbKp1+szPEF+rU+S/Rt9Y0irbicTzTtzuc.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.11.41'"
and check to make sure that only the key(s) you wanted were added.
```
> Repeat the same steps in the other directions (see the sketch below):
> ha2 >> ha1, ha3
> ha3 >> ha1, ha2
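> A minimal sketch of the full key exchange, assuming root SSH login is allowed on every node and the addresses match the sample VM information above (run ssh-keygen on each node first):
```shell=
# Run on each of peter-ha1 / peter-ha2 / peter-ha3
for ip in 192.168.11.40 192.168.11.41 192.168.11.42; do
  sudo ssh-copy-id root@"$ip"   # ssh-copy-id skips keys that are already installed
done
```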
## Update /etc/hosts
> Add the following entries to /etc/hosts
> ```shell=
> 192.168.11.40 peter-ha1
> 192.168.11.41 peter-ha2
> 192.168.11.42 peter-ha3
> ```
```shell=
#
# hosts This file describes a number of hostname-to-address
# mappings for the TCP/IP subsystem. It is mostly
# used at boot time, when no name servers are running.
# On small systems, this file can be used instead of a
# "named" name server.
# Syntax:
#
# IP-Address Full-Qualified-Hostname Short-Hostname
#
127.0.0.1 localhost
# special IPv6 addresses
::1 localhost ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
192.168.11.40 peter-ha1
192.168.11.41 peter-ha2
192.168.11.42 peter-ha3
```
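> A quick check that the new entries resolve on every node (hostnames taken from the table above):
```shell=
for h in peter-ha1 peter-ha2 peter-ha3; do
  getent hosts "$h"
done
```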
## NTP Setup
> Open the NTP module in YaST
> 
> Select [Continue]
> 
> Select [Now and on Boot]
> 
> [Add] Server
> 
> NTP Server address
> ```shell=
> time.google.com
> ```
> 
> SLES 12SP4 NTP Command
> ```shell=
> ntpq -p
> ```
> ```shell=
> peter-ha1:~ # ntpq -p
>      remote           refid      st t when poll reach   delay   offset  jitter
> ==============================================================================
>  time1.google.co .STEP.          16 u   40   64    0    0.000   +0.000   0.000
> ```
## Install the HA pattern (must be done on all nodes, ha1-3)
```shell=
zypper install -t pattern ha_sles
```
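> To confirm the pattern and its core packages landed on each node (standard zypper/rpm queries):
```shell=
zypper search -i -t pattern ha_sles
rpm -q pacemaker corosync crmsh
```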
## ha-cluster-init (run on ha1 only)
> Command
> ```shell=
> ha-cluster-init
> ```
```shell=
peter-ha1:~ # ha-cluster-init
Generating SSH key for hacluster
Configuring csync2
Generating csync2 shared key (this may take a while)...done
csync2 checking files...done
Configure Corosync:
This will configure the cluster messaging layer. You will need
to specify a network address over which to communicate (default
is eth0's network, but you can use the network address of any
active interface).
IP or network address to bind to [192.168.11.40]
Multicast address [239.173.164.97]
Multicast port [5405]
Configure SBD:
If you have shared storage, for example a SAN or iSCSI target,
you can use it avoid split-brain scenarios by configuring SBD.
This requires a 1 MB partition, accessible to all nodes in the
cluster. The device path must be persistent and consistent
across all nodes in the cluster, so /dev/disk/by-id/* devices
are a good choice. Note that all data on the partition you
specify here will be destroyed.
Do you wish to use SBD (y/n)? n
WARNING: Not configuring SBD - STONITH will be disabled.
Hawk cluster interface is now running. To see cluster status, open:
https://192.168.11.40:7630/
Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
Waiting for cluster...............done
Loading initial cluster configuration
Configure Administration IP Address:
Optionally configure an administration virtual IP
address. The purpose of this IP address is to
provide a single IP that can be used to interact
with the cluster, rather than using the IP address
of any specific cluster node.
Do you wish to configure a virtual IP address (y/n)? y
Virtual IP []192.168.11.43
Configuring virtual IP (192.168.11.43)....done
Done (log saved to /var/log/crmsh/ha-cluster-bootstrap.log)
```
## Check HA status (ha1)
> Command
> ```shell=
> crm status
> ```
```shell=
peter-ha1:~ # crm status
Stack: corosync
Current DC: peter-ha1 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
Last updated: Wed Mar 1 14:25:47 2023
Last change: Wed Mar 1 14:23:39 2023 by root via cibadmin on peter-ha1
1 node configured
1 resource instance configured
Online: [ peter-ha1 ]
Full list of resources:
admin-ip (ocf::heartbeat:IPaddr2): Started peter-ha1
```
> The configured VIP is "192.168.11.43"
> 
> WEB UI: https://192.168.11.40:7630
> USER: hacluster
> Password: linux
> 
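> As the bootstrap output warns, the default 'linux' password for the hacluster user should be changed on every node:
> ```shell=
> passwd hacluster
> ```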
## HA Join (ha2-3): join these two nodes to the cluster
> Command
> ```shell=
> ha-cluster-join
> ```
> For the IP address of the existing node, enter the IP of the node that was already initialized (ha1): 192.168.11.40 (or 10.12.40.10)
```shell=
peter-ha2:~ # ha-cluster-join
Restarting firewall (tcp=30865 7630 21064 3121 2224 9929 5403 5560, udp=5404 9929 5405)
Join This Node to Cluster:
You will be asked for the IP address of an existing node, from which
configuration will be copied. If you have not already configured
passwordless ssh between nodes, you will be prompted for the root
password of the existing node.
IP address or hostname of existing node (e.g.: 192.168.1.1) []192.168.11.40
Generating SSH key for hacluster
Configuring SSH passwordless with hacluster@192.168.11.40
Configuring csync2...done
Merging known_hosts
Probing for new partitions...done
Restarting firewall (tcp=30865 7630 21064 3121 2224 9929 5403 5560, udp=5404 9929 5405)
Hawk cluster interface is now running. To see cluster status, open:
https://192.168.11.41:7630/
Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
Waiting for cluster.....done
Reloading cluster configuration...done
Done (log saved to /var/log/crmsh/ha-cluster-bootstrap.log)
```
> Verify that ha1-3 are all in a normal state
> ```shell=
> peter-ha1:~ # crm status
> Stack: corosync
> Current DC: peter-ha1 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
> Last updated: Wed Mar 1 14:42:45 2023
> Last change: Wed Mar 1 14:23:39 2023 by root via cibadmin on peter-ha1
> 3 nodes configured
> 1 resource instance configured
> Online: [ peter-ha1 peter-ha2 peter-ha3 ]
> Full list of resources:
> admin-ip (ocf::heartbeat:IPaddr2): Started peter-ha1
> ```
## Rename the VIP
```shell=
crm resource stop <resource-name>
crm resource stop admin-ip
```
> admin-ip is now stopped
```shell=
peter-ha1:~ # crm status
Stack: corosync
Current DC: peter-ha1 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
Last updated: Wed Mar 1 14:46:56 2023
Last change: Wed Mar 1 14:46:52 2023 by root via cibadmin on peter-ha1
3 nodes configured
1 resource instance configured (1 DISABLED)
Online: [ peter-ha1 peter-ha2 peter-ha3 ]
Full list of resources:
admin-ip (ocf::heartbeat:IPaddr2): Stopped (disabled)
```
> Rename command
> ```shell=
> crm configure rename admin-ip VIP-A1
> ```
> The resource has been renamed (admin-ip >> VIP-A1)
```shell=
peter-ha1:~ # crm configure rename admin-ip VIP-A1
ERROR: warning: unpack_config: Blind faith: not fencing unseen nodes
peter-ha1:~ # crm status
Stack: corosync
Current DC: peter-ha1 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
Last updated: Wed Mar 1 14:50:38 2023
Last change: Wed Mar 1 14:50:11 2023 by root via cibadmin on peter-ha1
3 nodes configured
1 resource instance configured (1 DISABLED)
Online: [ peter-ha1 peter-ha2 peter-ha3 ]
Full list of resources:
VIP-A1 (ocf::heartbeat:IPaddr2): Stopped (disabled)
```
> Start the disabled resource
```shell=
peter-ha1:~ # crm resource start VIP-A1
peter-ha1:~ # clear
peter-ha1:~ # crm status
Stack: corosync
Current DC: peter-ha1 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
Last updated: Wed Mar 1 14:53:05 2023
Last change: Wed Mar 1 14:52:59 2023 by root via cibadmin on peter-ha1
3 nodes configured
1 resource instance configured
Online: [ peter-ha1 peter-ha2 peter-ha3 ]
Full list of resources:
VIP-A1 (ocf::heartbeat:IPaddr2): Started peter-ha1
```
## Set the VIP failure-timeout
> Command
> ```shell=
> crm resource meta VIP-A1 set failure-timeout 120
> ```
```shell=
peter-ha1:~ # crm resource meta VIP-A1 set failure-timeout 120
Set 'VIP-A1' option: id=VIP-A1-meta_attributes-failure-timeout set=VIP-A1-meta_attributes name=failure-timeout=120
```
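> The new meta attribute can be verified in the CIB:
```shell=
crm configure show VIP-A1
```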
## Configure the second VIP
```shell=
crm configure primitive VIP-B1 ocf:heartbeat:IPaddr2 params ip=10.12.40.13 op monitor interval=10 timeout=20 meta target-role=Started failure-timeout=120
```
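> The groups created later also reference VIP-A2/VIP-B2 and VIP-A3/VIP-B3. Assuming they follow the same pattern as VIP-A1/VIP-B1, with the addresses from the sample VM information above, they can be created the same way:
```shell=
crm configure primitive VIP-A2 ocf:heartbeat:IPaddr2 params ip=192.168.11.44 op monitor interval=10 timeout=20 meta target-role=Started failure-timeout=120
crm configure primitive VIP-B2 ocf:heartbeat:IPaddr2 params ip=10.12.40.14 op monitor interval=10 timeout=20 meta target-role=Started failure-timeout=120
crm configure primitive VIP-A3 ocf:heartbeat:IPaddr2 params ip=192.168.11.45 op monitor interval=10 timeout=20 meta target-role=Started failure-timeout=120
crm configure primitive VIP-B3 ocf:heartbeat:IPaddr2 params ip=10.12.40.15 op monitor interval=10 timeout=20 meta target-role=Started failure-timeout=120
```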
## Move the VIP
> Command
> ```shell=
> crm resource migrate VIP-B1 peter-ha1 -f
> ```
```shell=
peter-ha1:~ # crm resource migrate VIP-B1 peter-ha1 -f
WARNING: This command 'migrate' is deprecated, please use 'move'
INFO: Move constraint created for VIP-B1 to peter-ha1
INFO: Use `crm resource clear VIP-B1` to remove this constraint
peter-ha1:~ # crm status
Stack: corosync
Current DC: peter-ha1 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
Last updated: Wed Mar 1 15:05:27 2023
Last change: Wed Mar 1 15:05:16 2023 by root via crm_resource on peter-ha1
3 nodes configured
2 resource instances configured
Online: [ peter-ha1 peter-ha2 peter-ha3 ]
Full list of resources:
VIP-A1 (ocf::heartbeat:IPaddr2): Started peter-ha1
VIP-B1 (ocf::heartbeat:IPaddr2): Started peter-ha1
peter-ha1:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 7e:d5:27:a1:51:3d brd ff:ff:ff:ff:ff:ff
inet 192.168.11.40/24 brd 192.168.11.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.11.43/24 brd 192.168.11.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::7cd5:27ff:fea1:513d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 6e:24:60:82:ed:93 brd ff:ff:ff:ff:ff:ff
inet 10.12.40.10/16 brd 10.12.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.12.40.13/16 brd 10.12.255.255 scope global secondary eth1
valid_lft forever preferred_lft forever
inet6 fe80::6c24:60ff:fe82:ed93/64 scope link
valid_lft forever preferred_lft forever
```
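> As the migrate output notes, the move leaves a location constraint behind. Once the resource is on the desired node, the constraint can be removed:
```shell=
crm resource clear VIP-B1
```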
## Set which VM a VIP prefers
```shell=
crm configure location cli-prefer-VIP-A1 VIP-A1 role=Started inf: peter-ha1
```
## Monitor the vsftpd service
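> If vsftpd is not installed yet (an assumption; it is not pulled in by the ha_sles pattern), install it on every node first:
```shell=
zypper in -y vsftpd
```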
```shell=
systemctl start vsftpd
systemctl enable vsftpd
```
> Create the FTP monitoring resources in the HA cluster
```shell=
crm configure primitive vsftp-T1 systemd:vsftpd op start timeout=100 interval=0 op stop timeout=100 interval=0 op monitor timeout=100 interval=60 meta target-role=Stopped
crm configure primitive vsftp-T2 systemd:vsftpd op start timeout=100 interval=0 op stop timeout=100 interval=0 op monitor timeout=100 interval=60 meta target-role=Stopped
crm configure primitive vsftp-T3 systemd:vsftpd op start timeout=100 interval=0 op stop timeout=100 interval=0 op monitor timeout=100 interval=60 meta target-role=Stopped
```
> Create the groups
```shell=
crm configure group Group-1 VIP-A1 vsftp-T1 VIP-B1 meta target-role=Stopped
crm configure group Group-2 VIP-A2 vsftp-T2 VIP-B2 meta target-role=Stopped
crm configure group Group-3 VIP-A3 vsftp-T3 VIP-B3 meta target-role=Stopped
```
> Result
```shell=
peter-ha1:~ # crm status
Stack: corosync
Current DC: peter-ha2 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
Last updated: Wed Mar 1 17:29:10 2023
Last change: Wed Mar 1 17:28:57 2023 by root via cibadmin on peter-ha1
3 nodes configured
9 resource instances configured (9 DISABLED)
Online: [ peter-ha1 peter-ha2 peter-ha3 ]
Full list of resources:
Resource Group: Group-1
VIP-A1 (ocf::heartbeat:IPaddr2): Stopped (disabled)
vsftp-T1 (systemd:vsftpd): Stopped (disabled)
VIP-B1 (ocf::heartbeat:IPaddr2): Stopped (disabled)
Resource Group: Group-2
VIP-A2 (ocf::heartbeat:IPaddr2): Stopped (disabled)
vsftp-T2 (systemd:vsftpd): Stopped (disabled)
VIP-B2 (ocf::heartbeat:IPaddr2): Stopped (disabled)
Resource Group: Group-3
VIP-A3 (ocf::heartbeat:IPaddr2): Stopped (disabled)
vsftp-T3 (systemd:vsftpd): Stopped (disabled)
VIP-B3 (ocf::heartbeat:IPaddr2): Stopped (disabled)
```
> Set the preferred location for each VIP group
```shell=
crm configure location cli-prefer-Group-1 Group-1 role=Started inf: peter-ha1
crm configure location cli-prefer-Group-2 Group-2 role=Started inf: peter-ha2
crm configure location cli-prefer-Group-3 Group-3 role=Started inf: peter-ha3
```
> Start the groups
> ```shell=
> crm resource start Group-*
> ```
```shell=
peter-ha1:~ # crm resource start Group-*
Do you want to override 'target-role' for child resource VIP-A1 (y/n)? y
Do you want to override 'target-role' for child resource vsftp-T1 (y/n)? y
Do you want to override 'target-role' for child resource VIP-B1 (y/n)? y
Do you want to override 'target-role' for child resource VIP-A2 (y/n)? y
Do you want to override 'target-role' for child resource vsftp-T2 (y/n)? y
Do you want to override 'target-role' for child resource VIP-B2 (y/n)? y
Do you want to override 'target-role' for child resource VIP-A3 (y/n)? y
Do you want to override 'target-role' for child resource vsftp-T3 (y/n)? y
Do you want to override 'target-role' for child resource VIP-B3 (y/n)? y
peter-ha1:~ # crm status
Stack: corosync
Current DC: peter-ha2 (version 1.1.24+20210811.f5abda0ee-3.30.3-1.1.24+20210811.f5abda0ee) - partition with quorum
Last updated: Wed Mar 1 17:33:56 2023
Last change: Wed Mar 1 17:33:48 2023 by root via cibadmin on peter-ha1
3 nodes configured
9 resource instances configured
Online: [ peter-ha1 peter-ha2 peter-ha3 ]
Full list of resources:
Resource Group: Group-1
VIP-A1 (ocf::heartbeat:IPaddr2): Started peter-ha1
vsftp-T1 (systemd:vsftpd): Started peter-ha1
VIP-B1 (ocf::heartbeat:IPaddr2): Started peter-ha1
Resource Group: Group-2
VIP-A2 (ocf::heartbeat:IPaddr2): Started peter-ha2
vsftp-T2 (systemd:vsftpd): Started peter-ha2
VIP-B2 (ocf::heartbeat:IPaddr2): Started peter-ha2
Resource Group: Group-3
VIP-A3 (ocf::heartbeat:IPaddr2): Started peter-ha3
vsftp-T3 (systemd:vsftpd): Started peter-ha3
VIP-B3 (ocf::heartbeat:IPaddr2): Started peter-ha3
```
## Test HAE operation
> The test uses an Nginx + NFS setup
> Nginx runs in Docker
> NFS server installation:
> ```shell=
> sudo zypper in yast2-nfs-server
> ```
```shell=
pr1:/home/rancher # sudo zypper in yast2-nfs-server
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following recommended package was automatically selected:
nfs-kernel-server
The following 3 NEW packages are going to be installed:
nfs-kernel-server yast2-nfs-common yast2-nfs-server
3 new packages to install.
Overall download size: 166.9 KiB. Already cached: 0 B. After the operation, additional 374.7 KiB will be used.
Continue? [y/n/v/...? shows all options] (y): y
Retrieving package nfs-kernel-server-2.1.1-150100.10.24.1.x86_64 (1/3), 119.6 KiB (277.6 KiB unpacked)
Retrieving package yast2-nfs-common-4.4.2-150400.1.9.noarch (2/3), 9.8 KiB ( 1.1 KiB unpacked)
Retrieving package yast2-nfs-server-4.4.2-150400.1.9.noarch (3/3), 37.5 KiB ( 96.1 KiB unpacked)
Checking for file conflicts: .................................................................................................................................................................................[done]
(1/3) Installing: nfs-kernel-server-2.1.1-150100.10.24.1.x86_64 ..............................................................................................................................................[done]
(2/3) Installing: yast2-nfs-common-4.4.2-150400.1.9.noarch ...................................................................................................................................................[done]
(3/3) Installing: yast2-nfs-server-4.4.2-150400.1.9.noarch ...................................................................................................................................................[done]
```
> In YaST you can find the NFS Server module; after entering it you will see the screen below

> Change the export options of the directory shown below from ro >> rw
> 
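> The export ends up in /etc/exports. A sketch of what the rw entry might look like (the export path /srv/nfs is an assumption; adjust to the directory chosen in YaST):
```shell=
# /etc/exports -- hypothetical export path
/srv/nfs   *(rw,root_squash,sync,no_subtree_check)
```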
> docker-compose file for Nginx (must be placed on ha1-ha3)
```yaml=
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /mnt:/usr/share/nginx/html
    restart: always
```
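> A sketch of bringing the stack up on each node, assuming the NFS export is mounted at /mnt (the path used in the volumes section above); the server address and export path are placeholders:
```shell=
# Mount the NFS export that holds index.html (server and path are assumptions)
mount -t nfs <nfs-server-ip>:/srv/nfs /mnt
# Start Nginx with the compose file and nginx.conf from the current directory
docker-compose up -d
```
> Note that the compose file above only publishes port 80; if the port 81/82 server blocks in nginx.conf below are reached from other hosts, those ports would presumably need to be published as well.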
> index.html (needed for ha1-ha3): place one in each node's subdirectory (ha1/, ha2/, ha3/) of the NFS-exported folder, matching the root paths in nginx.conf
```html=
<html>
  <head>
    <meta charset="utf-8">
    <title> ha1 ok!!!!!!!!</title>
  </head>
</html>
```
> nginx.conf
```nginx=
events {
    worker_connections 8192;
}
http {
    server {
        listen 80;
        server_name 192.168.11.40 192.168.11.43;
        charset utf-8;
        root /usr/share/nginx/html/ha1;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
    }
    server {
        listen 81;
        server_name 192.168.11.41 192.168.11.44;
        charset utf-8;
        root /usr/share/nginx/html/ha2;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
    }
    server {
        listen 82;
        server_name 192.168.11.42 192.168.11.45;
        charset utf-8;
        root /usr/share/nginx/html/ha3;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
    }
}
```
## Verification
> Test example: while ha2 is healthy, running curl from ha3 against ha2's VIP returns the data stored on NFS (shown below)
```shell=
watch -n 1 curl 192.168.11.44:81
```
> 
> Shutdown test: after ha2 is powered off, HAE moves ha2's VIP to ha3, and curl from ha3 against ha2's VIP still returns the data on NFS (shown below)
```shell=
watch -n 1 curl 192.168.11.44:81
```
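> Instead of powering a node off, the failover can also be simulated with standard crmsh node commands (resource and node names as defined above):
```shell=
# Put ha2 in standby: its group (VIP-A2 / vsftp-T2 / VIP-B2) should move to another node
crm node standby peter-ha2
crm status
# Bring ha2 back when the test is done
crm node online peter-ha2
```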
