# Deploy RMT Server on K3S and synchronize it with Disconnected SUMA
<style>
.indent-title-1{
margin-left: 1em;
}
.indent-title-2{
margin-left: 2em;
}
.indent-title-3{
margin-left: 3em;
}
</style>
## Preface
<div class="indent-title-1">
This article has two parts:
1. How to deploy an RMT Server on K3S
2. How to configure SUSE Manager to synchronize with the package repositories exported from the RMT Server

Click an entry in the table of contents below to jump to that section.
:::warning
:::spoiler {state="open"} 目錄
[TOC]
:::
</div>
## 1. Requirements
- RMT Server
  - CPU: 4 cores
  - Memory: 8 GB
  - Outbound Internet access
  - OS ISO: `SLE-15-SP4-Full-x86_64-QU3-Media1.iso`
- SUSE Manager Server already installed
  - Version: 4.3.12
## 2. Application Components
<div class="indent-title-1">
Each component of the RMT application is deployed in its own container. RMT consists of the following components:
### RMT server
<div class="indent-title-1">
Containerized version of the RMT application server with the ability to pass its configuration via Helm values. Storage is provided by a volume allocated on the Kubernetes cluster; adjust its size according to the number of repositories you need to mirror.
> ### What is RMT?
> RMT stands for "Repository Mirroring Tool". It allows enterprise customers to optimize the management of SUSE Linux Enterprise software updates and subscription entitlements. It establishes a proxy system for SUSE® Customer Center with repositories and registration targets. This helps you to centrally manage software updates within a firewall on a per-system basis, while maintaining your corporate security policies and regulatory compliance.

</div>
### MariaDB
<div class="indent-title-1">
The database back-end for RMT. RMT creates the required database and tables at start-up, therefore no specific post-installation task is required. If passwords are not specified in the values.yaml file, they are generated automatically.
</div>
### Nginx
<div class="indent-title-1">
A Web server configured for RMT routes. Having a properly configured Web server allows you to target your Ingress traffic (for RMT) to this Nginx service directly. You do not need to configure Ingress for RMT specific paths handling, as Nginx is configured to take care of this itself.
</div>
</div>
## 3. OS Setup
Install a minimal SLES 15 SP4 system, then configure it as follows:
```bash!
# 1. Install required packages
$ zypper mr -ea
$ zypper -n in iputils yast2-network sudo wget iptables bind-utils mkisofs yast2-ntp-client
$ sudo init 6
# 2. Allow the current user to run sudo without a password
$ curl -s https://raw.githubusercontent.com/braveantony/bash-script/main/init.sh | bash
# 3. Configure networking
$ sudo yast2 lan edit id=0 ip=192.168.11.29 netmask=255.255.255.0
$ sudo ip route add default via 192.168.11.254
$ echo 'default 192.168.11.254 - eth0' | sudo tee /etc/sysconfig/network/ifroute-eth0
$ sudo yast2 dns edit nameserver1=192.168.11.85
$ sudo hostnamectl set-hostname rmt-server
# 4. Add a 500 GB disk; it will hold the mirrored repository data
$ sudo fdisk /dev/sdb
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1048575999, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-1048575999, default 1048575999):
Created a new partition 1 of type 'Linux' and of size 500 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): 8e
Changed type of partition 'Linux' to 'Linux LVM'.
Command (m for help): p
Disk /dev/sdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd5f4ed82
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1048575999 1048573952 500G 8e Linux LVM
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo /sbin/pvcreate /dev/sdb1
$ sudo /sbin/vgcreate rmt-data-vg /dev/sdb1
$ sudo /sbin/lvcreate -l 100%FREE -n rmt-data rmt-data-vg
# 5. Check LVM Settings
$ sudo pvs
  PV         VG          Fmt  Attr PSize   PFree
  /dev/sda2  system      lvm2 a--   69.99g 4.99g
  /dev/sdb1  rmt-data-vg lvm2 a--  500.00g    0
$ sudo vgs
  VG          #PV #LV #SN Attr   VSize   VFree
  rmt-data-vg   1   1   0 wz--n- 500.00g    0
  system        1   2   0 wz--n-  69.99g 4.99g
$ sudo lvs
  LV       VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  rmt-data rmt-data-vg -wi-a----- 500.00g
  home     system      -wi-ao----  25.00g
  root     system      -wi-ao----  40.00g
# 6. Format the LV with an XFS filesystem
$ sudo /sbin/mkfs.xfs /dev/rmt-data-vg/rmt-data
# 7. Create the directory where the RMT Server will store its data
$ sudo mkdir -p /var/lib/rancher/k3s/storage
# 8. Configure the LV to be mounted at this directory automatically at boot
$ echo '/dev/rmt-data-vg/rmt-data /var/lib/rancher/k3s/storage xfs defaults 0 0' | sudo tee -a /etc/fstab && sudo mount -a
# 9. Verify the mount
$ lsblk /dev/sdb
NAME                        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb                           8:16   0  500G  0 disk
└─sdb1                        8:17   0  500G  0 part
  └─rmt--data--vg-rmt--data 254:2    0  500G  0 lvm  /var/lib/rancher/k3s/storage
```
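Before rebooting, the fstab entry added above can be sanity-checked. This is a minimal sketch (the awk expression is mine, not part of the original procedure): it confirms the entry has the expected six fields, mount point, and filesystem type.

```shell
# Validate the fstab line (fields: device, mount point, fstype, options, dump, pass).
entry='/dev/rmt-data-vg/rmt-data /var/lib/rancher/k3s/storage xfs defaults 0 0'
echo "$entry" | awk 'NF == 6 && $2 == "/var/lib/rancher/k3s/storage" && $3 == "xfs" { print "fstab entry looks valid" }'
```

In a real run, replace the literal `entry` with the line grepped out of `/etc/fstab`; a silent `sudo mount -a` is the final confirmation.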
## 4. Install K3S Server
```bash!
# 1. install k3s
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.0+k3s1 \
K3S_KUBECONFIG_MODE="644" K3S_TOKEN=SECRET sh -s - server --cluster-init
# 2. Install Helm
$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# 3. Setup kubeconfig
$ mkdir -p ~/.kube && \
  sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config && \
  sudo chmod 600 ~/.kube/config && \
  sudo chown $(id -u):$(id -g) ~/.kube/config
# 4. Check pods status in all namespace
$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-6799fbcd5-bqp26                   1/1     Running     0          61m
kube-system   helm-install-traefik-2m4xg                0/1     Completed   1          61m
kube-system   helm-install-traefik-crd-kjk2c            0/1     Completed   0          61m
kube-system   local-path-provisioner-6c86858495-ccggc   1/1     Running     0          61m
kube-system   metrics-server-54fd9b65b-h87mq            1/1     Running     0          61m
kube-system   svclb-traefik-031297b9-zbld6              2/2     Running     0          61m
kube-system   traefik-7d5f6474df-rfjn9                  1/1     Running     0          61m
```
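Right after the install, the pods above can take a minute or two to appear. A small generic retry helper makes the wait scriptable; this is a sketch of mine (the `wait_for` name is arbitrary, not part of k3s):

```shell
# Re-run a command every 5 seconds until it succeeds or the attempt budget is spent.
wait_for() {
  tries="$1"; shift
  while [ "$tries" -gt 0 ]; do
    if "$@"; then return 0; fi
    tries=$((tries - 1))
    [ "$tries" -gt 0 ] && sleep 5
  done
  return 1
}

wait_for 3 true && echo "ready"

# Typical use after the k3s install above:
#   wait_for 24 kubectl wait --for=condition=Ready node --all --timeout=10s
```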
## 5. Install RMT Server
```bash!
# 1. Generate values.yaml
## Replace username and password with your SCC organization credentials
$ cat << EOF > rmt-config.yaml
---
app:
  storage:
    class: local-path
  scc:
    username: "UXXXXXXX"
    password: "PASSXXXX"
    products_enable:
      - SLES/15.3/x86_64
      - sle-module-python2/15.3/x86_64
    products_disable:
      - sle-module-legacy/15.3/x86_64
      - sle-module-cap-tools/15.3/x86_64
db:
  storage:
    class: local-path
ingress:
  enabled: true
  hosts:
    - host: rmt-server.example.com
      paths:
        - path: "/"
          pathType: Prefix
  tls:
    - secretName: rmt-cert
      hosts:
        - rmt-server.example.com
EOF
# 2. check DNS A Record
$ dig @192.168.11.85 rmt-server.example.com +short
192.168.11.29
# 3. Install the chart
$ helm install rmtsle oci://registry.suse.com/suse/rmt-helm -f rmt-config.yaml
# 4. Check the deployment status
$ kubectl get pods,svc,ing,cronjob
NAME                                READY   STATUS    RESTARTS   AGE
pod/rmtsle-app-59fcc7687d-9rqzq     1/1     Running   0          2m4s
pod/rmtsle-db-756d9dfff5-mltgk      1/1     Running   0          2m4s
pod/rmtsle-front-7ccf9d5547-sc56h   1/1     Running   0          2m4s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes     ClusterIP   10.43.0.1       <none>        443/TCP    28m
service/rmtsle-app     ClusterIP   10.43.149.216   <none>        4224/TCP   2m5s
service/rmtsle-db      ClusterIP   10.43.92.212    <none>        3306/TCP   2m5s
service/rmtsle-front   ClusterIP   10.43.165.131   <none>        80/TCP     2m5s

NAME                               CLASS     HOSTS                    ADDRESS         PORTS     AGE
ingress.networking.k8s.io/rmtsle   traefik   rmt-server.example.com   192.168.11.29   80, 443   2m4s

NAME                                SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/rmtsle-app-mirror     30 2 * * *   False     0        <none>          2m4s
cronjob.batch/rmtsle-app-sync       * 1 * * *    False     0        <none>          2m4s
```
> By default, `rmt-cli sync` runs every hour, and `rmt-cli mirror` runs daily at 02:30.
## 6. Download Package Repositories
```bash!
# 1. Get the name of the RMT app pod
$ rmt_server=$(kubectl get pods --no-headers -l component=app -o custom-columns=Name:.metadata.name)
# 2. In the SUMA Web UI, identify the repos to download
## Example: mirroring every mandatory repository of the product SUSE Linux Enterprise Server for SAP Applications 12 SP5 x86_64
## In the SUSE Manager Web UI, go to Admin > Setup Wizard > Products > SUSE Linux Enterprise Server for SAP Applications 12 SP5 x86_64 > Show Product's channels (the blue four-bar button on the right); you will see the following output
Mandatory Channels
* SLE-12-SP5-SAP-Updates for x86_64 not synced
sle-12-sp5-sap-updates-x86_64
* SLE-HA12-SP5-Pool for x86_64 SAP not synced
sle-ha12-sp5-pool-x86_64-sap
* SLE-HA12-SP5-Updates for x86_64 SAP not synced
sle-ha12-sp5-updates-x86_64-sap
* SLE-Manager-Tools12-Pool for x86_64 SAP SP5 not synced
sle-manager-tools12-pool-x86_64-sap-sp5
* SLE-Manager-Tools12-Updates for x86_64 SAP SP5 not synced
sle-manager-tools12-updates-x86_64-sap-sp5
* SLE12-SP5-SAP-Pool for x86_64 not synced
sle12-sp5-sap-pool-x86_64
* SLES12-SP5-Pool for x86_64 SAP not synced
sles12-sp5-pool-x86_64-sap
* SLES12-SP5-Updates for x86_64 SAP not synced
sles12-sp5-updates-x86_64-sap
Optional Channels
SLE-12-SP5-SAP-Debuginfo-Updates for x86_64 not synced
sle-12-sp5-sap-debuginfo-updates-x86_64
... (remaining output omitted)
# 3. Back in the RMT Server terminal, look up the repository IDs
# required by SUSE Linux Enterprise Server for SAP Applications 12 SP5 x86_64
$ kubectl exec "$rmt_server" -c rmt-app -- rmt-cli repos list --csv --all | grep -iE "SLE-12-SP5-SAP-Updates|SLE-HA12-SP5-Pool|SLE-HA12-SP5-Updates|SLE-Manager-Tools12-Pool|SLE-Manager-Tools12-Updates|SLE12-SP5-SAP-Pool|SLES12-SP5-Pool|SLES12-SP5-Updates" | grep -vE "aarch64|ppc64le|s390x" | cut -d "," -f 1 | tr -s "\n" " "
3686 3691 3687 1734 1732 3690 3652 3647
## 4. Enable repos
$ kubectl exec "$rmt_server" -c rmt-app -- rmt-cli repos enable 3686 3691 3687 1734 1732 3690 3652 3647
Repository by ID 3686 successfully enabled.
Repository by ID 3691 successfully enabled.
Repository by ID 3687 successfully enabled.
Repository by ID 1734 successfully enabled.
Repository by ID 1732 successfully enabled.
Repository by ID 3690 successfully enabled.
Repository by ID 3652 successfully enabled.
Repository by ID 3647 successfully enabled.
## 5. Double Check
$ kubectl exec "$rmt_server" -c rmt-app -- rmt-cli repos list --csv
ID,Product,Description,Mandatory?,Mirror?,Last mirrored
3686,SLE-12-SP5-SAP-Updates,SLE-12-SP5-SAP-Updates for sle-12-x86_64,true,true,2024-05-13 06:06:04 UTC
3691,SLE-HA12-SP5-Pool,SLE-HA12-SP5-Pool for sle-12-x86_64,true,true,2024-05-13 06:05:56 UTC
3687,SLE-HA12-SP5-Updates,SLE-HA12-SP5-Updates for sle-12-x86_64,true,true,2024-05-13 06:05:42 UTC
1734,SLE-Manager-Tools12-Pool,SLE-Manager-Tools12-Pool for sle-12-x86_64,true,true,
1732,SLE-Manager-Tools12-Updates,SLE-Manager-Tools12-Updates for sle-12-x86_64,true,true,
3690,SLE12-SP5-SAP-Pool,SLE12-SP5-SAP-Pool for sle-12-x86_64,true,true,2024-05-13 06:06:24 UTC
3652,SLES12-SP5-Pool,SLES12-SP5-Pool for sle-12-x86_64,true,true,2024-05-13 06:05:33 UTC
3647,SLES12-SP5-Updates,SLES12-SP5-Updates for sle-12-x86_64,true,true,2024-05-13 06:05:11 UTC
## 6. Manually Synchronize database with SUSE Customer Center
$ kubectl create job --from=cronjob/rmtsle-app-sync sync-0513
$ kubectl logs -l job-name=sync-0513 -f
Executing: /usr/share/rmt/bin/rmt-cli sync
I, [2024-05-13T06:46:27.585181 #1] INFO -- : Downloading data from SCC
I, [2024-05-13T06:46:27.585291 #1] INFO -- : Updating products
I, [2024-05-13T06:48:58.596235 #1] INFO -- : Updating repositories
I, [2024-05-13T06:49:02.173838 #1] INFO -- : Updating subscriptions
## 7. Manually Mirror repositories
$ kubectl create job --from=cronjob/rmtsle-app-mirror mirror-sles12-sp5-sap
$ kubectl logs -l job-name=mirror-sles12-sp5-sap -f
### View the complete log
$ kubectl logs -l job-name=mirror-sles12-sp5-sap --tail=-1
```
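The grep/cut pipeline used above to turn channel names into repository IDs can be wrapped in a small reusable helper. This is a sketch (the function name and the sample CSV rows below are illustrative, not real rmt-cli output):

```shell
# Filter an `rmt-cli repos list --csv --all` listing on stdin: keep lines matching
# the product pattern, drop non-x86_64 architectures, print the ID column.
extract_repo_ids() {
  grep -iE "$1" | grep -vE "aarch64|ppc64le|s390x" | cut -d "," -f 1
}

# Demo on two fabricated CSV rows; only the x86_64 row's ID survives.
printf '%s\n' \
  '3690,SLE12-SP5-SAP-Pool,SLE12-SP5-SAP-Pool for sle-12-x86_64,true,true,' \
  '9999,SLE12-SP5-SAP-Pool,SLE12-SP5-SAP-Pool for sle-12-aarch64,true,true,' \
  | extract_repo_ids 'SLE12-SP5-SAP-Pool'
```

In a real run, pipe the `kubectl exec ... rmt-cli repos list --csv --all` output into the function and pass the resulting IDs to `rmt-cli repos enable`.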
## 7. Export Data
```bash!
# 1. Create an export folder
$ kubectl exec "$rmt_server" -c rmt-app -- mkdir /var/lib/rmt/export
# 2. Change the directory ownership so the in-container rmt user can write to it
$ kubectl exec "$rmt_server" -c rmt-app -- chown 499:486 /var/lib/rmt/export/
# 3. Export the SCC data
$ kubectl exec "$rmt_server" -c rmt-app -- rmt-cli export data /var/lib/rmt/export/
I, [2024-05-03T01:24:50.158261 #343] INFO -- : Exporting data from SCC to /var/lib/rmt/export
I, [2024-05-03T01:24:50.158339 #343] INFO -- : Exporting products
I, [2024-05-03T01:25:15.226819 #343] INFO -- : Exporting repositories
I, [2024-05-03T01:25:23.412012 #343] INFO -- : Exporting subscriptions
I, [2024-05-03T01:25:23.732233 #343] INFO -- : Exporting orders
# 4. Verify the export looks as expected
$ rmt_path=$(kubectl get pv $(kubectl get pvc rmtsle-app --no-headers -o custom-columns=volume:.spec.volumeName) --no-headers -o custom-columns=path:.spec.local.path)
$ sudo ls -l "$rmt_path"/export
total 15284
-rw-r--r-- 1 messagebus render 2 May 3 09:25 organizations_orders.json
-rw-r--r-- 1 messagebus render 3816974 May 3 09:24 organizations_products.json
-rw-r--r-- 1 messagebus render 10976002 May 3 09:25 organizations_products_unscoped.json
-rw-r--r-- 1 messagebus render 838455 May 3 09:25 organizations_repositories.json
-rw-r--r-- 1 messagebus render 11590 May 3 09:25 organizations_subscriptions.json
# 5. Export the enabled repositories settings and packages
$ kubectl exec "$rmt_server" -c rmt-app -- bash -c "rmt-cli export settings /var/lib/rmt/export/ && rmt-cli export repos /var/lib/rmt/export/"
# 6. Verify the export again
$ sudo ls -l "$rmt_path"/export
total 15288
-rw-r--r-- 1 messagebus render 2 May 3 09:25 organizations_orders.json
-rw-r--r-- 1 messagebus render 3816974 May 3 09:24 organizations_products.json
-rw-r--r-- 1 messagebus render 10976002 May 3 09:25 organizations_products_unscoped.json
-rw-r--r-- 1 messagebus render 838455 May 3 09:25 organizations_repositories.json
-rw-r--r-- 1 messagebus render 11590 May 3 09:25 organizations_subscriptions.json
-rw-r--r-- 1 messagebus render 2397 May 3 09:33 repos.json
drwxr-xr-x 1 messagebus render 34 May 3 09:33 suma
drwxr-xr-x 1 messagebus render 30 May 3 09:33 SUSE
$ sudo du -sh "$rmt_path"/export/
56G /var/lib/rancher/k3s/storage/pvc-0d5da7cf-0b4e-45c5-901f-2a48f1f3b830_default_rmtsle-app/export/
$ sudo mkdir "$rmt_path"/backup
$ kubectl exec "$rmt_server" -c rmt-app -- chown 499:486 /var/lib/rmt/backup
$ sudo mkisofs -joliet-long -J -R -o "$rmt_path"/backup/SLES-12-SP5-SAP.iso "$rmt_path"/export/
...
99.99% done, estimate finish Mon May 6 14:45:21 2024
Total translation table size: 0
Total rockridge attributes bytes: 0
Total directory bytes: 1189888
Path table size(bytes): 908
Max brk space used 1860000
28828581 extents written (56305 MB)
$ sudo ls -lh "$rmt_path"/backup/SLES-12-SP5-SAP.iso
-rw-r--r-- 1 root root 55G May 6 14:45 /var/lib/rancher/k3s/storage/pvc-0d5da7cf-0b4e-45c5-901f-2a48f1f3b830_default_rmtsle-app/backup/SLES-12-SP5-SAP.iso
# Copy the finished ISO file to the SUSE Manager server
```
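Before carrying the ISO over to the disconnected SUSE Manager host, it is worth recording a checksum so the transfer can be verified on the other side. A sketch, demonstrated here on a throwaway file standing in for `"$rmt_path"/backup/SLES-12-SP5-SAP.iso`:

```shell
# Create a demo file in place of the real ISO, record its SHA-256, then verify it
# the way the receiving host would after the copy.
tmp=$(mktemp -d)
printf 'demo payload\n' > "$tmp/SLES-12-SP5-SAP.iso"
( cd "$tmp" && sha256sum SLES-12-SP5-SAP.iso > SLES-12-SP5-SAP.iso.sha256 )
( cd "$tmp" && sha256sum -c SLES-12-SP5-SAP.iso.sha256 )   # prints "SLES-12-SP5-SAP.iso: OK"
```

Transfer both the ISO and the `.sha256` file, then run `sha256sum -c` on the SUSE Manager side before mounting.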
```bash!
# Disable RMT Server
$ kubectl scale deploy rmtsle-app rmtsle-db rmtsle-front --replicas=0
$ kubectl patch cronjobs rmtsle-app-mirror rmtsle-app-sync -p '{"spec" : {"suspend" : true }}'
# Enable RMT Server
$ kubectl scale deploy rmtsle-app rmtsle-db rmtsle-front --replicas=1
$ kubectl patch cronjobs rmtsle-app-mirror rmtsle-app-sync -p '{"spec" : {"suspend" : false }}'
```
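The pause/resume commands above can be bundled into two shell functions for day-to-day use. A sketch; the function names are mine, and the resource names assume the `rmtsle` release used in this article.

```shell
# Scale the RMT deployments to zero and suspend the cronjobs.
rmt_pause() {
  kubectl scale deploy rmtsle-app rmtsle-db rmtsle-front --replicas=0
  kubectl patch cronjobs rmtsle-app-mirror rmtsle-app-sync -p '{"spec":{"suspend":true}}'
}

# Bring everything back and re-enable the cronjobs.
rmt_resume() {
  kubectl scale deploy rmtsle-app rmtsle-db rmtsle-front --replicas=1
  kubectl patch cronjobs rmtsle-app-mirror rmtsle-app-sync -p '{"spec":{"suspend":false}}'
}
```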
# Set up SUSE Manager Synchronization with RMT Server
## 1. Disk Setup
```bash!
# 1. Add a new 200 GB disk
# 2. Create a partition of type Linux LVM
$ fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x226d5455
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-419430399, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-419430399, default 419430399):
Created a new partition 1 of type 'Linux' and of size 200 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): 8e
Changed type of partition 'Linux' to 'Linux LVM'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
# Set up LVM
$ pvcreate /dev/sdb1
$ vgcreate rmt-data-vg /dev/sdb1
$ lvcreate -l 100%FREE -n rmt-data rmt-data-vg
$ pvs
  PV         VG          Fmt  Attr PSize   PFree
  /dev/sda2  system      lvm2 a--    1.49t    0
  /dev/sdb1  rmt-data-vg lvm2 a--  200.00g    0
$ vgs
  VG          #PV #LV #SN Attr   VSize   VFree
  rmt-data-vg   1   1   0 wz--n- 200.00g    0
  system        1   6   0 wz--n-   1.49t    0
$ lvs
  LV            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  rmt-data      rmt-data-vg -wi-a----- 200.00g
  root          system      -wi-ao---- 100.00g
  srv           system      -wi-ao----  80.00g
  swap          system      -wi-ao----   2.00g
  var_cache     system      -wi-ao----  10.00g
  var_lib_pgsql system      -wi-ao----  60.00g
  var_spacewalk system      -wi-ao----   1.24t
$ mkfs.xfs /dev/rmt-data-vg/rmt-data
$ mkdir -p /var/lib/rmt-data
$ echo '/dev/rmt-data-vg/rmt-data /var/lib/rmt-data xfs defaults 0 0' | sudo tee -a /etc/fstab && sudo mount -a
$ lsblk /dev/sdb
NAME                        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb                           8:16   0  200G  0 disk
└─sdb1                        8:17   0  200G  0 part
  └─rmt--data--vg-rmt--data 254:6    0  200G  0 lvm  /var/lib/rmt-data
```
## 2. Mount the ISO
```bash!
$ mkdir /var/lib/rmt-data/{mirrorrepo,iso}
$ mount -o loop /var/lib/rmt-data/SLES-12-SP5-SAP.iso /var/lib/rmt-data/iso
$ cd /var/lib/rmt-data/ && \
ls -l iso/
total 15284
-rw-r--r-- 1 messagebus render 2 May 6 11:38 organizations_orders.json
-rw-r--r-- 1 messagebus render 3816974 May 6 11:38 organizations_products.json
-rw-r--r-- 1 messagebus render 10976002 May 6 11:38 organizations_products_unscoped.json
-rw-r--r-- 1 messagebus render 838455 May 6 11:38 organizations_repositories.json
-rw-r--r-- 1 messagebus render 11590 May 6 11:38 organizations_subscriptions.json
-rw-r--r-- 1 messagebus render 1423 May 6 13:10 repos.json
drwxr-xr-x 2 messagebus render 2048 May 6 11:43 suma
drwxr-xr-x 4 messagebus render 2048 May 6 12:08 SUSE
$ rsync -avzh --progress --log-file=/tmp/sync-1.log iso/ mirrorrepo/
...
sent 51.76G bytes received 427.19K bytes 43.26M bytes/sec
total size is 59.02G speedup is 1.14
$ chown -R root:root mirrorrepo/
$ ls -lh mirrorrepo/
total 15M
-rw-r--r-- 1 root root 2 May 6 11:38 organizations_orders.json
-rw-r--r-- 1 root root 3.7M May 6 11:38 organizations_products.json
-rw-r--r-- 1 root root 11M May 6 11:38 organizations_products_unscoped.json
-rw-r--r-- 1 root root 819K May 6 11:38 organizations_repositories.json
-rw-r--r-- 1 root root 12K May 6 11:38 organizations_subscriptions.json
-rw-r--r-- 1 root root 1.4K May 6 13:10 repos.json
drwxr-xr-x 2 root root 31 May 6 11:43 suma
drwxr-xr-x 4 root root 37 May 6 12:08 SUSE
```
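After the rsync, the copy can be re-verified content-by-content with `diff -r`, which exits 0 only when the two trees match. A sketch, demonstrated on throwaway directories standing in for `iso/` and `mirrorrepo/`:

```shell
# Build a tiny stand-in source tree, copy it, then compare the trees recursively.
src=$(mktemp -d)
dst=$(mktemp -d)
echo '{"demo": true}' > "$src/repos.json"
mkdir "$src/SUSE"
cp -a "$src/." "$dst/"
diff -r "$src" "$dst" && echo "trees match"
```

On the real data, `diff -r /var/lib/rmt-data/iso /var/lib/rmt-data/mirrorrepo` should print nothing and exit 0; the ownership change from `chown` does not affect the comparison.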
## 3. Edit the configuration file `/etc/rhn/rhn.conf`
```bash!
$ echo "server.susemanager.fromdir = /var/lib/rmt-data/mirrorrepo/" >> /etc/rhn/rhn.conf
```
## 4. Restart the Tomcat service
```bash!
$ systemctl restart tomcat
```
## 5. Refresh the local data
```bash!
$ mgr-sync -v -d 3 refresh
```
## 6. List Channels
```bash!
$ mgr-sync list channels
Available Channels:
Status:
- [I] - channel is installed
- [ ] - channel is not installed, but is available
- [U] - channel is unavailable
[ ] EL9-Pool for x86_64 RHEL and Liberty 9 Base [el9-pool-x86_64]
[ ] RHEL5-Pool for i386 RHEL5 Base i386 [rhel5-pool-i386]
[ ] RHEL5-Pool for x86_64 RHEL5 Base x86_64 [rhel5-pool-x86_64]
[ ] RHEL6-Pool for i386 RHEL6 Base i386 [rhel6-pool-i386]
[ ] RHEL6-Pool for x86_64 RHEL6 Base x86_64 [rhel6-pool-x86_64]
[ ] RHEL7-Pool for x86_64 RHEL7 Base x86_64 [rhel7-pool-x86_64]
[ ] RHEL8-Pool for x86_64 RHEL and Liberty 8 Base [rhel8-pool-x86_64]
[ ] SLE12-SP5-SAP-Pool for x86_64 SUSE Linux Enterprise Server for SAP Applications 12 SP5 x86_64 [sle12-sp5-sap-pool-x86_64]
[ ] SLES12-SP5-Pool for x86_64 SUSE Linux Enterprise Server 12 SP5 x86_64 [sles12-sp5-pool-x86_64]
[ ] ubuntu-16.04-pool for amd64 Ubuntu 16.04 [ubuntu-16.04-pool-amd64]
[ ] ubuntu-18.04-pool for amd64 Ubuntu 18.04 [ubuntu-18.04-pool-amd64]
```
## 7. Perform a synchronization
```bash!
# 1. Sync sle12-sp5-sap
$ mgr-sync add channel sle12-sp5-sap-pool-x86_64
Added 'sle12-sp5-sap-pool-x86_64' channel
Scheduling reposync for following channels:
- sle12-sp5-sap-pool-x86_64
# 2. Sync sles12-sp5-pool
$ mgr-sync add channel sles12-sp5-pool-x86_64
Added 'sles12-sp5-pool-x86_64' channel
Scheduling reposync for following channels:
- sles12-sp5-pool-x86_64
```
> Logs are written to the `/var/log/rhn/reposync` directory
### Troubleshooting
You may encounter the following error during a refresh:
```bash!
Refreshing Channel families [DONE]
Refreshing SUSE products [DONE]
Refreshing SUSE repositories [FAIL]
Error: <Fault -1: 'redstone.xmlrpc.XmlRpcFault: unhandled internal exception: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.redhat.rhn.domain.scc.SCCRepositoryTokenAuth#10229]'>
```
> If you see the error above, delete the stored credentials first:
```bash!
$ mgr-sync delete credentials
Credentials:
01) 9596e8825a (primary)
Enter credentials number (1-1): 1
Really delete credentials '9596e8825a'? (y/n): y
Successfully deleted credentials: 9596e8825a
```
> Then run the refresh again:
```bash!
$ mgr-sync -v -d 3 refresh
Refreshing Channel families [DONE]
Refreshing SUSE products [DONE]
Refreshing SUSE repositories [DONE]
Refreshing Subscriptions [DONE]
```
# References
- [Deploying RMT on top of the Kubernetes cluster - SUSE Docs](https://documentation.suse.com/sles/15-SP4/html/SLES-all/cha-rmt-installation.html#sec-rmt-deploy-kubernetes)
- [Disconnected Setup - SUSE Docs](https://documentation.suse.com/suma/4.3/en/suse-manager/administration/disconnected-setup.html)