# Manually installing OpenShift in IBM Tech Zone ephemeral environments
:::spoiler
Last Update: OCP 4.11 in Dec 2022
This guide is written and intended for use from Linux systems, but could easily be adapted for use by Windows or MacOS environments.
:::
Go here to request an environment
https://techzone.ibm.com/collection/on-premises-redhat-openshift-on-power-and-ibm-z-offerings
Do this to set up your VPN:
```
nmcli connection add type vpn connection.id ibm-ENV-ID vpn-type openconnect vpn.data gateway=asa003b.centers.ihost.com ipv4.dns-search ihost.com
nmcli connection up ibm-ENV-ID
```
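A quick way to confirm the tunnel actually came up before moving on (the connection name matches the `ibm-ENV-ID` used above; the bastion hostname is the example used later in this guide, so substitute your own):
```
nmcli connection show --active | grep ibm-ENV-ID
ping -c3 bastion.p999.cecc.ihost.com
```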
:::warning
Create the config files shown in the Prep Work section further below before starting the services with the commands that follow.
:::
:::spoiler
Let's save the id_rsa private and public keys we're given in the Project Kit page for the IBM Tech Zone environment we have reserved.
```bash=
$ vi ~/.ssh/id_rsa_ibmtechzone
<copy private key data from Project Kit web page>
$ vi ~/.ssh/id_rsa_ibmtechzone.pub
<copy public key data from Project Kit web page>
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa_ibmtechzone
```
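If `ssh-add` refuses the key with an "UNPROTECTED PRIVATE KEY FILE" warning, tighten the permissions first:
```
chmod 600 ~/.ssh/id_rsa_ibmtechzone
chmod 644 ~/.ssh/id_rsa_ibmtechzone.pub
```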
Now you can easily ssh to the various systems given to you in IBM Tech Zone for this reservation.
For our work here we put our misc config and info files in `~/ocp`, and the `~/ocp/ocp_install` is where we will have our OpenShift install files and folders.
You are free to use whatever scheme you're comfortable with.
:::
```bash
$ ssh cecuser@bastion.p999.cecc.ihost.com
mkdir -p ~/ocp/ocp_install
cd ~/ocp
```
Now create your dnsmasq.conf, haproxy.cfg, and grub.cfg files (samples are in the Prep Work section below) and put them here in `~/ocp`.
```
sudo dnf -y install dnsmasq haproxy httpd
sudo mkdir -p /var/lib/tftpboot
sudo cp ./dnsm_p999.conf /etc/dnsmasq.d/
sudo systemctl enable --now dnsmasq
sudo systemctl is-active dnsmasq
sudo grub2-mknetdir --net-directory=/var/lib/tftpboot --subdir=.
sudo wget --directory-prefix=/var/lib/tftpboot https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/latest/rhcos-installer-kernel-ppc64le
sudo wget --directory-prefix=/var/lib/tftpboot https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/latest/rhcos-installer-initramfs.ppc64le.img
sudo wget --directory-prefix=/var/www/html https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/latest/rhcos-metal.ppc64le.raw.gz
sudo wget --directory-prefix=/var/www/html https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/latest/rhcos-installer-rootfs.ppc64le.img
sudo cp ./p999_grub.cfg /var/lib/tftpboot/grub.cfg
sudo restorecon -vR /var/lib/tftpboot
sudo sed -i 's/^Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf
sudo systemctl enable --now httpd
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig
sudo -e /etc/haproxy/haproxy.cfg                      # either edit the config in place...
sudo cp ./p999_haproxy.cfg /etc/haproxy/haproxy.cfg   # ...or copy your prepared config over it
sudo setsebool -P haproxy_connect_any on
sudo systemctl enable --now haproxy
```
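A few optional sanity checks before moving on (a quick sketch; the ports and file names match the configuration used above):
```
sudo ss -lnup | grep ':69 '                                               # dnsmasq listening for TFTP
curl -sI http://localhost:8080/rhcos-metal.ppc64le.raw.gz | head -n1      # httpd serving the RHCOS images
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9000/healthz    # haproxy health endpoint
sudo systemctl is-active dnsmasq httpd haproxy
```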
:::spoiler
We need our install-config.yaml now.
Copy the sample from the Prep Work section below and modify it to your needs.
Keep a backup, as the file gets consumed during the install, in case you need to try again.
Put it in the `~/ocp/ocp_install` folder.
Now let's get our OCP CLI tools.
:::
```
cd ~/ocp/ocp_install
<vi or cp your template/saved install-config.yaml file to ~/ocp/ocp_install>
wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable/openshift-install-linux.tar.gz
wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable/openshift-client-linux.tar.gz
tar xvzf openshift-install-linux.tar.gz
tar xvzf openshift-client-linux.tar.gz
sudo cp oc /usr/local/bin
```
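A quick check that the tools were extracted correctly and match the release you expect (output varies with the mirror's current stable version):
```
./openshift-install version
oc version --client
```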
:::spoiler
Now let's use our OCP install-config.yaml file and create our manifests.
:::
```
$ sudo -i
# cd /home/cecuser/ocp/ocp_install
# ./openshift-install create manifests
```
:::spoiler
Edit the manifests/cluster-scheduler-02-config.yml Kubernetes manifest file to prevent Pods
from being scheduled on the control plane machines by setting mastersSchedulable to false.
:::
```
$ sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
```
It should look something like this below after you edit it.
:::danger
NOTE: This only applies to clusters with dedicated worker nodes.
Don't make the above change if your cluster is a minimal three-node setup (control plane nodes only).
:::
```
$ cat manifests/cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}
```
Now let's create the ignition configs.
```
# cp install-config.yaml install-config.yaml.backup
# ./openshift-install create ignition-configs
# cp *.ign /var/www/html/
# chmod 0644 /var/www/html/*
# restorecon -vR /var/www/html/
```
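Before booting any LPAR, it's worth confirming the ignition files are reachable over HTTP. The IP and port below are the bastion values used in the sample grub.cfg further down; adjust to yours:
```
curl -sI http://129.40.96.17:8080/bootstrap.ign | head -n1
curl -sI http://129.40.96.17:8080/master.ign | head -n1
curl -sI http://129.40.96.17:8080/worker.ign | head -n1
```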
:::danger
Warning: The certificates embedded in the ignition configs are temporary and only good for 24 hours, so you must complete your node installs within that window (the cluster rotates in its own longer-term certificates once it is up).
If you miss the window, regenerate your ignition files and start again from there to buy another 24 hours to install and set up your nodes.
:::
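If the 24-hour window does lapse, here's a rough sketch (as root, using the paths from above) of regenerating the ignition files from the backup we kept:
```
cd /home/cecuser/ocp/ocp_install
cp install-config.yaml.backup install-config.yaml
./openshift-install create manifests
# re-apply the mastersSchedulable edit here if your topology has dedicated workers
./openshift-install create ignition-configs
cp *.ign /var/www/html/
chmod 0644 /var/www/html/*.ign
restorecon -vR /var/www/html/
```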
:::spoiler
After you've set up/copied the SSH keys on the HMC, open six terminal
windows/tabs and SSH into the HMC from each one.
In terminal window/tab #1 we stay logged into the bastion server.
To boot the bootstrap server, we'll use terminal window/tab #2.
The `chsysstate` command on the HMC will boot the LPAR for us and leave it waiting at the SMS boot menu. We'll then open the console in text mode and walk through the SMS menus to boot from the network.
We're doing it manually this way so we can watch the entire process and have much more visibility into what is going on.
:::
```
$ ssh u234234@cecc-vhmc02.cecc.ihost.com
$ chsysstate -m Server-8408-E8E-SN21C490V -r lpar -o on -b sms -n \
p999-bootstr-55fa09ba-00000950
$ vtmenu
```
Select your system, then pick the corresponding number for the bootstrap LPAR.
You'll then have the text console open for that LPAR.
Make menu choices in this order: **5, 1, 4, 1, 1, 2, 1**
:::spoiler
Details on the SMS menu choices we're making:
Hit "5" to select boot options.
Enter "1" to select boot device.
Enter "4" for network.
Enter "1" for BOOTP.
Enter "1" to select the only network device.
Enter "2" for normal mode boot.
Enter "1" for yes, we're sure we want to exit.
:::
Go to terminal window/tab #3. We'll boot control plane 1 (master-0).
```
$ chsysstate -m Server-8408-E8E-SN21C490V -r lpar -o on -b sms -n \
p999-master--ff51b335-0000094a
$ vtmenu
# Make menu choices in this order: 5, 1, 4, 1, 1, 2, 1
```
Go to terminal window/tab #4. We'll boot control plane 2 (master-1).
```
$ chsysstate -m Server-8408-E8E-SN21C490V -r lpar -o on -b sms -n \
p999-master--ba90f79a-0000094d
$ vtmenu
# Make menu choices in this order: 5, 1, 4, 1, 1, 2, 1
```
Go to terminal window/tab #5. We'll boot control plane 3 (master-2).
```
$ chsysstate -m Server-8408-E8E-SN21C490V -r lpar -o on -b sms -n \
p999-master--f4c5cb21-0000094c
$ vtmenu
# Make menu choices in this order: 5, 1, 4, 1, 1, 2, 1
```
Go to terminal window/tab #6. We'll boot worker-0.
```
$ chsysstate -m Server-8408-E8E-SN21C490V -r lpar -o on -b sms -n \
p999-worker--4be803ee-0000094b
$ vtmenu
# Make menu choices in this order: 5, 1, 4, 1, 1, 2, 1
```
Go to terminal window/tab #7. We'll boot worker-1.
```
$ chsysstate -m Server-8408-E8E-SN21C490V -r lpar -o on -b sms -n \
p999-worker--88cca700-0000094e
$ vtmenu
# Make menu choices in this order: 5, 1, 4, 1, 1, 2, 1
```
Monitor the bootstrap process:
To view different installation details, specify *warn*, *debug*, or *error* instead of *info*.
```
$ ./openshift-install wait-for bootstrap-complete --log-level=info
```
When the bootstrap server is booted into CoreOS, SSH into the bootstrap server:
```
ssh core@bootstrap.p999.cecc.ihost.com
journalctl -b -f -u release-image.service -u bootkube.service
```
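Once `wait-for bootstrap-complete` reports that it is safe to remove the bootstrap resources, the bootstrap entries in haproxy are no longer needed. One hedged way to drop them, run on the bastion (the IP matches the sample haproxy.cfg further below; adjust to yours):
```
sudo sed -i '/server bootstrap 129.40.96.18/s/^/#/' /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy
```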
Watch the progress as OpenShift comes up.
On the bastion server:
```
$ export KUBECONFIG=/home/cecuser/ocp/ocp_install/auth/kubeconfig
$ oc whoami
$ oc get nodes
$ oc get csr
```
To approve all pending CSRs, run the following command:
```
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
```
:::spoiler
Some Operators might not become available until some CSRs are approved.
You should check the CSRs a few times.
:::
```
$ watch -n5 oc get co
```
:::spoiler
Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays the kubeadmin credentials. For <installation_directory>, specify the path to the directory where you stored the installation files.
:::
```
$ ./openshift-install --dir <installation_directory> wait-for install-complete
```
:::spoiler
When complete, you can find your cluster info, including the web console URL and the kubeadmin password:
:::
```
oc whoami --show-console                                      # web console URL
cat /home/cecuser/ocp/ocp_install/auth/kubeadmin-password     # kubeadmin password
```
:::spoiler
Create our simple NFS server.
:::
```
dnf install -y nfs-utils
### The IBM Tech Zone bastion servers already have a 500GiB /export filesystem set up by default
mkdir -p /export/ocp
chmod 755 /export/ocp
chown -R nfsnobody:nfsnobody /export/ocp    # on RHEL 8 the nfsnobody user no longer exists; use nobody:nobody there
semanage fcontext --add --type nfs_t "/export/ocp(/.*)?"
restorecon -R -v /export/ocp
echo "/export/ocp *(no_root_squash,async,rw)" >> /etc/export
firewall-cmd --add-service nfs \
--add-service mountd \
--add-service rpc-bind \
--permanent
firewall-cmd --reload
systemctl enable --now nfs-server
systemctl status nfs-server
exportfs -av
showmount -e localhost
```
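A quick loopback mount test to confirm the export works before wiring it into OpenShift (same paths as above):
```
mount -t nfs localhost:/export/ocp /mnt
touch /mnt/write-test && ls -l /mnt
umount /mnt
```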
:::spoiler
Simplified NFS storage for OpenShift
:::
```
cd ~/ocp/
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
cd nfs-subdir-external-provisioner
sed -i.backup 's/nfs-client/nfs-storage/g' ./deploy/class.yaml
sed -i.backup 's/namespace:.*/namespace: nfs-storage/g' ./deploy/rbac.yaml
sed -i.backup 's/namespace:.*/namespace: nfs-storage/g' ./deploy/deployment.yaml
vi ./deploy/deployment.yaml
# Update the NFS_SERVER and NFS_PATH values (4 edits: two env values, two under volumes; see the diff below)
diff -U0 ./deploy/class.yaml{.backup,}
diff -U0 ./deploy/rbac.yaml{.backup,}
diff -U0 ./deploy/deployment.yaml{.backup,}
```
:::spoiler
$ diff -U0 ./deploy/deployment.yaml{.backup,}
--- ./deploy/deployment.yaml.backup 2022-12-07 22:49:33.373514506 -0500
+++ ./deploy/deployment.yaml 2022-12-07 22:54:35.999350348 -0500
@@ -8 +8 @@
- namespace: default
+ namespace: nfs-storage
@@ -32 +32 @@
- value: 10.3.243.101
+ value: 129.40.96.17
@@ -34 +34 @@
- value: /ifs/kubernetes
+ value: /export/nfs
@@ -38,2 +38,2 @@
- server: 10.3.243.101
- path: /ifs/kubernetes
+ server: 129.40.96.17
+ path: /export/nfs
:::
```
oc new-project nfs-storage
oc create -f ./deploy/rbac.yaml
oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:nfs-storage:nfs-client-provisioner
oc create -f ./deploy/deployment.yaml
oc create -f ./deploy/class.yaml
oc annotate storageclass/nfs-storage storageclass.kubernetes.io/is-default-class=true
```
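To confirm dynamic provisioning works end to end, a small throwaway test claim (the claim name is illustrative; the storage class name matches the one created above):
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: nfs-storage
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 1Mi
EOF
oc get pvc -n nfs-storage test-claim    # should go Bound within a few seconds
oc delete pvc -n nfs-storage test-claim
```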
:::spoiler
Update the Image Registry Operator's configuration for shared storage (ODF/NFS)
A new PVC will be created and assumes that a default Storage Class exists
:::
```
oc patch config.imageregistry.operator.openshift.io/cluster \
--type=merge -p '{"spec":{"rolloutStrategy":"RollingUpdate","replicas":2,"managementState":"Managed","storage":{"pvc":{"claim":""}}}}'
```
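A couple of checks that the registry picked up the new storage and rolled out (the PVC is created automatically in the openshift-image-registry namespace):
```
oc get pvc -n openshift-image-registry
oc get co image-registry
oc get pods -n openshift-image-registry
```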
<span style="color:red">Left off here</span>
## Prep Work: Files and Info to prepare for installation steps above
### dnsmasq.conf (`actually /etc/dnsmasq.d/somename.conf`)
Set up a DHCP/BOOTP/TFTP server using this config file:
```
[cecuser@p999-bastion tmp]$ cat /etc/dnsmasq.d/p999.conf
# dnsmasq configuration created by John Call
port=0 # don't run a DNS server on this server
enable-tftp # let dnsmasq respond to tftp requests
tftp-root=/var/lib/tftpboot
dhcp-range=129.40.96.17,129.40.96.29 # only give out addresses in this range
dhcp-option=option:netmask,255.255.255.240 # the default is to use this host's netmask
dhcp-option=option:router,129.40.96.30 # don’t use this server as the default route
dhcp-option=option:ntp-server,129.40.44.7 # use IBM NTP servers
dhcp-option=option:dns-server,129.40.242.1 # use IBM DNS servers
#dhcp-option=option:ntp-server,129.40.44.7,129.40.44.8 # BOOTP packet out-of-space, didn't send netmask!
#dhcp-option=option:dns-server,129.40.242.1,129.40.242.2 # BOOTP packet out-of-space, didn't send netmask!
#dhcp-option=option:domain-search,p999.cecc.ihost.com # added to client's short-name lookups - BOOTP SPACE ISSUE
dhcp-boot=tag:bootp,powerpc-ieee1275/core.elf # lpars download this file first when they BOOTP
dhcp-host = fa:9b:2f:47:e3:20 , 129.40.96.18 , bootstrap # give this host a default lease duration (1 hour)
dhcp-host = fa:11:08:55:cb:20 , 129.40.96.19 , controlplane1 , infinite # give these hosts an "infinite" lease. RHCOS will convert to static IP
dhcp-host = fa:a8:7f:a0:99:20 , 129.40.96.20 , controlplane2 , infinite
dhcp-host = fa:ae:57:5d:2d:20 , 129.40.96.21 , controlplane3 , infinite
dhcp-host = fa:51:8d:74:dd:20 , 129.40.96.22 , worker1 , infinite
dhcp-host = fa:d1:47:57:6f:20 , 129.40.96.23 , worker2 , infinite
```
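After dropping the file into /etc/dnsmasq.d/, a quick syntax check before (re)starting the service (RHEL's stock /etc/dnsmasq.conf includes conf-dir=/etc/dnsmasq.d, so drop-ins get parsed too):
```
sudo dnsmasq --test
sudo systemctl restart dnsmasq && sudo systemctl is-active dnsmasq
```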
### grub.cfg
Make `/var/lib/tftpboot/grub.cfg` look like this
```
# Created by JCall@RedHat
# We can't get to the vtmenu to choose interactively
# so everything has to be done automatically :(
# run this command to create the bootfiles -- grub2-mknetdir --net-directory=/var/lib/tftpboot --subdir=""
# docs at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_an_advanced_rhel_installation/preparing-for-a-network-install_installing-rhel-as-an-experienced-user#configuring-a-network-server-for-ibm-power_preparing-for-a-network-install
set bootstrap=fa:88:55:89:e0:20
set controlplane1=fa:d4:b4:61:52:20
set controlplane2=fa:68:77:8e:d3:20
set controlplane3=fa:6c:8b:be:7b:20
set worker1=fa:42:b6:fc:81:20
set worker2=fa:9a:95:78:8c:20
set bastion_ip="129.40.96.17:8080"
set kernel=rhcos-installer-kernel-ppc64le
set initrd=rhcos-installer-initramfs.ppc64le.img
set instimage_url=http://${bastion_ip}/rhcos-metal.ppc64le.raw.gz
set rootfs_url=http://${bastion_ip}/rhcos-installer-rootfs.ppc64le.img
set install_dev="/dev/mapper/mpatha rd.multipath=default"
#set install_dev=/dev/sda
set kernel_args1="console=hvc0 ip=dhcp coreos.live.rootfs_url=${rootfs_url}"
set kernel_args2="coreos.inst.image_url=${instimage_url} coreos.inst.insecure coreos.inst.install_dev=${install_dev}"
set bootstrap_ign="coreos.inst.ignition_url=http://${bastion_ip}/bootstrap.ign"
set controller_ign="coreos.inst.ignition_url=http://${bastion_ip}/master.ign"
set worker_ign="coreos.inst.ignition_url=http://${bastion_ip}/worker.ign"
#BOOTSTRAP
menuentry 'Install bootstrap' {
echo "Loading kernel..."
linux ${kernel} ${kernel_args1} ${kernel_args2} ${bootstrap_ign}
echo "Loading initrd..."
initrd ${initrd}
}
#CONTROLLER
menuentry 'Install controller' {
echo "Loading kernel..."
linux ${kernel} ${kernel_args1} ${kernel_args2} ${controller_ign}
echo "Loading initrd..."
initrd ${initrd}
}
#WORKER
menuentry 'Install worker' {
echo "Loading kernel..."
linux ${kernel} ${kernel_args1} ${kernel_args2} ${worker_ign}
echo "Loading initrd..."
initrd ${initrd}
}
```
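If your curl build includes TFTP support (the RHEL package does), you can confirm the file is actually being served before booting anything:
```
curl -s tftp://127.0.0.1/grub.cfg | head -n5
```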
### haproxy.cfg
Create the HAProxy config file like this:
```
[root@bastion ~]# cat /etc/haproxy/haproxy.cfg
# "global" and "defaults" sections are default from RPM
#---------------------------------------------------------------------
# OpenShift
#---------------------------------------------------------------------
listen stats
    bind :9000
    mode http
    stats enable
    stats uri /
    stats refresh 5s
    monitor-uri /healthz

listen openshift-api
    bind *:6443
    mode tcp
    balance source
    option tcplog
    server bootstrap 129.40.96.18:6443 check
    server control01 129.40.96.19:6443 check
    server control02 129.40.96.20:6443 check
    server control03 129.40.96.21:6443 check

listen openshift-machine-configs
    bind *:22623
    mode tcp
    balance source
    option tcplog
    server bootstrap 129.40.96.18:22623 check
    server control01 129.40.96.19:22623 check
    server control02 129.40.96.20:22623 check
    server control03 129.40.96.21:22623 check

listen ingress-http
    bind *:80
    mode tcp
    balance source
    option tcplog
    server worker01 129.40.96.22:80 check
    server worker02 129.40.96.23:80 check

listen ingress-https
    bind *:443
    mode tcp
    balance source
    option tcplog
    server worker01 129.40.96.22:443 check
    server worker02 129.40.96.23:443 check
```
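haproxy can validate the file before you (re)start the service:
```
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```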
### Boot and install the LPARS via BOOTP/network
:::success
I prefer the method that simply activates (powers on) the LPAR into SMS (see the `chsysstate` example in the Helpful HMC commands section below). Then I can use `vtmenu` to pick the LPAR and press: 5, 1, 4, 1, 1, 2, 1
so-easy-its-barely-an-inconvenience
:::
Alternatively, boot the LPARs automatically via BOOTP with `lpar_netboot`, like this:
```
lpar_netboot -i -f -t ent -s auto -d auto -m fa885589e020 p999-bootstra-e751f2fc-0000091d default_profile Server-9119-MHE-SN21AE927
rmvterm -p p999-bootstra-e751f2fc-0000091d -m Server-9119-MHE-SN21AE927
lpar_netboot -f -t ent -s auto -d auto -m fad4b4615220 p999-master-0-9d7db6bd-0000091a default_profile Server-9119-MHE-SN21AE927
lpar_netboot -f -t ent -s auto -d auto -m fa68778ed320 p999-master-1-a5434ad6-0000091f default_profile Server-9119-MHE-SN21AE927
lpar_netboot -f -t ent -s auto -d auto -m fa6c8bbe7b20 p999-master-2-097a54bf-0000091b default_profile Server-9119-MHE-SN21AE927
lpar_netboot -f -t ent -s auto -d auto -m fa42b6fc8120 p999-worker-0-82a405ad-0000091c default_profile Server-9119-MHE-SN21AE927
lpar_netboot -f -t ent -s auto -d auto -m fa9a95788c20 p999-worker-1-d5d12a9e-0000091e default_profile Server-9119-MHE-SN21AE927
```
### install-config.yaml
Here's a sample install-config.yaml file; modify it to fit your needs.
```
# more information available via `openshift-install explain installconfig`
---
apiVersion: v1
metadata:
  name: p999
baseDomain: cecc.ihost.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
  architecture: ppc64le
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
  architecture: ppc64le
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '<your pullSecret contents here>'
sshKey: '<your ssh public key contents here>'
```
## Optional: Set up SSH keys for easier HMC CLI access
Your bastion host should already have pre-configured, system-generated `~/.ssh/id_rsa` and `~/.ssh/id_rsa.pub` files.
:::info
NOTE: You will need to rerun ssh-agent and ssh-add commands if you exit your
terminal session and open a new one later, as they are bound to the session.
:::
```
[youruser@yourlinux ~]$ ssh -i ~/.ssh/id_rsa_ibmtechzone cecuser@p999-bastion.p999.cecc.ihost.com
```
To set up easy SSH access to the HMC, you first have to log in the usual way,
using the HMC login ID and password provided on your project details page:
```
[youruser@yourlinux ~]$ ssh u901234@cecc-vhmc07.cecc.ihost.com
Password:
u901234@cecc-vhmc02: ~ >
```
Now use the `mkauthkeys` command on the HMC to add your SSH public key.
The syntax is:
mkauthkeys --add '<the contents of $HOME/.ssh/id_rsa.pub as a string>'
And here's an example with sample id_rsa.pub file contents:
```
u901234@cecc-vhmc02: ~ > mkauthkeys --add 'ssh-rsa AAAAB3N<bunch of chars>TIrCZfbNV+Lr cecuser@tz-638505771361658019fghijk123'
```
You could also do it in one line like this, with just the password prompt:
```
ssh hmcuser@hmchostname "mkauthkeys --add '<the contents of $HOME/.ssh/id_rsa.pub as a string>'"
```
Now log out again and try to SSH directly to the HMC using the key:
```
[youruser@yourlinux ~]$ ssh -i ~/.ssh/id_rsa_ibmtechzone u901234@cecc-vhmc07.cecc.ihost.com
```
You can open a text virtual terminal console for any LPAR (VM) using the HMC `vtmenu` command.
```
$ vtmenu
```
Select the system you want to work with.
You're then given a new menu of the LPARs (VMs) you have access to.
Select the one you want and verify that its status is "Running".
You're then on the console.
If an OS like RHEL is already installed, hit <Enter> to get the login prompt.
Some of the systems are newly created LPARs that have simply been booted.
These consoles are used for several purposes, such as performing a new OS install
or logging in on the console to debug a network or other issue.
:::danger
WARNING: to close just this VTmenu login shell, DO NOT HIT “~ .” !!
You’re already SSH’d into the HMC, so if you hit “~ .” it closes your outermost SSH
shell to the HMC and that’s likely not what you want to do.
What you need to do is hit “~ ~ .”
(that’s 2 tildes and then dot, no spaces as shown here for clarity of reading).
This will only close the 2nd level shell/terminal session you’re in.
Then you can hit “y” to terminate the shell, and you’re back in the VTmenu.
Hit “Q” to quit or you can select another LPAR to work with.
:::
## Helpful HMC commands
```
### "hmc" below is shorthand for your HMC user@hostname
MANAGED_SYS=$(ssh hmc lssyscfg -r sys | awk -F'[,=]' '{print $2}')
LPARS=$(ssh hmc lssyscfg -m $MANAGED_SYS -r lpar | awk -F'[,=]' '{print $2}')
### Shutdown the LPAR
chlparstate -m $MANAGED_SYS -p p999-worker--4be803ee-0000094b -o shutdown
### Boot the LPAR but stop in the SMS menu so we can manually BOOTP
chsysstate -m $MANAGED_SYS -r lpar -o on -n p999-worker--4be803ee-0000094b -b sms
### Forcefully release the vtmenu (held open by lpar_netboot)
rmvterm -p p999-bootstra-e751f2fc-0000091d -m $MANAGED_SYS
```
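One more optional example: listing every LPAR and its state in one shot (`lssyscfg` accepts -F for a fixed field list):
```
ssh hmc lssyscfg -m $MANAGED_SYS -r lpar -F name,state
```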
## To Watch DNSMASQ logs on bastion server
```
journalctl -flu dnsmasq
```
## If you want to use the old `lpar_netboot` command on the HMC, you can run these instead.
```
lpar_netboot -i -f -t ent -s auto -d auto -m fa110855cb20 p999-master--ff51b335-0000094a default_profile Server-8408-E8E-SN21C490V
lpar_netboot -i -f -t ent -s auto -d auto -m faa87fa09920 p999-master--ba90f79a-0000094d default_profile Server-8408-E8E-SN21C490V
lpar_netboot -i -f -t ent -s auto -d auto -m faae575d2d20 p999-master--f4c5cb21-0000094c default_profile Server-8408-E8E-SN21C490V
lpar_netboot -i -f -t ent -s auto -d auto -m fa518d74dd20 p999-worker--4be803ee-0000094b default_profile Server-8408-E8E-SN21C490V
lpar_netboot -i -f -t ent -s auto -d auto -m fad147576f20 p999-worker--88cca700-0000094e default_profile Server-8408-E8E-SN21C490V
```