# Red Hat OCP 4.2 + VMware + Nutanix Setup Guide with Nutanix CSI Driver (Part 1)
The purpose of this document is to guide you through installing OCP 4.2 on Nutanix with VMware vSphere as the virtualization platform. This is a proof-of-concept environment, so do not use it in production: it has multiple single points of failure.
This guide is based on [OCP installation guide](https://docs.openshift.com/container-platform/4.2/installing/installing_vsphere/installing-vsphere.html) and [Nutanix CSI installation guide](https://portal.nutanix.com/#/page/docs/details?targetId=CSI-Volume-Driver:CSI-Volume-Driver).
I'll split this guide into two parts since it's pretty long.
1^st^ Part - OCP on VMware installation
2^nd^ Part - Nutanix CSI driver installation
Below is the high-level infrastructure.
![](https://i.imgur.com/hryzbW1.jpg)
### Hardware
* Nutanix NX-3050 with 4 nodes
### Software
* Nutanix AOS 5.11.2
* VMware vSphere 6.7.0
* Red Hat OCP 4.2
* Zevenet Community 5.10.1 (load balancer)
* Windows 2016 (DNS)
* CentOS (OS for installer)
### Virtual Machines
* Nutanix CVM x 4 (one for each node)
* vCenter
* DNS server
* Load balancer
* OCP installer (this will also act as DHCP server and web server)
* OCP bootstrap (will be removed after installation)
* OCP master nodes x 3
* OCP worker nodes x 2
### Prerequisites
1. Nutanix hardware installed.
2. AOS and VMware vSphere installed.
3. VMware cluster is formed, datastores and network created.
4. DNS server installed. (In this guide we'll use Windows as the DNS server since it's already in place.)
5. Linux VM installed as the OCP installer. (We'll use CentOS in this guide.)
---
## High Level Installation Procedure
We'll do the installation in the following steps.
1. Download all OCP packages. (installer, command, ova)
2. Configure DNS records.
3. Install and configure load balancer.
4. Generate configuration files on the installer OS (rhos-installer).
5. Install Apache (or any web server) and DHCP server in rhos-installer.
6. Deploy OCP OVA to vCenter.
7. Clone the OCP template to OCP infrastructure VM.
8. Configure DHCPD in rhos-installer to have fixed IP address for each VM.
9. Wait.
10. OCP installation complete.
11. Registry configuration (covered in Part 2).
12. Nutanix CSI installation (covered in Part 2).
---
## Download OCP
For those who don't have subscriptions, you can go to https://try.openshift.com/ to download all the necessary software. Here's a list of what you need to download.
* **Installer** - We use openshift-install-linux-4.2.12.tar.gz in this guide.
* **Pull secret** - You will need this in the configuration yaml file. This key authenticates you when downloading OCP container images from Red Hat.
* **Red Hat Enterprise Linux CoreOS (RHCOS)** - You need the OVA file for VMware installation. We use rhcos-4.2.0-x86_64-vmware.ova in this guide.
* **Command line tool** - This provides the "oc" and "kubectl" commands.
You can download all these into your local PC for later use.
---
## IP and DNS Information
This is all the DNS information you need to configure in your DNS server.
**Base Domain** - ntnxhk1.local
**Cluster Name** - ocp
| DNS Name | Record Type | IP |
|-------------|-------------|---------------|
| bootstrap | A | 172.16.67.201 |
| rhos-installer | A | 172.16.67.200 |
| lb01 | A | 172.16.67.2 |
| master1 | A | 172.16.67.11 |
| master2 | A | 172.16.67.12 |
| master3 | A | 172.16.67.13 |
| worker1 | A | 172.16.67.101 |
| worker2 | A | 172.16.67.102 |
| api.ocp | CNAME | lb01 |
| api-int.ocp | CNAME | lb01 |
| *.apps.ocp | A | 172.16.67.2 |
| etcd-0.ocp | CNAME | master1 |
| etcd-1.ocp | CNAME | master2 |
| etcd-2.ocp | CNAME | master3 |
Here's the DNS illustration.
![](https://i.imgur.com/g5OFgw8.jpg =569x486)
::: success
:orange_book: It's kinda stupid to have a DNS diagram :sweat_smile: , but I think it's easier to understand this way.
:::
**Service Records**
| _service._proto.name. | TTL | class | SRV | priority | weight | port | target. |
| --------------------------------------- | ----- | ----- | --- | -------- | ------ | ---- | ------------------------- |
| _etcd-server-ssl._tcp.ocp.ntnxhk1.local | 86400 | IN | SRV | 0 | 10 | 2380 | etcd-0.ocp.ntnxhk1.local. |
| _etcd-server-ssl._tcp.ocp.ntnxhk1.local | 86400 | IN | SRV | 0 | 10 | 2380 | etcd-1.ocp.ntnxhk1.local. |
| _etcd-server-ssl._tcp.ocp.ntnxhk1.local | 86400 | IN | SRV | 0 | 10 | 2380 | etcd-2.ocp.ntnxhk1.local. |
---
## DNS Setup
Only the Windows DNS configuration is covered here (because it's already in place). Please Google the Linux DNS setup yourself.
I won't show you step by step how to create A records and CNAMEs. Below are the screenshots of the Windows DNS configuration.
![](https://i.imgur.com/sQMG1Ld.jpg)
![](https://i.imgur.com/ezq8imz.jpg)
![](https://i.imgur.com/M80hSr3.jpg)
Some people may not know how to create SRV records in Windows. To create service records, select the "ocp" subdomain, right-click on an empty area, select "Other New Records", then choose "Service Location (SRV)".
Here's a sample service record creation.
![](https://i.imgur.com/HQB02Ok.png)
Below are all the service records.
![](https://i.imgur.com/7gpkWaF.png)
::: success
:orange_book: Make sure you create the service records under the "ocp" subdomain instead of the base domain. "ocp" is the cluster name in this guide.
:book: Please also note that all service record names start with an underscore "_". The protocol name also starts with an underscore "_". This is defined in [RFC2782](https://tools.ietf.org/rfc/rfc2782.txt).
:::
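Whichever DNS server you use, it's worth verifying the records from the rhos-installer VM before moving on. Below is a minimal check with dig, assuming the installer VM already points at your DNS server; "test.apps.ocp" is just a made-up name used to exercise the wildcard record.
```shell
# dig is provided by the bind-utils package on CentOS
yum install -y bind-utils

# A records and the wildcard (any name under *.apps.ocp should return 172.16.67.2)
dig +short master1.ntnxhk1.local
dig +short api.ocp.ntnxhk1.local
dig +short test.apps.ocp.ntnxhk1.local

# SRV records for etcd (should list etcd-0/1/2 on port 2380)
dig +short -t SRV _etcd-server-ssl._tcp.ocp.ntnxhk1.local
```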
---
## Load Balancer Setup
Here we are using Zevenet as the load balancer because it has a web GUI. You can use whatever you're comfortable with, e.g. HAProxy (a sample HAProxy config sketch is included at the end of this section).
1. First go here to download the ISO image.
https://www.zevenet.com/products/community/
2. Then create a new VM and install it using this ISO. I'm using 2 vCPU, 2GB memory, a 10GB HDD and a single NIC here. Choose "Debian GNU/Linux 9 (64-bit)" as the Guest OS type.
3. After installation, go to the URL below to log in to Zevenet. The user is root and the password is the one you defined during the installation. (See the IP list above, or you can use DNS.)
https://172.16.67.2:444
4. We'll create 4 rules here.
| Rule Name | Ports | Backend Servers |
| -------- | -------- | -------- |
| OCP-API | 6443 | bootstrap, all master servers |
| OCP-int-API | 22623 | bootstrap, all master servers |
| HTTP | 80 | all worker nodes |
| HTTPS | 443 | all worker nodes |
Here's the illustration.
![](https://i.imgur.com/0QTSTmZ.jpg)
5. First go to LSLB > Farms, then click on "CREATE FARM". You'll see the screen below.
![](https://i.imgur.com/RMGbxlm.png)
6. Then create the load balancer frontend. We use eth0 as the Virtual IP for all services. For the sake of simplicity, we'll use "L4XNAT" as the Profile for all services.
![](https://i.imgur.com/OmRn4SD.png)
::: success
:orange_book: You can use the "HTTP" profile if you want, but you'll have more complex backend settings.
:::
When you're done, it should look like the screenshot below.
![](https://i.imgur.com/DXwZm9G.png)
7. Then click on the pencil button to edit the backend servers for each service.
I chose check_tcp as the Health Checks method. You can set other options as you like. Below is a sample config for OCP-API.
![](https://i.imgur.com/KtFiOmv.png)
8. Configure this for the rest of the services. (There are 4 services in total.)
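As mentioned above, HAProxy works just as well if you'd rather skip the GUI. Below is a rough haproxy.cfg sketch of the same four rules, meant to be merged into your distribution's default config (the global section is omitted); the IPs and ports come from the tables earlier in this guide.
```
defaults
    mode    tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m

frontend ocp-api
    bind *:6443
    default_backend ocp-api-be
backend ocp-api-be
    balance roundrobin
    server bootstrap 172.16.67.201:6443 check
    server master1   172.16.67.11:6443  check
    server master2   172.16.67.12:6443  check
    server master3   172.16.67.13:6443  check

frontend ocp-machine-config
    bind *:22623
    default_backend ocp-machine-config-be
backend ocp-machine-config-be
    balance roundrobin
    server bootstrap 172.16.67.201:22623 check
    server master1   172.16.67.11:22623  check
    server master2   172.16.67.12:22623  check
    server master3   172.16.67.13:22623  check

frontend ingress-http
    bind *:80
    default_backend ingress-http-be
backend ingress-http-be
    balance roundrobin
    server worker1 172.16.67.101:80 check
    server worker2 172.16.67.102:80 check

frontend ingress-https
    bind *:443
    default_backend ingress-https-be
backend ingress-https-be
    balance roundrobin
    server worker1 172.16.67.101:443 check
    server worker2 172.16.67.102:443 check
```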
---
## Installer Configuration
We will start preparing the OCP installation in the rhos-installer VM here. (Please forgive me for doing everything as root here.):grin:
### Generate SSH key
Generate an SSH key for the OCP cluster login if you don't have one.
1. Log in to rhos-installer and issue the following command to generate an SSH key without a passphrase. In this example we use ~/.ssh/id_rsa as the default key location.
``` shell
[root@rhos-installer ~]# ssh-keygen -t rsa -b 4096 -N ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:wwx1an89dnofX5nrCiTMZDPI8ecrBpcy9jM3T1W8QkM root@rhos-installer
The key's randomart image is:
+---[RSA 4096]----+
| .. . E |
| ..+o . . |
| .oo* . o o|
| ==.* ... o|
| =S*.o..+o.|
| . *.o...o+o|
| * = ..+o|
| . = = .*|
| oooo|
+----[SHA256]-----+
[root@rhos-installer ~]#
```
2. Start the ssh-agent process as a background task.
``` shell
[root@rhos-installer ~]# eval "$(ssh-agent -s)"
Agent pid 8184
```
3. Add your SSH private key to the ssh-agent.
``` shell
[root@rhos-installer ~]# ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
```
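Optionally, you can confirm the key is actually loaded in the agent:
```shell
# Should list the fingerprint of /root/.ssh/id_rsa
ssh-add -l
```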
### Services Setup in rhos-installer server
We'll install 2 services on the rhos-installer server: a DHCP server and a web server. We need the DHCP server to provide IPs for all OCP nodes during installation, and we use the web server to host all the ignition files for the CoreOS installation.
For those who don't know what Ignition is in CoreOS, [please go here.](https://coreos.com/ignition/docs/latest/what-is-ignition.html)
#### Install HTTP Server
1. Install the Apache package.
``` shell
yum install httpd -y
```
2. Enable firewall rule.
``` shell
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
```
3. We'll use the default path and configuration, so just enable and start the httpd.
``` shell
systemctl enable httpd
systemctl start httpd
```
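A quick sanity check that Apache is up and reachable never hurts; a 200 or 403 response for the default page is fine at this stage.
```shell
systemctl status httpd --no-pager
curl -I http://172.16.67.200/
```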
#### Install DHCP Server
We are using the rhos-installer as the DHCP server here.
1. First we install the dhcp package.
``` shell
yum install dhcp
```
2. Tell DHCPD which interface to listen on.
``` shell
vi /etc/sysconfig/dhcpd
```
3. Add the line below to the end of the file.
```
DHCPDARGS=ens192
```
4. Enable DHCPD firewall rules.
``` bash
firewall-cmd --add-service=dhcp --permanent
firewall-cmd --reload
```
5. That's it for now. We'll configure DHCPD after we clone all the master nodes and worker nodes.
:::success
:orange_book: We want to give a fixed IP to every OCP server, i.e. the bootstrap, master nodes and worker nodes. Therefore we need the MAC addresses before we can configure DHCPD, and we can only get them after those VMs are deployed.
:::
### Prepare the installer and command line tool
1. Upload the installer and command line tool to the rhos-installer VM. In this guide we're using openshift-install-linux-4.2.12.tar.gz and oc-4.2.0-linux.tar.gz.
2. Extract those files in your home directory.
```shell
tar zxvf oc-4.2.0-linux.tar.gz
tar zxvf openshift-install-linux-4.2.12.tar.gz
```
3. Copy the command line tools to /usr/local/bin. Make sure /usr/local/bin is in your PATH.
```shell
cp oc /usr/local/bin/
cp kubectl /usr/local/bin/
```
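You can quickly confirm the tools are found in your PATH (oc may complain that it can't reach a server yet, which is expected since there is no cluster at this point):
```shell
which oc kubectl
oc version
kubectl version --client
```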
### OCP Installer Configuration - Create yaml file
1. Upload the installer to the rhos-installer server.
2. Extract the installer.
``` shell
[root@rhos-installer ~]# tar zxvf openshift-install-linux-4.2.12.tar.gz
README.md
openshift-install
[root@rhos-installer ~]#
```
3. Create an installation directory to store your required installation assets. We use ~/nutanix-ocp here.
``` shell
mkdir nutanix-ocp
```
4. Go to the installation directory and create a file called "install-config.yaml". (This file name can't be changed.)
``` shell
cd nutanix-ocp
vi install-config.yaml
```
5. Below is a sample install-config.yaml. There are a few parameters you need to change.
* baseDomain
* metadata->name (this is your cluster name)
* everything inside the vsphere block
* pullSecret (from the Red Hat website)
* sshKey (generated on rhos-installer earlier; paste the contents of ~/.ssh/id_rsa.pub — see the helper commands at the end of this section)
``` yaml=
apiVersion: v1
baseDomain: ntnxhk1.local
compute:
- hyperthreading: Enabled
name: worker
replicas: 0
controlPlane:
hyperthreading: Enabled
name: master
replicas: 3
metadata:
name: ocp
platform:
vsphere:
vcenter: 172.16.15.230
username: administrator@vsphere.local
password: xxxxxxx
datacenter: MyDataCenter
defaultDatastore: Default
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
```
::: success
:orange_book: You can see we have 0 replicas for the compute nodes. This doesn't mean we are not installing compute nodes (we'll call them worker nodes here). It only means that we need to provision the VMs ourselves, which we're going to do in vCenter anyway.
In an IPI environment, the installer can help you to provision the worker nodes.
For those who don't know what's IPI and UPI, here's the capture of the release notes from OCP 4.2.
***IPI AND UPI***
*In OpenShift Container Platform 4.2, there are two primary installation experiences: Full stack automation (IPI) and pre-existing infrastructure (UPI).*
*With full stack automation, the installer controls all areas of the installation including infrastructure provisioning with an opinionated best practices deployment of OpenShift Container Platform. With pre-existing infrastructure deployments, administrators are responsible for creating and managing their own infrastructure allowing greater customization and operational flexibility.*
:::
::: success
:book: For minimum vSphere account privileges to use dynamic provisioning, [go here](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/vcp-roles.html).
:::
6. When you create the manifest files using the installer, it will delete this install-config.yaml file, so we'd better create a backup copy.
``` shell
cp install-config.yaml install-config.yaml.bak
```
:::danger
:warning: Remember to make a backup copy. Your yaml file will be deleted after running the installer.
:::
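Back in step 5, the sshKey and pullSecret values are just long strings pasted into the yaml. A quick way to print them on rhos-installer, assuming you saved your pull secret as ~/pull-secret.txt (that filename is just an example):
```shell
# Paste the whole line into sshKey
cat ~/.ssh/id_rsa.pub
# Paste the JSON into pullSecret (use whatever filename you saved it as)
cat ~/pull-secret.txt
```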
### Creating the Kubernetes manifest and Ignition config files
Before we create the manifest and ignition files, please have a look at the warning below.
:::danger
:warning: The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must complete your cluster installation and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
:::
1. Go back to the directory where you put openshift-install. In our case it's the home directory.
2. Issue the following command to generate the manifest files.
``` shell
[root@rhos-installer ~]# ./openshift-install create manifests --dir=nutanix-ocp
INFO Consuming "Install Config" from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
```
3. Modify the **manifests/cluster-scheduler-02-config.yml** Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines.
4. Open the manifests/cluster-scheduler-02-config.yml file.
5. Locate the **mastersSchedulable** parameter and set its value to **false**.
6. Save and exit the file. (A scripted one-liner for this edit is shown after this list.)
7. Issue the following command to create the ignition files.
``` shell
[root@rhos-installer ~]# ./openshift-install create ignition-configs --dir=nutanix-ocp
INFO Consuming "Worker Machines" from target directory
INFO Consuming "Common Manifests" from target directory
INFO Consuming "Master Machines" from target directory
INFO Consuming "Openshift Manifests" from target directory
```
8. You should now see the files below inside the nutanix-ocp installation directory.
``` shell
[root@rhos-installer ~]# cd nutanix-ocp/
[root@rhos-installer nutanix-ocp]# tree
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── install-config.yaml.bak
├── master.ign
├── metadata.json
└── worker.ign
1 directory, 7 files
```
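By the way, the manual edit in steps 3-6 (setting mastersSchedulable to false) can also be done with a one-liner, assuming the generated manifest has the default layout. Remember it has to run before step 7, since creating the ignition configs consumes the manifests.
```shell
# Flip mastersSchedulable from true to false in the generated manifest
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' nutanix-ocp/manifests/cluster-scheduler-02-config.yml
```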
### Prepare all ignition files
Here we do it a bit differently from the official guide. I'll give a short explanation here.
Below is the diagram for the **official installation.**
![](https://i.imgur.com/wjpnoYG.jpg)
1. Deploy the OCP OVA as a VM in vSphere.
2. Clone the OCP VM to the different nodes.
3. During cloning, you have to convert your ignition file to base64 and put it inside the VM as a "Configuration Parameter".
::: warning
The problem is, the ignition file for the bootstrap is too big. After it is converted to base64, you can't put it in the VM as a "Configuration Parameter".
:sweat:
Therefore, for the bootstrap, you create a small ignition file which only contains a URL pointing to the REAL ignition file (not base64 here), so the VM can download it when it boots up.
:::
Below is the installation method in this guide.
![](https://i.imgur.com/McbUn7M.jpg)
All base64 ignition data contains only a URL. All nodes will download their ignition files from the rhos-installer VM according to their node type. This is much handier if you want to change an ignition file.
We'll prepare all ignition files here. The OVA deployment will be covered in the next step.
1. Copy all the ignition files to the web server root directory.
``` shell
[root@rhos-installer ~]# cd nutanix-ocp/
[root@rhos-installer nutanix-ocp]# cp bootstrap.ign /var/www/html/
[root@rhos-installer nutanix-ocp]# cp master.ign /var/www/html/
[root@rhos-installer nutanix-ocp]# cp worker.ign /var/www/html/
```
2. Test that the ignition file is being served. You should get the file contents back.
``` shell
curl http://172.16.67.200/bootstrap.ign
```
3. Create a URL-only ignition file for the bootstrap.
``` shell
[root@rhos-installer nutanix-ocp]# vi url-bootstrap.ign
```
4. Below is the sample append ignition file for bootstrap. We put our ignition file URL here.
``` ign=
{
"ignition": {
"config": {
"append": [
{
"source": "http://172.16.67.200/bootstrap.ign",
"verification": {}
}
]
},
"timeouts": {},
"version": "2.1.0"
},
"networkd": {},
"passwd": {},
"storage": {},
"systemd": {}
}
```
5. Convert the URL-only bootstrap ignition file to Base64 encoding.
``` shell
base64 -w0 url-bootstrap.ign > url-bootstrap.64
```
6. Repeat steps 3-5 for master.ign and worker.ign. (Or use the small script after this list.)
7. Ignition preparation is complete.
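If you'd rather script step 6, here's a small sketch that generates and encodes the append files for the master and worker roles in one go (same web server IP as above):
```shell
# Generate URL-only append ignition files for master and worker, then base64-encode them
for role in master worker; do
  cat > url-${role}.ign <<EOF
{
  "ignition": {
    "config": {
      "append": [
        { "source": "http://172.16.67.200/${role}.ign", "verification": {} }
      ]
    },
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {},
  "storage": {},
  "systemd": {}
}
EOF
  base64 -w0 url-${role}.ign > url-${role}.64
done
```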
### Deploy Red Hat Enterprise Linux CoreOS (RHCOS) machines OVA in vSphere
1. Login to your vSphere console.
2. Go to "VMs and Templates"
3. Right-click on the datacenter where you are going to deploy OCP, then select New Folder → New VM and Template Folder.
4. Enter "ocp" as the name of the folder. This folder name should be the same as the cluster name you provided in install-config.yaml earlier.
5. Right click on "ocp" folder and select Deploy OVF Template.
6. Choose the RHCOS OVA you downloaded earlier as the template.
![](https://i.imgur.com/W1R1TKa.png)
7. We use "RHOS" as the virtual machine name. Select the "ocp" folder for the vm location.
![](https://i.imgur.com/JXhOsYh.png)
8. Select a valid compute resource for your VM.
9. Select "Default" as the datastore (which we specified in install-config.yaml earlier) and "Thin Provisioning" as the disk format.
![](https://i.imgur.com/bllCefs.png)
10. Select the network the OCP cluster will use.
11. We'll use this template for all nodes, so we do not need to put anything in "Customize template".
![](https://i.imgur.com/ZNwRGgu.png)
12. Click Finish to start the template creation.
#### Deploy the OCP template to VM
1. Right-click on the RHOS VM and select Clone → Clone to Virtual Machine.
2. Type "bootstrap" as the virtual machine name and select the folder to deploy to.
![](https://i.imgur.com/gKxzhoJ.png)
3. Select a valid compute resource.
4. Select a valid storage.
5. Check "Customize this virtual machine's hardware" and click on Next.
![](https://i.imgur.com/kr91gO3.png)
6. Set the minimum requirements for the bootstrap VM: 4 vCPU, 16GB memory, 120GB hard disk.
![](https://i.imgur.com/uBIVbxN.png)
7. Click on the "VM Options" tab → Advanced, set the "Latency Sensitivity" to High.
![](https://i.imgur.com/Tuw7mkU.png)
8. On the same tab, click on "EDIT CONFIGURATION..." under Configuration Parameters.
![](https://i.imgur.com/CbA8nyQ.png)
9. Click "ADD CONFIGURATION PARAMS". We'll add 3 parameters here. Please note that we're putting the base64 file contents here, so you have to copy and paste it from the rhos-installer server.
| Name | Value |
| -------- | -------- |
| guestinfo.ignition.config.data | The content of the file "url-bootstrap.64" |
| guestinfo.ignition.config.data.encoding | base64 |
| disk.EnableUUID | TRUE |
Below is the sample screenshot.
![](https://i.imgur.com/yuwmXEF.png)
10. Click on OK and deploy the bootstrap VM.
11. **DO NOT STARTUP** the VM yet. We'll do this after we configure DHCPD.
12. Repeat steps 1-10 for all the master nodes and worker nodes. The only difference is the ignition file contents: master nodes use url-master.64 and worker nodes use url-worker.64.
::: success
Tips: For master nodes and worker nodes, you can clone directly from an already-deployed VM of the same type so that you don't have to configure each VM.
e.g. You first deploy the 1st master node as shown above to "master1".
For "master2" and "master3", you just clone directly from "master1" since they all have the same config.
:::
#### Configure DHCP server
We'll get all the MAC addresses and configure a fixed DHCP IP for each node.
1. Go to the vCenter console and click on the bootstrap node.
2. Go to "Actions" > "Edit Settings".
3. Expand "Network adapter 1".
4. You'll see the MAC address as shown below.
![](https://i.imgur.com/fpXpUjX.png)
5. Note down the MAC address.
6. Repeat steps 1-5 for all master nodes and worker nodes.
7. Go to rhos-installer VM and edit the DHCPD config.
``` shell
vi /etc/dhcp/dhcpd.conf
```
Below is the sample configuration.
``` shell=
option domain-name "ntnxhk1.local";
option domain-name-servers 172.16.19.11;
default-lease-time 600;
max-lease-time 7200;
authoritative;
log-facility local7;
#IP Subnet Declaration
subnet 172.16.67.0 netmask 255.255.255.0 {
option routers 172.16.67.4;
option subnet-mask 255.255.255.0;
option domain-search "ntnxhk1.local";
option domain-name "ntnxhk1.local";
option domain-name-servers 172.16.19.11;
option time-offset 28800; # Hong Kong Time
range 172.16.67.210 172.16.67.250;
}
#Assign Static IP Address to Host
host bootstrap {
option host-name "bootstrap.ntnxhk1.local";
hardware ethernet 00:50:56:A9:52:EA;
fixed-address 172.16.67.201;
}
host master1 {
option host-name "master1.ntnxhk1.local";
hardware ethernet 00:50:56:A9:2D:F0;
fixed-address 172.16.67.11;
}
host master2 {
option host-name "master2.ntnxhk1.local";
hardware ethernet 00:50:56:A9:A0:49;
fixed-address 172.16.67.12;
}
host master3 {
option host-name "master3.ntnxhk1.local";
hardware ethernet 00:50:56:A9:F4:A7;
fixed-address 172.16.67.13;
}
host worker1 {
option host-name "worker1.ntnxhk1.local";
hardware ethernet 00:50:56:A9:5D:BD;
fixed-address 172.16.67.101;
}
host worker2 {
option host-name "worker2.ntnxhk1.local";
hardware ethernet 00:50:56:A9:30:10;
fixed-address 172.16.67.102;
}
```
We create a zone and assign a fixed IP to each node.
8. Start the DHCP daemon.
``` bash
systemctl restart dhcpd
```
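Optionally, you can test-parse the config before restarting and make sure the service survives a reboot:
```shell
# Validate the config file without starting the daemon
dhcpd -t -cf /etc/dhcp/dhcpd.conf
# Start dhcpd automatically on boot
systemctl enable dhcpd
```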
### Start Installation
![](http://www.cdsns.com/uploads/img/201605/09/8406baadbbf4cdd12b7fd8cd3bc6b38d_source.gif =100x100)
Finally, we can start the installation.
1. Login to vCenter and start up all nodes. (bootstrap, master nodes and worker nodes)
2. Log in to the rhos-installer VM and issue the following command to wait for the bootstrap to boot up.
``` shell
[root@rhos-installer ~]# ./openshift-install --dir=nutanix-ocp wait-for bootstrap-complete --log-level=info
INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp.ntnxhk1.local:6443...
```
Fingers crossed ![](https://s3.amazonaws.com/pix.iemoji.com/images/emoji/apple/ios-12/256/crossed-fingers.png =50x50) and hope for the best. Go have a coffee, play some Switch games or take a nap. After a while, you should see the following.
```shell
INFO API v1.14.6+32dc4a0 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
```
Congratulations!! :tada::tada::tada: You just got your OCP cluster up and running. Let's do some checking here.
:::danger
:book: Please turn on all VMs within a reasonable time. The problem I encountered before was that I turned on the bootstrap only, and then turned on the other master and worker nodes a few hours later. The installation failed. You'll only see errors by logging in to the bootstrap, and it's hard to troubleshoot. As a result, I recommend starting all the VMs within, say, 30 minutes.
:::
Log in to the OCP cluster and run some oc commands.
1. Shut down the bootstrap VM and remove it from the load balancer, since it may cause issues if you keep using it as part of the cluster.
2. Export the **kubeadmin** credentials.
```shell
[root@rhos-installer ~]# export KUBECONFIG=/root/nutanix-ocp/auth/kubeconfig
```
3. Verify it's working.
```shell
[root@rhos-installer ~]# oc whoami
system:admin
```
4. Try to get all nodes.
```shell
[root@rhos-installer ~]# oc get nodes
NAME STATUS ROLES AGE VERSION
master1.ntnxhk1.local Ready master 49m v1.14.6+cebabbf4a
master2.ntnxhk1.local Ready master 50m v1.14.6+cebabbf4a
master3.ntnxhk1.local Ready master 49m v1.14.6+cebabbf4a
worker1.ntnxhk1.local Ready worker 48m v1.14.6+cebabbf4a
worker2.ntnxhk1.local Ready worker 46m v1.14.6+cebabbf4a
```
Everything looks good here. Now we need to set up the storage for the registry.
### Image Registry Storage Configuration
You'll see that image-registry is NOT available if you issue the following command.
```shell
[root@rhos-installer ~]# watch -n5 oc get clusteroperators
Every 5.0s: oc get clusteroperators
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.2.12 True False False 42m
cloud-credential 4.2.12 True False False 62m
cluster-autoscaler 4.2.12 True False False 47m
console 4.2.12 True False False 45m
dns 4.2.12 True False False 61m
image-registry False False True 49m
ingress 4.2.12 True False False 48m
insights 4.2.12 True False False 62m
kube-apiserver 4.2.12 True False False 59m
kube-controller-manager 4.2.12 True False False 57m
kube-scheduler 4.2.12 True False False 56m
machine-api 4.2.12 True False False 62m
machine-config 4.2.12 True False False 57m
marketplace 4.2.12 True False False 48m
monitoring 4.2.12 True False False 45m
network 4.2.12 True False False 58m
node-tuning 4.2.12 True False False 48m
openshift-apiserver 4.2.12 True False False 49m
openshift-controller-manager 4.2.12 True False False 60m
openshift-samples 4.2.12 True False False 48m
operator-lifecycle-manager 4.2.12 True False False 60m
operator-lifecycle-manager-catalog 4.2.12 True False False 60m
operator-lifecycle-manager-packageserver 4.2.12 True False False 59m
service-ca 4.2.12 True False False 62m
service-catalog-apiserver 4.2.12 True False False 48m
service-catalog-controller-manager 4.2.12 True False False 49m
storage 4.2.12 True False False 49m
```
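To look at just the registry operator and see why it is degraded (with KUBECONFIG still exported from before):
```shell
oc get clusteroperator image-registry
oc describe clusteroperator image-registry
```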
## Next Step
The deployment is not finished yet since the image registry has no storage and is therefore not available.
In the next part of this guide, we'll install Nutanix CSI driver and configure the image registry.