This use case details how to deploy an OpenShift cluster with the external
platform type in Oracle Cloud Infrastructure (OCI), deploying the provider's Cloud Controller
Manager (CCM).
The guide is derived from the "Installing a cluster on any platform" documentation, adapted to the external platform type.
The steps provide low-level details to customize Oracle components such as the CCM.
This guide is organized into three sections.
If you are exploring how to customize the OpenShift platform external type with the CCM, without
deploying the whole cluster in OCI, feel free to jump to Section 2.
Sections 1 and 3 are mostly OCI-specific and are valuable for readers exploring
the OCI manual deployment in detail.
!!! tip "Automation options"
The goal of this document is to provide details of the platform external type,
without focusing on the infrastructure automation. The tool used to
provision the resources described in this guide is the Oracle Cloud CLI.
Alternatively, the automation can be achieved using official
[Ansible](https://docs.oracle.com/en-us/iaas/tools/oci-ansible-collection/4.25.0/index.html)
or [Terraform](https://registry.terraform.io/providers/oracle/oci/latest/docs)
modules.
!!! danger "Unsupported Document"
This guide is created only for Red Hat partners or providers aiming to extend
external components in OpenShift, and should not be used as an official or
supported OpenShift installation method.
Please review the product documentation for the supported installation methods.
Download the OpenShift CLI and installer:
!!! tip "Credentials"
The Red Hat Cloud credential (pull secret)
is required to pull images from the repository quay.io/openshift-release-dev/ocp-release.
Alternatively, you can provide the option `-a /path/to/pull-secret.json`.
The examples in this document export the path of the pull secret to the environment variable `PULL_SECRET_FILE`.
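For example, assuming the pull secret was downloaded to a local path (the path below is only an illustration; adjust it to your environment):
# Hypothetical location of the downloaded pull secret
export PULL_SECRET_FILE="${HOME}/.openshift/pull-secret-latest.json"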
!!! warning "Supported OpenShift versions"
The Platform External is available in OpenShift 4.14+.
oc adm release extract -a $PULL_SECRET_FILE \
--tools "quay.io/openshift-release-dev/ocp-release:4.14.0-rc.7-x86_64"
tar xvfz openshift-client-*.tar.gz
tar xvfz openshift-install-*.tar.gz
Move the binaries openshift-install and oc to any directory exported in the $PATH.
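For example, assuming /usr/local/bin is already in your $PATH (any exported directory works):
# Example destination; any directory in $PATH works
sudo mv ./openshift-install ./oc /usr/local/bin/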
The OCI CLI is used in this guide to create infrastructure resources in OCI.
python3.9 -m venv ./venv-oci && source ./venv-oci/bin/activate
pip install oci-cli
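If the CLI has not been configured on this machine yet, it needs the tenancy credentials before any of the commands below will work. A minimal sketch, assuming an API-key-based configuration:
# Interactive setup: writes ~/.oci/config and creates an API signing key
oci setup config
# Sanity check: should list the OCI regions without errors
oci iam region list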
Download jq, used to filter the results returned by the CLI.
Download yq, used to patch the YAML manifests:
wget -O yq "https://github.com/mikefarah/yq/releases/download/v4.34.1/yq_linux_amd64"
chmod u+x yq
Download butane, used to generate the MachineConfig objects from Butane configs:
wget -O butane "https://github.com/coreos/butane/releases/download/v0.18.0/butane-x86_64-unknown-linux-gnu"
chmod u+x butane
A user with administrator access was used to create the OpenShift cluster described
in this use case.
The cluster was created in a dedicated compartment in Oracle Cloud Infrastructure,
which allows the creation of custom policies for components like the Cloud Controller Manager.
The following steps describe how to create a compartment at any nested level,
and to create the predefined tags used to apply policies to the compartment:
# A new compartment will be created as a child of this:
PARENT_COMPARTMENT_ID="<ocid1.compartment.oc1...>"
# Cluster Name
CLUSTER_NAME="ocp-oci-demo"
# DNS information
BASE_DOMAIN=example.com
DNS_COMPARTMENT_ID="<ocid1.compartment.oc1...>"
Create the compartment as a child of ${PARENT_COMPARTMENT_ID}:
COMPARTMENT_NAME_OPENSHIFT="$CLUSTER_NAME"
COMPARTMENT_ID_OPENSHIFT=$(oci iam compartment create \
--compartment-id "$PARENT_COMPARTMENT_ID" \
--description "$COMPARTMENT_NAME_OPENSHIFT compartment" \
--name "$COMPARTMENT_NAME_OPENSHIFT" \
--wait-for-state ACTIVE \
--query data.id --raw-output)
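Optionally, confirm the compartment was created and that the variable holds its OCID (a simple sanity check, not part of the original flow):
# Should print the compartment OCID and the state ACTIVE
echo "$COMPARTMENT_ID_OPENSHIFT"
oci iam compartment get --compartment-id "$COMPARTMENT_ID_OPENSHIFT" \
  --query 'data."lifecycle-state"' --raw-output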
There are two methods to provide authentication for the Cloud Controller Manager to access the Cloud API: user principals (API keys) and instance principals.
The steps described in this document use Instance Principals.
Instance principals require extra steps to grant the instances permission to access
the APIs. The steps below describe how to create the namespace tags used in the
Dynamic Group rule, filtering only the Control Plane nodes to take the actions defined in the
compartment's Policy.
Steps:
TAG_NAMESPACE_ID=$(oci iam tag-namespace create \
--compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
--description "Cluster Name" \
--name "$CLUSTER_NAME" \
--wait-for-state ACTIVE \
--query data.id --raw-output)
oci iam tag create \
--description "OpenShift Node Role" \
--name "role" \
--tag-namespace-id "$TAG_NAMESPACE_ID" \
--validator '{"validatorType":"ENUM","values":["master","worker"]}'
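To double-check the tag namespace and the role tag key before moving on, the tag definitions can be listed (an optional verification):
# Lists the tag keys defined in the cluster's tag namespace; expect "role"
oci iam tag list --tag-namespace-id "$TAG_NAMESPACE_ID" --query 'data[].name'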
Create a Dynamic Group named ${CLUSTER_NAME}-controlplane with the following rule:
DYNAMIC_GROUP_NAME="${CLUSTER_NAME}-controlplane"
oci iam dynamic-group create \
--name "${DYNAMIC_GROUP_NAME}" \
--description "Control Plane nodes for ${CLUSTER_NAME}" \
--matching-rule "Any {instance.compartment.id='$COMPARTMENT_ID_OPENSHIFT', tag.${CLUSTER_NAME}.role.value='master'}" \
--wait-for-state ACTIVE
Create the Policy allowing the Dynamic Group ($DYNAMIC_GROUP_NAME) to manage the cloud resources in the compartment ($COMPARTMENT_NAME_OPENSHIFT):
POLICY_NAME="${CLUSTER_NAME}-cloud-controller-manager"
oci iam policy create --name $POLICY_NAME \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--description "Allow Cloud Controller Manager in OpenShift access Cloud Resources" \
--statements "[
\"Allow dynamic-group $DYNAMIC_GROUP_NAME to manage volume-family in compartment $COMPARTMENT_NAME_OPENSHIFT\",
\"Allow dynamic-group $DYNAMIC_GROUP_NAME to manage instance-family in compartment $COMPARTMENT_NAME_OPENSHIFT\",
\"Allow dynamic-group $DYNAMIC_GROUP_NAME to manage security-lists in compartment $COMPARTMENT_NAME_OPENSHIFT\",
\"Allow dynamic-group $DYNAMIC_GROUP_NAME to use virtual-network-family in compartment $COMPARTMENT_NAME_OPENSHIFT\",
\"Allow dynamic-group $DYNAMIC_GROUP_NAME to manage load-balancers in compartment $COMPARTMENT_NAME_OPENSHIFT\"]"
!!! tip "Helper"
OCI CLI documentation for oci iam policy create
OCI Console path: `Menu > Identity & Security > Policies >
(Select the Compartment 'openshift') > Create Policy > Name=openshift-oci-cloud-controller-manager`
The OCI VCN (Virtual Cloud Network) must be created using the Networking requirements for user-provisioned infrastructure.
!!! tip "Info"
The resource names used in this guide are not a standard, but follow a naming
convention similar to the one created by the installer on supported cloud
providers. The names are also used in later sections to discover resources.
Create the VCN and dependencies with the following configuration:
Resource | Name | Attributes | Note |
---|---|---|---|
VCN | ${CLUSTER_NAME}-vcn | CIDR 10.0.0.0/20 | – |
Subnet | ${CLUSTER_NAME}-net-public | 10.0.0.0/21 | Regional, resolve DNS (pub) |
Subnet | ${CLUSTER_NAME}-net-private | 10.0.8.0/21 | Regional, resolve DNS (priv) |
Internet Gateway | ${CLUSTER_NAME}-igw | – | Attached to the public route table |
NAT Gateway | ${CLUSTER_NAME}-natgw | – | Attached to the private route table |
Route Table | ${CLUSTER_NAME}-rtb-public | 0/0 to igw | – |
Route Table | ${CLUSTER_NAME}-rtb-private | 0/0 to natgw | – |
NSG | ${CLUSTER_NAME}-nsg-nlb | – | Attached to the Load Balancer |
NSG | ${CLUSTER_NAME}-nsg-controlplane | – | Attached to the Control Plane nodes |
NSG | ${CLUSTER_NAME}-nsg-compute | – | Attached to the Compute nodes |
Steps:
# Base doc for network service
# https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network.html
# VCN
## https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network/vcn/create.html
VCN_ID=$(oci network vcn create \
--compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
--display-name "${CLUSTER_NAME}-vcn" \
--cidr-block "10.0.0.0/20" \
--dns-label "ocp" \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
# IGW
## https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network/internet-gateway/create.html
IGW_ID=$(oci network internet-gateway create \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--display-name "${CLUSTER_NAME}-igw" \
--is-enabled true \
--wait-for-state AVAILABLE \
--vcn-id $VCN_ID \
--query data.id --raw-output)
# NAT Gateway
## https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network/nat-gateway/create.html
NGW_ID=$(oci network nat-gateway create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--display-name "${CLUSTER_NAME}-natgw" \
--vcn-id $VCN_ID \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
# Route Table: Public
## https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network/route-table/create.html
RTB_PUB_ID=$(oci network route-table create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--vcn-id $VCN_ID \
--display-name "${CLUSTER_NAME}-rtb-public" \
--route-rules "[{\"cidrBlock\":\"0.0.0.0/0\",\"networkEntityId\":\"$IGW_ID\"}]" \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
# Route Table: Private
RTB_PVT_ID=$(oci network route-table create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--vcn-id $VCN_ID \
--display-name "${CLUSTER_NAME}-rtb-private" \
--route-rules "[{\"cidrBlock\":\"0.0.0.0/0\",\"networkEntityId\":\"$NGW_ID\"}]" \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
# Subnet Public (regional)
# https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network/subnet/create.html
SUBNET_ID_PUBLIC=$(oci network subnet create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--vcn-id $VCN_ID \
--display-name "${CLUSTER_NAME}-net-public" \
--dns-label "pub" \
--cidr-block "10.0.0.0/21" \
--route-table-id $RTB_PUB_ID \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
# Subnet Private (regional)
SUBNET_ID_PRIVATE=$(oci network subnet create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--vcn-id $VCN_ID \
--display-name "${CLUSTER_NAME}-net-private" \
--dns-label "priv" \
--cidr-block "10.0.8.0/21" \
--route-table-id $RTB_PVT_ID \
--prohibit-internet-ingress true \
--prohibit-public-ip-on-vnic true \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
# NSGs (created empty so they can be referenced in the rules)
## NSG Control Plane
## https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network/nsg/create.html
NSG_ID_CPL=$(oci network nsg create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--vcn-id $VCN_ID \
--display-name "${CLUSTER_NAME}-nsg-controlplane" \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
## NSG Compute/workers
NSG_ID_CMP=$(oci network nsg create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--vcn-id $VCN_ID \
--display-name "${CLUSTER_NAME}-nsg-compute" \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
## NSG Load Balancers
NSG_ID_NLB=$(oci network nsg create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--vcn-id $VCN_ID \
--display-name "${CLUSTER_NAME}-nsg-nlb" \
--wait-for-state AVAILABLE \
--query data.id --raw-output)
# NSG Rules: Control Plane NSG
## https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/network/nsg/rules/add.html
# oci network NSG rules add --generate-param-json-input security-rules
cat <<EOF > ./oci-vcn-nsg-rule-nodes.json
[
{
"description": "allow all outbound traffic",
"protocol": "all", "destination": "0.0.0.0/0", "destination-type": "CIDR_BLOCK",
"direction": "EGRESS", "is-stateless": false
},
{
"description": "All from control plane NSG",
"direction": "INGRESS", "is-stateless": false,
"protocol": "all",
"source": "$NSG_ID_CPL", "source-type": "NETWORK_SECURITY_GROUP"
},
{
"description": "All from control plane NSG",
"direction": "INGRESS", "is-stateless": false,
"protocol": "all",
"source": "$NSG_ID_CMP", "source-type": "NETWORK_SECURITY_GROUP"
},
{
"description": "All from control plane NSG",
"direction": "INGRESS", "is-stateless": false,
"protocol": "all",
"source": "$NSG_ID_NLB", "source-type": "NETWORK_SECURITY_GROUP"
},
{
"description": "allow ssh to nodes",
"direction": "INGRESS", "is-stateless": false,
"protocol": "6",
"source": "0.0.0.0/0", "source-type": "CIDR_BLOCK",
"tcp-options": {
"destination-port-range": {
"max": 22,
"min": 22
}
}
}
]
EOF
oci network nsg rules add \
--nsg-id "${NSG_ID_CPL}" \
--security-rules file://oci-vcn-nsg-rule-nodes.json
oci network nsg rules add \
--nsg-id "${NSG_ID_CMP}" \
--security-rules file://oci-vcn-nsg-rule-nodes.json
# NSG security rules for the load balancer NSG
cat <<EOF > ./oci-vcn-nsg-rule-nlb.json
[
{
"description": "allow Kube API",
"direction": "INGRESS", "is-stateless": false,
"source-type": "CIDR_BLOCK", "protocol": "6", "source": "0.0.0.0/0",
"tcp-options": { "destination-port-range": {
"max": 6443, "min": 6443
}}
},
{
"description": "allow Kube API to Control Plane",
"destination": "$NSG_ID_CPL",
"destination-type": "NETWORK_SECURITY_GROUP",
"direction": "EGRESS", "is-stateless": false,
"protocol": "6", "tcp-options":{"destination-port-range":{
"max": 6443, "min": 6443
}}
},
{
"description": "allow MCS listener from control plane pool",
"direction": "INGRESS",
"is-stateless": false, "protocol": "6",
"source": "$NSG_ID_CPL", "source-type": "NETWORK_SECURITY_GROUP",
"tcp-options": {"destination-port-range":{
"max": 22623, "min": 22623
}}
},
{
"description": "allow MCS listener from compute pool",
"direction": "INGRESS",
"is-stateless": false, "protocol": "6",
"source": "$NSG_ID_CMP", "source-type": "NETWORK_SECURITY_GROUP",
"tcp-options": {"destination-port-range": {
"max": 22623, "min": 22623
}}
},
{
"description": "allow MCS listener access the Control Plane backends",
"destination": "$NSG_ID_CPL",
"destination-type": "NETWORK_SECURITY_GROUP",
"direction": "EGRESS", "is-stateless": false,
"protocol": "6", "tcp-options": {"destination-port-range": {
"max": 22623, "min": 22623
}}
},
{
"description": "allow listener for Ingress HTTP",
"direction": "INGRESS", "is-stateless": false,
"source-type": "CIDR_BLOCK", "protocol": "6", "source": "0.0.0.0/0",
"tcp-options": {"destination-port-range": {
"max": 80, "min": 80
}}
},
{
"description": "allow listener for Ingress HTTPS",
"direction": "INGRESS", "is-stateless": false,
"source-type": "CIDR_BLOCK", "protocol": "6", "source": "0.0.0.0/0",
"tcp-options": {"destination-port-range": {
"max": 443, "min": 443
}}
},
{
"description": "allow backend access the Compute pool for HTTP",
"destination": "$NSG_ID_CMP",
"destination-type": "NETWORK_SECURITY_GROUP",
"direction": "EGRESS", "is-stateless": false,
"protocol": "6", "tcp-options": {"destination-port-range": {
"max": 80, "min": 80
}}
},
{
"description": "allow backend access the Compute pool for HTTPS",
"destination": "$NSG_ID_CMP",
"destination-type": "NETWORK_SECURITY_GROUP",
"direction": "EGRESS", "is-stateless": false,
"protocol": "6", "tcp-options": {"destination-port-range": {
"max": 443, "min": 443
}}
}
]
EOF
oci network nsg rules add \
--nsg-id "${NSG_ID_NLB}" \
--security-rules file://oci-vcn-nsg-rule-nlb.json
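If you want to review what was applied, the rules of each NSG can be listed; for example, for the load balancer NSG (an optional verification):
# Lists the security rules attached to the load balancer NSG
oci network nsg rules list --nsg-id "$NSG_ID_NLB" \
  --query 'data[].{direction:direction,protocol:protocol,description:description}' \
  --output table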
Steps to create the OCI Network Load Balancer (NLB) for the cluster.
A single NLB is created, with listeners for the Kubernetes API Server, the Machine
Config Server (MCS), and the Ingress HTTP and HTTPS. The MCS is
the only one with internal-only access.
The following resources will be created in the NLB:
Backend Sets:

BSet Name | Port | Health Check (Proto/Path/Interval/Timeout) |
---|---|---|
${CLUSTER_NAME}-api | TCP/6443 | HTTPS /readyz 10s/3s |
${CLUSTER_NAME}-mcs | TCP/22623 | HTTPS /healthz 10s/3s |
${CLUSTER_NAME}-ingress-http | TCP/80 | TCP/80 10s/3s |
${CLUSTER_NAME}-ingress-https | TCP/443 | TCP/443 10s/3s |

Listeners:

Name | Port | BSet Name |
---|---|---|
${CLUSTER_NAME}-api | TCP/6443 | ${CLUSTER_NAME}-api |
${CLUSTER_NAME}-mcs | TCP/22623 | ${CLUSTER_NAME}-mcs |
${CLUSTER_NAME}-ingress-http | TCP/80 | ${CLUSTER_NAME}-ingress-http |
${CLUSTER_NAME}-ingress-https | TCP/443 | ${CLUSTER_NAME}-ingress-https |
Steps:
# NLB base: https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/nlb.html
# Create BackendSets
## Kubernetes API Server (KAS): api
## Machine Config Server (MCS): mcs
## Ingress HTTP
## Ingress HTTPS
cat <<EOF > ./oci-nlb-backends.json
{
"${CLUSTER_NAME}-api": {
"health-checker": {
"interval-in-millis": 10000,
"port": 6443,
"protocol": "HTTPS",
"retries": 3,
"return-code": 200,
"timeout-in-millis": 3000,
"url-path": "/readyz"
},
"ip-version": "IPV4",
"is-preserve-source": false,
"name": "${CLUSTER_NAME}-api",
"policy": "FIVE_TUPLE"
},
"${CLUSTER_NAME}-mcs": {
"health-checker": {
"interval-in-millis": 10000,
"port": 22623,
"protocol": "HTTPS",
"retries": 3,
"return-code": 200,
"timeout-in-millis": 3000,
"url-path": "/healthz"
},
"ip-version": "IPV4",
"is-preserve-source": false,
"name": "${CLUSTER_NAME}-mcs",
"policy": "FIVE_TUPLE"
},
"${CLUSTER_NAME}-ingress-http": {
"health-checker": {
"interval-in-millis": 10000,
"port": 80,
"protocol": "TCP",
"retries": 3,
"timeout-in-millis": 3000
},
"ip-version": "IPV4",
"is-preserve-source": false,
"name": "${CLUSTER_NAME}-ingress-http",
"policy": "FIVE_TUPLE"
},
"${CLUSTER_NAME}-ingress-https": {
"health-checker": {
"interval-in-millis": 10000,
"port": 443,
"protocol": "TCP",
"retries": 3,
"timeout-in-millis": 3000
},
"ip-version": "IPV4",
"is-preserve-source": false,
"name": "${CLUSTER_NAME}-ingress-https",
"policy": "FIVE_TUPLE"
}
}
EOF
cat <<EOF > ./oci-nlb-listeners.json
{
"${CLUSTER_NAME}-api": {
"default-backend-set-name": "${CLUSTER_NAME}-api",
"ip-version": "IPV4",
"name": "${CLUSTER_NAME}-api",
"port": 6443,
"protocol": "TCP"
},
"${CLUSTER_NAME}-mcs": {
"default-backend-set-name": "${CLUSTER_NAME}-mcs",
"ip-version": "IPV4",
"name": "${CLUSTER_NAME}-mcs",
"port": 22623,
"protocol": "TCP"
},
"${CLUSTER_NAME}-ingress-http": {
"default-backend-set-name": "${CLUSTER_NAME}-ingress-http",
"ip-version": "IPV4",
"name": "${CLUSTER_NAME}-ingress-http",
"port": 80,
"protocol": "TCP"
},
"${CLUSTER_NAME}-ingress-https": {
"default-backend-set-name": "${CLUSTER_NAME}-ingress-https",
"ip-version": "IPV4",
"name": "${CLUSTER_NAME}-ingress-https",
"port": 443,
"protocol": "TCP"
}
}
EOF
# NLB create
# https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/nlb/network-load-balancer/create.html
NLB_ID=$(oci nlb network-load-balancer create \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--display-name "${CLUSTER_NAME}-nlb" \
--subnet-id "${SUBNET_ID_PUBLIC}" \
--backend-sets file://oci-nlb-backends.json \
--listeners file://oci-nlb-listeners.json \
--network-security-group-ids "[\"$NSG_ID_NLB\"]" \
--is-private false \
--nlb-ip-version "IPV4" \
--wait-for-state ACCEPTED \
--query data.id --raw-output)
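The create call above only waits for the work request to be ACCEPTED, which does not mean the NLB is ready; it can take a few minutes to become ACTIVE. A small sketch to poll its lifecycle state before proceeding:
# Poll the NLB until it leaves the provisioning states
while true; do
  NLB_STATE=$(oci nlb network-load-balancer get \
    --network-load-balancer-id "$NLB_ID" \
    --query 'data."lifecycle-state"' --raw-output)
  echo "NLB state: $NLB_STATE"
  [ "$NLB_STATE" == "ACTIVE" ] && break
  sleep 30
done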
Steps to create the DNS resource records pointing to the API addresses (public and private) and
to the default Ingress router.
The following DNS records will be created:
Domain | Record | Value |
---|---|---|
${CLUSTER_NAME}.${BASE_DOMAIN} | api | Public IP address or DNS of the Load Balancer |
${CLUSTER_NAME}.${BASE_DOMAIN} | api-int | Private IP address or DNS of the Load Balancer |
${CLUSTER_NAME}.${BASE_DOMAIN} | *.apps | Public IP address or DNS of the Load Balancer |
!!! tip "Helper"
It's not required to have a publicly accessible API and DNS domain; alternatively, you can use a bastion host to access the private API endpoint.
Steps:
# NLB IPs
## https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.30.2/oci_cli_docs/cmdref/nlb/network-load-balancer/list.html
## Public
NLB_IP_PUBLIC=$(oci nlb network-load-balancer list \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--display-name "${CLUSTER_NAME}-nlb" \
| jq -r '.data.items[0]["ip-addresses"][] | select(.["is-public"]==true) | .["ip-address"]')
## Private
NLB_IP_PRIVATE=$(oci nlb network-load-balancer list \
--compartment-id ${COMPARTMENT_ID_OPENSHIFT} \
--display-name "${CLUSTER_NAME}-nlb" \
| jq -r '.data.items[0]["ip-addresses"][] | select(.["is-public"]==false) | .["ip-address"]')
# DNS record
## Assuming the zone already exists and is in DNS_COMPARTMENT_ID
DNS_RECORD_APIINT="api-int.${CLUSTER_NAME}.${BASE_DOMAIN}"
oci dns record rrset patch \
--compartment-id ${DNS_COMPARTMENT_ID} \
--domain "${DNS_RECORD_APIINT}" \
--rtype "A" \
--zone-name-or-id "${BASE_DOMAIN}" \
--scope GLOBAL \
--items "[{
\"domain\": \"${DNS_RECORD_APIINT}\",
\"rdata\": \"${NLB_IP_PRIVATE}\",
\"rtype\": \"A\", \"ttl\": 300
}]"
DNS_RECORD_APIEXT="api.${CLUSTER_NAME}.${BASE_DOMAIN}"
oci dns record rrset patch \
--compartment-id ${DNS_COMPARTMENT_ID} \
--domain "${DNS_RECORD_APIEXT}" \
--rtype "A" \
--zone-name-or-id "${BASE_DOMAIN}" \
--scope GLOBAL \
--items "[{
\"domain\": \"${DNS_RECORD_APIEXT}\",
\"rdata\": \"${NLB_IP_PUBLIC}\",
\"rtype\": \"A\", \"ttl\": 300
}]"
DNS_RECORD_APPS="*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"
oci dns record rrset patch \
--compartment-id ${DNS_COMPARTMENT_ID} \
--domain "${DNS_RECORD_APPS}" \
--rtype "A" \
--zone-name-or-id "${BASE_DOMAIN}" \
--scope GLOBAL \
--items "[{
\"domain\": \"${DNS_RECORD_APPS}\",
\"rdata\": \"${NLB_IP_PUBLIC}\",
\"rtype\": \"A\", \"ttl\": 300
}]"
This section describes how to set up OpenShift to customize the manifests
used in the installation.
Modify and export the variables used to build the install-config.yaml and in the later steps:
INSTALL_DIR=./install-dir
mkdir -p $INSTALL_DIR
SSH_PUB_KEY_FILE="${HOME}/.ssh/bundle.pub"
PULL_SECRET_FILE="${HOME}/.openshift/pull-secret-latest.json"
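If you do not already have an SSH key pair to use, one can be generated; the path below is only an example and must match SSH_PUB_KEY_FILE:
# Example key pair; the public key path must match SSH_PUB_KEY_FILE
ssh-keygen -t ed25519 -N '' -f "${HOME}/.ssh/bundle"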
Create the install-config.yaml, setting the platform type to external:
cat <<EOF > ${INSTALL_DIR}/install-config.yaml
apiVersion: v1
baseDomain: ${BASE_DOMAIN}
metadata:
name: "${CLUSTER_NAME}"
platform:
external:
platformName: oci
publish: External
pullSecret: >
$(cat ${PULL_SECRET_FILE})
sshKey: |
$(cat ${SSH_PUB_KEY_FILE})
EOF
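Note that openshift-install consumes install-config.yaml when generating manifests, so it can be useful to keep a copy before running the next command (an optional step):
# Keep a backup; the installer removes install-config.yaml from INSTALL_DIR
cp "${INSTALL_DIR}/install-config.yaml" "./install-config-${CLUSTER_NAME}.yaml.bak"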
openshift-install create manifests --dir $INSTALL_DIR
The steps in this section describe how to customize the OpenShift installation
by providing the Cloud Controller Manager manifests to be added during the bootstrap process.
!!! warning "Info"
This guide is based on the OCI CCM v1.26.0. Review the
project documentation
for more information.
Steps:
!!! danger "Important"
Red Hat does not recommend creating resources in namespaces prefixed with kube-* and openshift-*.
A custom namespace manifest must be created, and the deployment manifests must
then be adapted to use the custom namespace.
See [the documentation](https://docs.openshift.com/container-platform/4.13/applications/projects/working-with-projects.html) for more information.
OCI_CCM_NAMESPACE=oci-cloud-controller-manager
cat <<EOF > ${INSTALL_DIR}/manifests/oci-00-ccm-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: $OCI_CCM_NAMESPACE
annotations:
workload.openshift.io/allowed: management
include.release.openshift.io/self-managed-high-availability: "true"
labels:
"pod-security.kubernetes.io/enforce": "privileged"
"pod-security.kubernetes.io/audit": "privileged"
"pod-security.kubernetes.io/warn": "privileged"
"security.openshift.io/scc.podSecurityLabelSync": "false"
"openshift.io/run-level": "0"
"pod-security.kubernetes.io/enforce-version": "v1.24"
EOF
OCI_CLUSTER_REGION=us-sanjose-1
# Review the defined vars
cat <<EOF>/dev/stdout
OCI_CLUSTER_REGION=$OCI_CLUSTER_REGION
VCN_ID=$VCN_ID
SUBNET_ID_PUBLIC=$SUBNET_ID_PUBLIC
EOF
cat <<EOF > ./oci-secret-cloud-provider.yaml
auth:
region: $OCI_CLUSTER_REGION
useInstancePrincipals: true
compartment: $COMPARTMENT_ID_OPENSHIFT
vcn: $VCN_ID
loadBalancer:
securityListManagementMode: None
subnet1: $SUBNET_ID_PUBLIC
EOF
cat <<EOF > ${INSTALL_DIR}/manifests/oci-01-ccm-00-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
name: oci-cloud-controller-manager
namespace: $OCI_CCM_NAMESPACE
data:
cloud-provider.yaml: $(base64 -w0 < ./oci-secret-cloud-provider.yaml)
EOF
Download the OCI CCM manifests to be patched and placed into ${INSTALL_DIR}/manifests:
CCM_RELEASE=v1.26.0
wget https://github.com/oracle/oci-cloud-controller-manager/releases/download/${CCM_RELEASE}/oci-cloud-controller-manager-rbac.yaml -O oci-cloud-controller-manager-rbac.yaml
wget https://github.com/oracle/oci-cloud-controller-manager/releases/download/${CCM_RELEASE}/oci-cloud-controller-manager.yaml -O oci-cloud-controller-manager.yaml
Patch the namespace of the ServiceAccount:
./yq ". | select(.kind==\"ServiceAccount\").metadata.namespace=\"$OCI_CCM_NAMESPACE\"" oci-cloud-controller-manager-rbac.yaml > ./oci-cloud-controller-manager-rbac_patched.yaml
Patch the ClusterRoleBinding subjects to reference the ServiceAccount in the custom namespace:
cat << EOF > ./oci-ccm-rbac_patch_crb-subject.yaml
- kind: ServiceAccount
name: cloud-controller-manager
namespace: $OCI_CCM_NAMESPACE
EOF
./yq eval-all -i ". | select(.kind==\"ClusterRoleBinding\").subjects *= load(\"oci-ccm-rbac_patch_crb-subject.yaml\")" ./oci-cloud-controller-manager-rbac_patched.yaml
./yq -s '"./oci-01-ccm-01-rbac_" + $index' ./oci-cloud-controller-manager-rbac_patched.yaml &&\
mv -v ./oci-01-ccm-01-rbac_*.yml ${INSTALL_DIR}/manifests/
# Create the DaemonSet namespace and tolerations patch
cat <<EOF > ./oci-cloud-controller-manager-ds_patch1.yaml
metadata:
namespace: $OCI_CCM_NAMESPACE
spec:
template:
spec:
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoSchedule
EOF
# Create the containers' env patch
cat <<EOF > ./oci-cloud-controller-manager-ds_patch2.yaml
spec:
template:
spec:
containers:
- env:
- name: KUBERNETES_PORT
value: "tcp://api-int.$CLUSTER_NAME.$BASE_DOMAIN:6443"
- name: KUBERNETES_PORT_443_TCP
value: "tcp://api-int.$CLUSTER_NAME.$BASE_DOMAIN:6443"
- name: KUBERNETES_PORT_443_TCP_ADDR
value: "api-int.$CLUSTER_NAME.$BASE_DOMAIN"
- name: KUBERNETES_PORT_443_TCP_PORT
value: "6443"
- name: KUBERNETES_PORT_443_TCP_PROTO
value: "tcp"
- name: KUBERNETES_SERVICE_HOST
value: "api-int.$CLUSTER_NAME.$BASE_DOMAIN"
- name: KUBERNETES_SERVICE_PORT
value: "6443"
- name: KUBERNETES_SERVICE_PORT_HTTPS
value: "6443"
EOF
# Merge required objects for the pod's template spec
./yq eval-all '. as $item ireduce ({}; . *+ $item)' oci-cloud-controller-manager.yaml oci-cloud-controller-manager-ds_patch1.yaml > oci-cloud-controller-manager-ds_patched1.yaml
# Merge required objects for the pod's containers spec
./yq eval-all '.spec.template.spec.containers[] as $item ireduce ({}; . *+ $item)' oci-cloud-controller-manager-ds_patched1.yaml ./oci-cloud-controller-manager-ds_patch2.yaml > ./oci-cloud-controller-manager-ds_patched2.yaml
# merge patches to ${INSTALL_DIR}/manifests/oci-01-ccm-02-daemonset.yaml
./yq eval-all '.spec.template.spec.containers[] *= load("./oci-cloud-controller-manager-ds_patched2.yaml")' oci-cloud-controller-manager-ds_patched1.yaml > ${INSTALL_DIR}/manifests/oci-01-ccm-02-daemonset.yaml
The following CCM manifest files must exist in the installation manifests/ directory:
$ tree $INSTALL_DIR/manifests/
[...]
├── oci-00-ccm-namespace.yaml
├── oci-01-ccm-00-secret.yaml
├── oci-01-ccm-01-rbac_0.yml
├── oci-01-ccm-01-rbac_1.yml
├── oci-01-ccm-01-rbac_2.yml
├── oci-01-ccm-02-daemonset.yaml
[...]
The kubelet parameter providerID is the unique identifier of the instance in OCI.
It must be set before the node is initialized by the CCM, using a custom MachineConfig.
The Provider ID must be set dynamically for each node. The steps below describe
how to create MachineConfig objects that install a systemd unit which discovers
the Provider ID by querying the Instance Metadata Service (IMDS) and writes it
to a kubelet configuration drop-in.
Steps:
function create_machineconfig_kubelet() {
local node_role=$1
cat << EOF > ./mc-kubelet-$node_role.bu
variant: openshift
version: 4.13.0
metadata:
name: 00-$node_role-kubelet-providerid
labels:
machineconfiguration.openshift.io/role: $node_role
storage:
files:
- mode: 0755
path: "/usr/local/bin/kubelet-providerid"
contents:
inline: |
#!/bin/bash
set -e -o pipefail
NODECONF=/etc/systemd/system/kubelet.service.d/20-providerid.conf
if [ -e "\${NODECONF}" ]; then
echo "Not replacing existing \${NODECONF}"
exit 0
fi
PROVIDERID=\$(curl -H "Authorization: Bearer Oracle" -sL http://169.254.169.254/opc/v2/instance/ | jq -r .id);
cat > "\${NODECONF}" <<EOF
[Service]
Environment="KUBELET_PROVIDERID=\${PROVIDERID}"
EOF
systemd:
units:
- name: kubelet-providerid.service
enabled: true
contents: |
[Unit]
Description=Fetch kubelet provider id from Metadata
After=NetworkManager-wait-online.service
Before=kubelet.service
[Service]
ExecStart=/usr/local/bin/kubelet-providerid
Type=oneshot
[Install]
WantedBy=network-online.target
EOF
}
create_machineconfig_kubelet "master"
create_machineconfig_kubelet "worker"
Process the Butane configs to generate the MachineConfig objects:
function process_butane() {
local src_file=$1; shift
local dest_file=$1
./butane $src_file -o $dest_file
}
process_butane "./mc-kubelet-master.bu" "${INSTALL_DIR}/openshift/99_openshift-machineconfig_00-master-kubelet-providerid.yaml"
process_butane "./mc-kubelet-worker.bu" "${INSTALL_DIR}/openshift/99_openshift-machineconfig_00-worker-kubelet-providerid.yaml"
The MachineConfig files must exist:
ls ${INSTALL_DIR}/openshift/99_openshift-machineconfig_00-*-kubelet-providerid.yaml
Once the manifests are placed, you can create the cluster ignition configurations:
openshift-install create ignition-configs --dir $INSTALL_DIR
The ignition files must be generated in the install directory (files with the extension *.ign):
$ tree $INSTALL_DIR
/path/to/install-dir
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
The first part of this section describes how to create the compute nodes and their dependencies.
Once the instances are provisioned, the bootstrap node initializes the control plane; when
the control plane nodes join the cluster, are initialized by the CCM, and the control plane
workloads are scheduled, the bootstrap process completes.
The second part describes how to approve the CSRs for the worker nodes and how to review
the cluster installation.
Every node role uses different ignition files. The following table shows which
ignition file is required for each node role:
Node Name | Ignition file | Fetch source |
---|---|---|
bootstrap | ${PWD}/user-data-bootstrap.json | Pre-authenticated bucket URL |
control plane nodes (pool) | ${INSTALL_DIR}/master.ign | Internal Load Balancer (MCS) |
compute nodes (pool) | ${INSTALL_DIR}/worker.ign | Internal Load Balancer (MCS) |
Run the following commands to populate the values of the environment variables required to create the instances:

- IMAGE_ID: Custom RHCOS image uploaded to OCI (created in the image steps below).
- SUBNET_ID_PUBLIC: Public regional subnet, used by the bootstrap node.
- SUBNET_ID_PRIVATE: Private regional subnet, used to create the control plane and compute nodes.
- NSG_ID_CPL: Network Security Group ID used by the Control Plane nodes.

# Gather subnet IDs
SUBNET_ID_PUBLIC=$(oci network subnet list --compartment-id $COMPARTMENT_ID_OPENSHIFT \
| jq -r '.data[] | select(.["display-name"] | endswith("public")).id')
SUBNET_ID_PRIVATE=$(oci network subnet list --compartment-id $COMPARTMENT_ID_OPENSHIFT \
| jq -r '.data[] | select(.["display-name"] | endswith("private")).id')
# Gather the Network Security group for the control plane
NSG_ID_CPL=$(oci network nsg list -c $COMPARTMENT_ID_OPENSHIFT \
| jq -r '.data[] | select(.["display-name"] | endswith("controlplane")).id')
NSG_ID_CMP=$(oci network nsg list -c $COMPARTMENT_ID_OPENSHIFT \
| jq -r '.data[] | select(.["display-name"] | endswith("compute")).id')
!!! warning "Check if required variables have values before proceeding"
cat <<EOF>/dev/stdout COMPARTMENT_ID_OPENSHIFT=$COMPARTMENT_ID_OPENSHIFT SUBNET_ID_PUBLIC=$SUBNET_ID_PUBLIC SUBNET_ID_PRIVATE=$SUBNET_ID_PRIVATE NSG_ID_CPL=$NSG_ID_CPL EOF
!!! tip "Helper - OCI CLI documentation"
- oci compute image list
- oci network subnet list
- oci network nsg list
The image used in this guide is a QCOW2. The openshift-install command
provides the option coreos print-stream-json to show all the available
artifacts. The steps below describe how to download the image, upload it to
an OCI bucket, and then create a custom compute image.
Export the image name:
IMAGE_NAME=$(basename $(openshift-install coreos print-stream-json | jq -r '.architectures["x86_64"].artifacts["openstack"].formats["qcow2.gz"].disk.location'))
Download the QCOW2 image:
wget $(openshift-install coreos print-stream-json | jq -r '.architectures["x86_64"].artifacts["openstack"].formats["qcow2.gz"].disk.location')
Create the bucket used to store the image and the bootstrap ignition:
BUCKET_NAME="${CLUSTER_NAME}-infra"
oci os bucket create --name $BUCKET_NAME --compartment-id $COMPARTMENT_ID_OPENSHIFT
!!! tip "Helper - OCI CLI documentation"
- oci os bucket create
OCI Console path: `Menu > Storage > Buckets > (Choose the Compartment `openshift`) > Create Bucket`
Upload the image to the bucket:
oci os object put -bn $BUCKET_NAME --name images/${IMAGE_NAME} --file ${IMAGE_NAME}
!!! tip "Helper - OCI CLI documentation"
- oci os object put
OCI Console path: `Menu > Storage > Buckets > (Choose the Compartment `openshift`) > (Choose the Bucket `openshift-infra`) > Objects > Upload`
Import the image from the bucket object as a custom compute image:
STORAGE_NAMESPACE=$(oci os ns get | jq -r .data)
oci compute image import from-object -bn $BUCKET_NAME --name images/${IMAGE_NAME} \
--compartment-id $COMPARTMENT_ID_OPENSHIFT -ns $STORAGE_NAMESPACE \
--display-name ${IMAGE_NAME} --launch-mode "PARAVIRTUALIZED" \
--source-image-type "QCOW2"
# Gather the Custom Compute image for RHCOS
IMAGE_ID=$(oci compute image list --compartment-id $COMPARTMENT_ID_OPENSHIFT \
--display-name $IMAGE_NAME | jq -r '.data[0].id')
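The image import runs asynchronously and can take several minutes; instances can only be launched after the image becomes AVAILABLE. A small sketch to wait for it:
# Wait until the custom RHCOS image finishes importing
while true; do
  IMAGE_STATE=$(oci compute image get --image-id "$IMAGE_ID" \
    --query 'data."lifecycle-state"' --raw-output)
  echo "Image state: $IMAGE_STATE"
  [ "$IMAGE_STATE" == "AVAILABLE" ] && break
  sleep 30
done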
!!! tip "Helper"
OCI CLI documentation for oci compute image import
OCI CLI documentation for [`oci os ns get`](https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.29.1/oci_cli_docs/cmdref/os/ns/get.html)
The bootstrap node is responsible for creating the temporary control plane and serving
the ignition files to the other nodes through the MCS.
The OCI user data has a size limitation that prevents using the bootstrap
ignition file directly when launching the node. Instead, a minimal ignition file will be
created that fetches the full bootstrap configuration from a temporary (pre-authenticated) Bucket Object URL.
Once the bootstrap instance is created, it must be attached to the load balancer in the
Backend Sets of Kubernetes API Server and Machine Config Server.
Steps:
Upload the bootstrap.ign to the infrastructure bucket:
oci os object put -bn $BUCKET_NAME --name bootstrap-${CLUSTER_NAME}.ign \
--file $INSTALL_DIR/bootstrap.ign
!!! tip "Helper"
OCI CLI documentation for oci os object put
!!! warning "Attention"
The bucket object URL will expire in one hour. If you are planning to create
the bootstrap node later, adjust $EXPIRES_TIME accordingly.
The installation certificates expire 24 hours after the ignition files are
created; consider regenerating them if the ignition files are older than that.
Create the pre-authenticated request (temporary URL) for the bootstrap ignition object:
EXPIRES_TIME=$(date -d '+1 hour' --rfc-3339=seconds)
IGN_BOOTSTRAP_URL=$(oci os preauth-request create --name bootstrap-${CLUSTER_NAME} \
-bn $BUCKET_NAME -on bootstrap-${CLUSTER_NAME}.ign \
--access-type ObjectRead --time-expires "$EXPIRES_TIME" \
| jq -r '.data["full-path"]')
!!! tip "Helper"
OCI CLI documentation for oci os preauth-request create
The generated URL for the ignition file bootstrap.ign must be available in the variable $IGN_BOOTSTRAP_URL. Create the minimal user data ignition that references it:
cat <<EOF > ./user-data-bootstrap.json
{
"ignition": {
"config": {
"replace": {
"source": "${IGN_BOOTSTRAP_URL}"
}
},
"version": "3.1.0"
}
}
EOF
Set the availability domain and instance shape, then launch the bootstrap instance:
AVAILABILITY_DOMAIN="gzqB:US-SANJOSE-1-AD-1"
INSTANCE_SHAPE="VM.Standard.E4.Flex"
oci compute instance launch \
--hostname-label "bootstrap" \
--display-name "bootstrap" \
--availability-domain "$AVAILABILITY_DOMAIN" \
--fault-domain "FAULT-DOMAIN-1" \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--subnet-id $SUBNET_ID_PUBLIC \
--nsg-ids "[\"$NSG_ID_CPL\"]" \
--shape "$INSTANCE_SHAPE" \
--shape-config "{\"memoryInGBs\":16.0,\"ocpus\":8.0}" \
--source-details "{\"bootVolumeSizeInGBs\":120,\"bootVolumeVpusPerGB\":60,\"imageId\":\"${IMAGE_ID}\",\"sourceType\":\"image\"}" \
--agent-config '{"areAllPluginsDisabled": true}' \
--assign-public-ip True \
--user-data-file "./user-data-bootstrap.json" \
--defined-tags "{\"$CLUSTER_NAME\":{\"role\":\"master\"}}"
!!! tip "Helper - OCI CLI documentation"
- oci compute instance launch
- oci compute shape list
!!! tip "Follow the bootstrap process"
You can SSH to the node and follow the bootstrap process:
journalctl -b -f -u release-image.service -u bootkube.service
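To find the address to connect to, the public IP of the bootstrap instance can be retrieved from its VNIC; a sketch, assuming the display name bootstrap used above and the default RHCOS user core:
# Gather the bootstrap public IP and SSH into the node
BOOTSTRAP_INSTANCE_ID=$(oci compute instance list -c "$COMPARTMENT_ID_OPENSHIFT" \
  | jq -r '.data[] | select((.["display-name"]=="bootstrap") and (.["lifecycle-state"]=="RUNNING")).id')
BOOTSTRAP_IP=$(oci compute instance list-vnics --instance-id "$BOOTSTRAP_INSTANCE_ID" \
  | jq -r '.data[0]["public-ip"]')
ssh "core@${BOOTSTRAP_IP}"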
Gather the Backend Set names and the bootstrap instance ID to attach the bootstrap to the load balancer:
BES_API_NAME=$(oci nlb backend-set list --network-load-balancer-id $NLB_ID | jq -r '.data.items[] | select(.name | endswith("api")).name')
BES_MCS_NAME=$(oci nlb backend-set list --network-load-balancer-id $NLB_ID | jq -r '.data.items[] | select(.name | endswith("mcs")).name')
INSTANCE_ID_BOOTSTRAP=$(oci compute instance list -c $COMPARTMENT_ID_OPENSHIFT | jq -r '.data[] | select((.["display-name"]=="bootstrap") and (.["lifecycle-state"]=="RUNNING")).id')
test -z $INSTANCE_ID_BOOTSTRAP && echo "ERR: Bootstrap Instance ID not found=[$INSTANCE_ID_BOOTSTRAP]. Try again."
!!! tip "Helper - OCI CLI documentation"
- oci nlb network-load-balancer list
- oci nlb backend-set list
- oci compute instance list
# oci nlb backend-set update --generate-param-json-input backends
cat <<EOF > ./nlb-bset-backends-api.json
[
{
"isBackup": false,
"isDrain": false,
"isOffline": false,
"name": "${INSTANCE_ID_BOOTSTRAP}:6443",
"port": 6443,
"targetId": "${INSTANCE_ID_BOOTSTRAP}"
}
]
EOF
# Update API Backend Set
oci nlb backend-set update --force \
--backend-set-name $BES_API_NAME \
--network-load-balancer-id $NLB_ID \
--backends file://nlb-bset-backends-api.json \
--wait-for-state SUCCEEDED
!!! tip "Helper - OCI CLI documentation"
- oci nlb backend-set update
cat <<EOF > ./nlb-bset-backends-mcs.json
[
{
"isBackup": false,
"isDrain": false,
"isOffline": false,
"name": "${INSTANCE_ID_BOOTSTRAP}:22623",
"port": 22623,
"targetId": "${INSTANCE_ID_BOOTSTRAP}"
}
]
EOF
oci nlb backend-set update --force \
--backend-set-name $BES_MCS_NAME \
--network-load-balancer-id $NLB_ID \
--backends file://nlb-bset-backends-mcs.json \
--wait-for-state SUCCEEDED
Three control plane instances will be created. The instances are created using an
Instance Pool, which automatically applies the same configuration and attaches
the instances to the required Backend Sets: API and MCS.
INSTANCE_CONFIG_CONTROLPLANE="${CLUSTER_NAME}-controlplane"
# To generate all the options:
# oci compute-management instance-configuration create --generate-param-json-input instance-details
cat <<EOF > ./instance-config-details-controlplanes.json
{
"instanceType": "compute",
"launchDetails": {
"agentConfig": {"areAllPluginsDisabled": true},
"compartmentId": "$COMPARTMENT_ID_OPENSHIFT",
"createVnicDetails": {
"assignPrivateDnsRecord": true,
"assignPublicIp": false,
"nsgIds": ["$NSG_ID_CPL"],
"subnetId": "$SUBNET_ID_PRIVATE"
},
"definedTags": {
"$CLUSTER_NAME": {
"role": "master"
}
},
"displayName": "${CLUSTER_NAME}-controlplane",
"launchMode": "PARAVIRTUALIZED",
"metadata": {"user_data": "$(base64 -w0 < $INSTALL_DIR/master.ign)"},
"shape": "$INSTANCE_SHAPE",
"shapeConfig": {"memoryInGBs":16.0,"ocpus":8.0},
"sourceDetails": {"bootVolumeSizeInGBs":120,"bootVolumeVpusPerGB":60,"imageId":"${IMAGE_ID}","sourceType":"image"}
}
}
EOF
oci compute-management instance-configuration create \
--display-name "$INSTANCE_CONFIG_CONTROLPLANE" \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--instance-details file://instance-config-details-controlplanes.json
INSTANCE_POOL_CONTROLPLANE="${CLUSTER_NAME}-controlplane"
INSTANCE_CONFIG_ID_CPL=$(oci compute-management instance-configuration list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
| jq -r ".data[] | select(.[\"display-name\"] | startswith(\"$INSTANCE_CONFIG_CONTROLPLANE\")).id")
#
# oci compute-management instance-pool create --generate-param-json-input load-balancers
cat <<EOF > ./instance-pool-loadbalancers-cpl.json
[
{
"backendSetName": "$BES_API_NAME",
"loadBalancerId": "$NLB_ID",
"port": 6443,
"vnicSelection": "PrimaryVnic"
},
{
"backendSetName": "$BES_MCS_NAME",
"loadBalancerId": "$NLB_ID",
"port": 22623,
"vnicSelection": "PrimaryVnic"
}
]
EOF
# oci compute-management instance-pool create --generate-param-json-input placement-configurations
cat <<EOF > ./instance-pool-placement.json
[
{
"availabilityDomain": "$AVAILABILITY_DOMAIN",
"faultDomains": ["FAULT-DOMAIN-1","FAULT-DOMAIN-2","FAULT-DOMAIN-3"],
"primarySubnetId": "$SUBNET_ID_PRIVATE",
}
]
EOF
oci compute-management instance-pool create \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--instance-configuration-id "$INSTANCE_CONFIG_ID_CPL" \
--size 0 \
--display-name "$INSTANCE_POOL_CONTROLPLANE" \
--placement-configurations "file://instance-pool-placement.json" \
--load-balancers file://instance-pool-loadbalancers-cpl.json
!!! tip "Helper - OCI CLI documentation"
- oci compute-management instance-pool create
Scale up the Instance Pool to create the control plane nodes (alternatively, --size can be set when creating the Instance Pool):
INSTANCE_POOL_ID_CPL=$(oci compute-management instance-pool list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
| jq -r ".data[] | select(
(.[\"display-name\"]==\"$INSTANCE_POOL_CONTROLPLANE\") and
(.[\"lifecycle-state\"]==\"RUNNING\")
).id")
oci compute-management instance-pool update --instance-pool-id $INSTANCE_POOL_ID_CPL --size 3
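The pool takes a few minutes to provision the three instances; their provisioning state can be followed with list-instances (an optional check):
# List the instances managed by the control plane pool
oci compute-management instance-pool list-instances \
  --compartment-id "$COMPARTMENT_ID_OPENSHIFT" \
  --instance-pool-id "$INSTANCE_POOL_ID_CPL" \
  | jq -r '.data[]["display-name"]'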
!!! tip "Helper - OCI CLI documentation"
- oci compute-management instance-pool update
Repeat the process for the compute nodes. Create the Instance Configuration for the compute pool:
INSTANCE_CONFIG_COMPUTE="${CLUSTER_NAME}-compute"
# oci compute-management instance-configuration create --generate-param-json-input instance-details
cat <<EOF > ./instance-config-details-compute.json
{
"instanceType": "compute",
"launchDetails": {
"agentConfig": {"areAllPluginsDisabled": true},
"compartmentId": "$COMPARTMENT_ID_OPENSHIFT",
"createVnicDetails": {
"assignPrivateDnsRecord": true,
"assignPublicIp": false,
"nsgIds": ["$NSG_ID_CMP"],
"subnetId": "$SUBNET_ID_PRIVATE"
},
"definedTags": {
"$CLUSTER_NAME": {
"role": "worker"
}
},
"displayName": "${CLUSTER_NAME}-worker",
"launchMode": "PARAVIRTUALIZED",
"metadata": {"user_data": "$(base64 -w0 < $INSTALL_DIR/worker.ign)"},
"shape": "$INSTANCE_SHAPE",
"shapeConfig": {"memoryInGBs":16.0,"ocpus":8.0},
"sourceDetails": {"bootVolumeSizeInGBs":120,"bootVolumeVpusPerGB":20,"imageId":"${IMAGE_ID}","sourceType":"image"}
}
}
EOF
oci compute-management instance-configuration create \
--display-name "$INSTANCE_CONFIG_COMPUTE" \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--instance-details file://instance-config-details-compute.json
INSTANCE_POOL_COMPUTE="${CLUSTER_NAME}-compute"
INSTANCE_CONFIG_ID_CMP=$(oci compute-management instance-configuration list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
| jq -r ".data[] | select(.[\"display-name\"] | startswith(\"$INSTANCE_CONFIG_COMPUTE\")).id")
BES_HTTP_NAME=$(oci nlb backend-set list --network-load-balancer-id $NLB_ID \
| jq -r '.data.items[] | select(.name | endswith("http")).name')
BES_HTTPS_NAME=$(oci nlb backend-set list --network-load-balancer-id $NLB_ID \
| jq -r '.data.items[] | select(.name | endswith("https")).name')
#
# oci compute-management instance-pool create --generate-param-json-input load-balancers
cat <<EOF > ./instance-pool-loadbalancers-cmp.json
[
{
"backendSetName": "$BES_HTTP_NAME",
"loadBalancerId": "$NLB_ID",
"port": 80,
"vnicSelection": "PrimaryVnic"
},
{
"backendSetName": "$BES_HTTPS_NAME",
"loadBalancerId": "$NLB_ID",
"port": 443,
"vnicSelection": "PrimaryVnic"
}
]
EOF
oci compute-management instance-pool create \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--instance-configuration-id "$INSTANCE_CONFIG_ID_CMP" \
--size 0 \
--display-name "$INSTANCE_POOL_COMPUTE" \
--placement-configurations "[{\"availabilityDomain\":\"$AVAILABILITY_DOMAIN\",\"faultDomains\":[\"FAULT-DOMAIN-1\",\"FAULT-DOMAIN-2\",\"FAULT-DOMAIN-3\"],\"primarySubnetId\":\"$SUBNET_ID_PRIVATE\"}]" \
--load-balancers file://instance-pool-loadbalancers-cmp.json
INSTANCE_POOL_ID_CMP=$(oci compute-management instance-pool list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
| jq -r ".data[] | select(
(.[\"display-name\"]==\"$INSTANCE_POOL_COMPUTE\") and
(.[\"lifecycle-state\"]==\"RUNNING\")
).id")
oci compute-management instance-pool update --instance-pool-id $INSTANCE_POOL_ID_CMP --size 2
Export the kubeconfig:
export KUBECONFIG=$INSTALL_DIR/auth/kubeconfig
Check the CCM logs to confirm the nodes are being initialized:
oc logs -f daemonset.apps/oci-cloud-controller-manager -n oci-cloud-controller-manager
Example output:
I0816 04:22:12.019529 1 node_controller.go:484] Successfully initialized node inst-rdlw6-demo-oci-003-controlplane.priv.ocp.oraclevcn.com with cloud provider
Check the nodes and the CCM resources:
oc get nodes
oc get all -n oci-cloud-controller-manager
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Check the pending certificates using oc get csr -w, then approve them by running:
oc adm certificate approve $(oc get csr -o json | jq -r '.items[] | select(.status.certificate == null).metadata.name')
Observe the nodes joining the cluster by running: oc get nodes -w.
Once the control plane nodes are up and running, check whether it is safe to
remove the bootstrap resources by running the following command:
openshift-install --dir $INSTALL_DIR wait-for bootstrap-complete
Example output:
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s
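Once bootstrap-complete is reported, the bootstrap backends and instance can be removed so only the control plane serves the API and MCS. A sketch of the removal, assuming the oci nlb backend delete sub-command and the backend names created earlier ("<instance-ocid>:port"):
# Remove the bootstrap backends from the API and MCS Backend Sets
oci nlb backend delete --force \
  --network-load-balancer-id "$NLB_ID" \
  --backend-set-name "$BES_API_NAME" \
  --backend-name "${INSTANCE_ID_BOOTSTRAP}:6443"
oci nlb backend delete --force \
  --network-load-balancer-id "$NLB_ID" \
  --backend-set-name "$BES_MCS_NAME" \
  --backend-name "${INSTANCE_ID_BOOTSTRAP}:22623"
# Terminate the bootstrap instance (also covered in the Destroy section)
oci compute instance terminate --force --instance-id "$INSTANCE_ID_BOOTSTRAP"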
It is also possible to wait for the installation to complete by using the openshift-install binary:
openshift-install --dir $INSTALL_DIR wait-for install-complete
Example output:
$ openshift-install --dir $INSTALL_DIR wait-for install-complete
INFO Waiting up to 40m0s (until 6:17PM -03) for the cluster at https://api.oci-ext00.mydomain.com:6443 to initialize...
INFO Checking to see if there is a route at openshift-console/console...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/me/oci/oci-ext00/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.oci-ext00.mydomain
INFO Login to the console with user: "kubeadmin", and password: "[super secret]"
INFO Time elapsed: 2s
Alternatively, you can watch the cluster operators to follow the installation process:
watch -n5 oc get clusteroperators
The cluster will be ready to use once the operators are stabilized.
If you have issues, you can start exploring the Troubleshooting Installations page.
This section provides a single script to clean up the resources created by this user guide.
Run the following commands to delete the resources, taking their dependencies into account:
# Compute
## Clean up instances
oci compute instance terminate --force \
--instance-id $INSTANCE_ID_BOOTSTRAP
oci compute-management instance-pool terminate --force \
--instance-pool-id $INSTANCE_POOL_ID_CMP \
--wait-for-state TERMINATED
oci compute-management instance-configuration delete --force \
--instance-configuration-id $INSTANCE_CONFIG_ID_CMP
oci compute-management instance-pool terminate --force \
--instance-pool-id $INSTANCE_POOL_ID_CPL \
--wait-for-state TERMINATED
oci compute-management instance-configuration delete --force \
--instance-configuration-id $INSTANCE_CONFIG_ID_CPL
## Custom image
oci compute image delete --force --image-id ${IMAGE_ID}
# IAM
## Remove policy
oci iam policy delete --force \
--policy-id $(oci iam policy list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--name $POLICY_NAME | jq -r .data[0].id) \
--wait-for-state DELETED
## Remove dynamic group
oci iam dynamic-group delete --force \
--dynamic-group-id $(oci iam dynamic-group list \
--name $DYNAMIC_GROUP_NAME | jq -r .data[0].id) \
--wait-for-state DELETED
## Remove tag namespace and key
oci iam tag-namespace retire --tag-namespace-id $TAG_NAMESPACE_ID
oci iam tag-namespace cascade-delete \
--tag-namespace-id $TAG_NAMESPACE_ID \
--wait-for-state SUCCEEDED
## Bucket
for RES_ID in $(oci os preauth-request list --bucket-name "$BUCKET_NAME" | jq -r .data[].id); do
echo "Deleting Preauth request $RES_ID"
oci os preauth-request delete --force \
--bucket-name "$BUCKET_NAME" \
--par-id "${RES_ID}";
done
oci os object delete --force \
--bucket-name "$BUCKET_NAME" \
--object-name "images/${IMAGE_NAME}"
oci os object delete --force \
--bucket-name "$BUCKET_NAME" \
--object-name "bootstrap-${CLUSTER_NAME}.ign"
oci os bucket delete --force \
--bucket-name "$BUCKET_NAME"
# Load Balancer
oci nlb network-load-balancer delete --force \
--network-load-balancer-id $NLB_ID \
--wait-for-state SUCCEEDED
# Network and dependencies
for RES_ID in $(oci network subnet list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--vcn-id $VCN_ID | jq -r .data[].id); do
echo "Deleting Subnet $RES_ID"
oci network subnet delete --force \
--subnet-id $RES_ID \
--wait-for-state TERMINATED;
done
for RES_ID in $(oci network nsg list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--vcn-id $VCN_ID | jq -r .data[].id); do
echo "Deleting NSG $RES_ID"
oci network nsg delete --force \
--nsg-id $RES_ID \
--wait-for-state TERMINATED;
done
for RES_ID in $(oci network security-list list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--vcn-id $VCN_ID \
| jq -r '.data[] | select(.["display-name"] | startswith("Default") | not).id'); do
echo "Deleting SecList $RES_ID"
oci network security-list delete --force \
--security-list-id $RES_ID \
--wait-for-state TERMINATED;
done
oci network route-table delete --force \
--wait-for-state TERMINATED \
--rt-id $(oci network route-table list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--vcn-id $VCN_ID \
| jq -r '.data[] | select(.["display-name"] | endswith("rtb-public")).id')
oci network route-table delete --force \
--wait-for-state TERMINATED \
--rt-id $(oci network route-table list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--vcn-id $VCN_ID \
| jq -r '.data[] | select(.["display-name"] | endswith("rtb-private")).id')
for RES_ID in $(oci network nat-gateway list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--vcn-id $VCN_ID | jq -r .data[].id); do
echo "Deleting NATGW $RES_ID"
oci network nat-gateway delete --force \
--nat-gateway-id $RES_ID \
--wait-for-state TERMINATED;
done
for RES_ID in $(oci network internet-gateway list \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--vcn-id $VCN_ID | jq -r .data[].id); do
echo "Deleting IGW $RES_ID"
oci network internet-gateway delete --force \
--ig-id $RES_ID \
--wait-for-state TERMINATED;
done
oci network vcn delete --force \
--vcn-id $VCN_ID \
--wait-for-state TERMINATED
# Compartment
oci iam compartment delete --force \
--compartment-id $COMPARTMENT_ID_OPENSHIFT \
--wait-for-state SUCCEEDED
This guide walked through an OpenShift deployment on Oracle Cloud Infrastructure, a non-integrated provider, using the Platform External feature introduced in OpenShift 4.14. The feature allows an initial integration with OCI without needing to change the OpenShift code base.
It also opens the possibility of quickly deploying cloud provider components natively, like CSI drivers, which mostly require the CCM to be set up first.