TODO: title of blog
Use case of installing a cluster with external platform type in Oracle Cloud Infrastructure
This use case provides details of how to deploy an OpenShift cluster using the external platform type in Oracle Cloud Infrastructure (OCI), deploying the provider's Cloud Controller Manager (CCM).
The guide is derived from the "Installing a cluster on any platform" documentation, adapted to the external platform type.
The steps provide low-level details to customize Oracle's components, such as the CCM.
This guide is organized into three sections:

- Section 1. Create Infrastructure resources
- Section 2. Preparing the installation
- Section 3. Create the cluster

If you are exploring how to customize the OpenShift platform external type with CCM, without performing the whole cluster creation in OCI, feel free to jump to Section 2. Sections 1 and 3 are mostly OCI-specific, and they are valuable for readers exploring the manual OCI deployment in detail.
!!! tip "Automation options"
The goal of this document is to provide details of the platform external type,
without focusing on the infrastructure automation. The tool used to
provision the resources described in this guide is the Oracle Cloud CLI.
!!! danger "Unsupported Document"
This guide is created only for Red Hat partners or providers aiming to extend
external components in OpenShift, and should not be used as an official or
supported OpenShift installation method.
Table of Contents

- Prerequisites
- Section 1. Create Infrastructure resources
  - Identity
  - Network
  - Load Balancer
  - DNS
- Section 2. Preparing the installation
  - Create the installer configuration
  - Create manifests
  - Create ignition files
- Section 3. Create the cluster
  - Cluster nodes
  - Upload the RHCOS image
  - Bootstrap
  - Control Plane
  - Compute/workers
  - Review the installation
- Destroy the cluster
- Summary
- Next steps
Prerequisites
Clients
OpenShift clients
Download the OpenShift CLI and installer:
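A minimal sketch, assuming a Linux x86_64 workstation and the public OpenShift mirror (pick the exact 4.14+ version you need):

```bash
# Download and install the OpenShift CLI (oc) and installer
OCP_VERSION=4.14.0
MIRROR="https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/${OCP_VERSION}"

curl -LO "${MIRROR}/openshift-client-linux.tar.gz"
curl -LO "${MIRROR}/openshift-install-linux.tar.gz"

tar -xzf openshift-client-linux.tar.gz oc kubectl
tar -xzf openshift-install-linux.tar.gz openshift-install
sudo mv oc kubectl openshift-install /usr/local/bin/
```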
!!! tip "Credentials"
The Red Hat Cloud credential (Pull secret)
is required to pull from the repository
quay.io/openshift-release-dev/ocp-release
.!!! warning "Supported OpenShift versions"
The Platform External is available in OpenShift 4.14+.
Move the binaries openshift-install and oc to any directory exported in the $PATH.

OCI Command Line Interface

The OCI CLI is used in this guide to create infrastructure resources in OCI.
Utilities
Download jq: used to filter the results returned by the CLI.
Download yq: used to patch the yaml manifests.

Setup the Provider Account
A user with administrator access was used to create the OpenShift cluster described in this use case.
The cluster was created in a dedicated compartment in Oracle Cloud Infrastructure, which allows the creation of custom policies for components like the Cloud Controller Manager.
The following steps describe how to create a compartment at any nested level, and how to create the predefined tags used to apply policies to the compartment. Export the parent compartment OCID to ${PARENT_COMPARTMENT_ID}:
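A sketch of this setup with the OCI CLI; the compartment, tag namespace, and tag names are illustrative, and ${PARENT_COMPARTMENT_ID} is assumed to hold the parent (or tenancy) OCID:

```bash
# Create a dedicated compartment for the cluster under the parent compartment
CLUSTER_NAME="demo"
COMPARTMENT_NAME_OPENSHIFT="openshift-${CLUSTER_NAME}"

COMPARTMENT_ID_OPENSHIFT=$(oci iam compartment create \
  --compartment-id "${PARENT_COMPARTMENT_ID}" \
  --name "${COMPARTMENT_NAME_OPENSHIFT}" \
  --description "OpenShift cluster ${CLUSTER_NAME}" \
  --query data.id --raw-output)

# Create a tag namespace and a defined tag, used later by the Dynamic Group
# rule to select only the control plane nodes
TAG_NAMESPACE_ID=$(oci iam tag-namespace create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --name "openshift-${CLUSTER_NAME}" \
  --description "OpenShift ${CLUSTER_NAME} tags" \
  --query data.id --raw-output)

oci iam tag create \
  --tag-namespace-id "${TAG_NAMESPACE_ID}" \
  --name "role" \
  --description "Cluster node role"
```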
Section 1. Create Infrastructure resources
Identity
There are two methods to provide authentication for the Cloud Controller Manager to access the Cloud API: user principals (API key credentials provided in the CCM configuration) and instance principals (permissions granted to the instances themselves).
The steps described in this document use Instance Principals.
Instance principals require extra steps to grant the instances permission to access the APIs. The steps below describe how to create the namespace tags used in the Dynamic Group rule, filtering only the control plane nodes to take the actions defined in the compartment's policy.
Steps:

- Create the Dynamic Group demo-${CLUSTER_NAME}-controlplane with the following rule:
- Create the policy allowing the Dynamic Group $DYNAMIC_GROUP_NAME to access resources in the cluster compartment ($COMPARTMENT_NAME_OPENSHIFT):

!!! tip "Helper"
OCI CLI documentation for oci iam policy create
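A sketch under the assumptions above: the matching rule selects instances in the cluster compartment carrying the hypothetical openshift-${CLUSTER_NAME}.role=controlplane defined tag, and the policy statement is intentionally broad for brevity; narrow it for real deployments:

```bash
DYNAMIC_GROUP_NAME="demo-${CLUSTER_NAME}-controlplane"

# Dynamic Group matching only tagged control plane instances
oci iam dynamic-group create \
  --name "${DYNAMIC_GROUP_NAME}" \
  --description "Control plane nodes of ${CLUSTER_NAME}" \
  --matching-rule "All {instance.compartment.id = '${COMPARTMENT_ID_OPENSHIFT}', tag.openshift-${CLUSTER_NAME}.role.value = 'controlplane'}"

# Policy allowing the Dynamic Group to act in the cluster compartment
oci iam policy create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --name "${DYNAMIC_GROUP_NAME}-policy" \
  --description "CCM permissions for ${CLUSTER_NAME} control plane" \
  --statements "[\"Allow dynamic-group ${DYNAMIC_GROUP_NAME} to manage all-resources in compartment ${COMPARTMENT_NAME_OPENSHIFT}\"]"
```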
Network
The OCI VCN (Virtual Cloud Network) must be created using the Networking requirements for user-provisioned infrastructure.
!!! tip "Info"
The resource name provided in this guide is not a standard but follows
a similar naming convention created by the installer in the supported cloud
providers. The names will also be used in future sections to discover resources.
Create the VCN and dependencies with the following configuration:

- VCN: ${CLUSTER_NAME}-vcn
- Public subnet: ${CLUSTER_NAME}-net-public
- Private subnet: ${CLUSTER_NAME}-net-private
- Internet Gateway: ${CLUSTER_NAME}-igw
- NAT Gateway: ${CLUSTER_NAME}-natgw
- Public route table: ${CLUSTER_NAME}-rtb-public, routing 0/0 to igw
- Private route table: ${CLUSTER_NAME}-rtb-private, routing 0/0 to natgw
- Network Security Group for the load balancer: ${CLUSTER_NAME}-nsg-nlb
- Network Security Group for the control plane: ${CLUSTER_NAME}-nsg-controlplane
- Network Security Group for the compute nodes: ${CLUSTER_NAME}-nsg-compute
Steps:
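A sketch of the network creation with the OCI CLI; the CIDR blocks are illustrative, and the NSG security rules are omitted for brevity:

```bash
# Create the VCN
VCN_ID=$(oci network vcn create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-vcn" \
  --cidr-block "10.0.0.0/16" \
  --query data.id --raw-output)

# Internet and NAT gateways
IGW_ID=$(oci network internet-gateway create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" --vcn-id "${VCN_ID}" \
  --display-name "${CLUSTER_NAME}-igw" --is-enabled true \
  --query data.id --raw-output)
NATGW_ID=$(oci network nat-gateway create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" --vcn-id "${VCN_ID}" \
  --display-name "${CLUSTER_NAME}-natgw" \
  --query data.id --raw-output)

# Route tables: 0.0.0.0/0 via the IGW (public) and the NAT gateway (private)
RTB_PUB_ID=$(oci network route-table create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" --vcn-id "${VCN_ID}" \
  --display-name "${CLUSTER_NAME}-rtb-public" \
  --route-rules "[{\"destination\":\"0.0.0.0/0\",\"networkEntityId\":\"${IGW_ID}\"}]" \
  --query data.id --raw-output)
RTB_PRIV_ID=$(oci network route-table create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" --vcn-id "${VCN_ID}" \
  --display-name "${CLUSTER_NAME}-rtb-private" \
  --route-rules "[{\"destination\":\"0.0.0.0/0\",\"networkEntityId\":\"${NATGW_ID}\"}]" \
  --query data.id --raw-output)

# Regional subnets (public for bootstrap/NLB, private for cluster nodes)
SUBNET_ID_PUBLIC=$(oci network subnet create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" --vcn-id "${VCN_ID}" \
  --display-name "${CLUSTER_NAME}-net-public" \
  --cidr-block "10.0.0.0/20" --route-table-id "${RTB_PUB_ID}" \
  --query data.id --raw-output)
SUBNET_ID_PRIVATE=$(oci network subnet create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" --vcn-id "${VCN_ID}" \
  --display-name "${CLUSTER_NAME}-net-private" \
  --cidr-block "10.0.16.0/20" --prohibit-public-ip-on-vnic true \
  --route-table-id "${RTB_PRIV_ID}" \
  --query data.id --raw-output)

# The three NSGs (nlb, controlplane, compute) are created the same way:
oci network nsg create --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --vcn-id "${VCN_ID}" --display-name "${CLUSTER_NAME}-nsg-controlplane"
```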
Load Balancer
Steps to create the OCI Network Load Balancer (NLB) for the cluster.
A single NLB is created with listeners for the Kubernetes API Server, the Machine Config Server (MCS), and Ingress for HTTP and HTTPS. The MCS is the only listener restricted to internal access.
The following resources will be created in the NLB (the standard OpenShift ports are 6443 for the API, 22623 for the MCS, and 80/443 for Ingress):

Backend Sets (health check path/interval/retries):
- ${CLUSTER_NAME}-api: /readyz/10/3
- ${CLUSTER_NAME}-mcs: /healthz/10/3
- ${CLUSTER_NAME}-http
- ${CLUSTER_NAME}-https

Listeners (forwarding to the backend set of the same name):
- ${CLUSTER_NAME}-api
- ${CLUSTER_NAME}-mcs
- ${CLUSTER_NAME}-http
- ${CLUSTER_NAME}-https
Steps:
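A sketch for the API backend set and listener; the health-checker fields follow the OCI NLB HealthChecker model, and the remaining listeners repeat the same pattern:

```bash
# Create the Network Load Balancer in the public subnet
NLB_ID=$(oci nlb network-load-balancer create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-nlb" \
  --subnet-id "${SUBNET_ID_PUBLIC}" \
  --query data.id --raw-output)

# Backend set and listener for the Kubernetes API server (port 6443)
oci nlb backend-set create \
  --network-load-balancer-id "${NLB_ID}" \
  --name "${CLUSTER_NAME}-api" \
  --policy FIVE_TUPLE \
  --health-checker '{"protocol": "HTTPS", "port": 6443, "urlPath": "/readyz", "intervalInMillis": 10000, "retries": 3}'

oci nlb listener create \
  --network-load-balancer-id "${NLB_ID}" \
  --name "${CLUSTER_NAME}-api" \
  --default-backend-set-name "${CLUSTER_NAME}-api" \
  --port 6443 --protocol TCP

# Repeat the pair above for the MCS (22623, /healthz), HTTP (80), and HTTPS (443)
```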
DNS
Steps to create the resource records pointing to the API address (public and private), and
to the default router.
The following DNS records will be created (the record prefixes follow the standard user-provisioned infrastructure requirements):

- api.${CLUSTER_NAME}.${BASE_DOMAIN}: public API, pointing to the NLB public IP address
- api-int.${CLUSTER_NAME}.${BASE_DOMAIN}: internal API, pointing to the NLB private IP address
- *.apps.${CLUSTER_NAME}.${BASE_DOMAIN}: default router (Ingress), pointing to the NLB
!!! tip "Helper"
It's not required to have a publicly accessible API and DNS domain, alternatively, you can use a bastion host to access the private API endpoint.
Steps:
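A sketch for the public API record, assuming ${BASE_DOMAIN} is hosted as an OCI DNS zone and NLB_PUBLIC_IP holds the load balancer's public address:

```bash
# Upsert the A record for the public API
oci dns record rrset update \
  --zone-name-or-id "${BASE_DOMAIN}" \
  --domain "api.${CLUSTER_NAME}.${BASE_DOMAIN}" \
  --rtype A \
  --items "[{\"domain\": \"api.${CLUSTER_NAME}.${BASE_DOMAIN}\", \"rdata\": \"${NLB_PUBLIC_IP}\", \"rtype\": \"A\", \"ttl\": 300}]"

# Repeat for api-int (NLB private IP) and the wildcard *.apps record
```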
Section 2. Preparing the installation
This section describes how to set up OpenShift to customize the manifests
used in the installation.
Create the installer configuration
Modify and export the variables used to build the install-config.yaml and the later steps:
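For example (all values are illustrative):

```bash
export CLUSTER_NAME="demo"
export BASE_DOMAIN="example.com"
export INSTALL_DIR="${HOME}/openshift/${CLUSTER_NAME}"
export PULL_SECRET_FILE="${HOME}/pull-secret.json"
export SSH_PUB_KEY_FILE="${HOME}/.ssh/id_rsa.pub"
mkdir -p "${INSTALL_DIR}"
```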
Create install-config.yaml

Create the install-config.yaml, setting the platform type to external:
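A minimal sketch of an install-config.yaml for the external platform; the external stanza follows the 4.14 schema, but verify the field names against the installer documentation for your version:

```bash
cat <<EOF > "${INSTALL_DIR}/install-config.yaml"
apiVersion: v1
baseDomain: ${BASE_DOMAIN}
metadata:
  name: ${CLUSTER_NAME}
compute:
- name: worker
  replicas: 0   # workers are created manually in Section 3
controlPlane:
  name: master
  replicas: 3
platform:
  external:
    platformName: oci
    cloudControllerManager: External
pullSecret: '$(cat "${PULL_SECRET_FILE}")'
sshKey: |
  $(cat "${SSH_PUB_KEY_FILE}")
EOF
```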
Create manifests
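Generate the manifests from the configuration (the installer consumes and removes install-config.yaml, so keep a backup if needed):

```bash
cp "${INSTALL_DIR}/install-config.yaml" "${INSTALL_DIR}/install-config.bkp.yaml"
openshift-install create manifests --dir "${INSTALL_DIR}"
```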
Create manifests for OCI Cloud Controller Manager
The steps in this section describe how to customize the OpenShift installation by providing the Cloud Controller Manager manifests to be added in the bootstrap process.
!!! warning "Info"
This guide is based on the OCI CCM v1.26.0. You must read the
project documentation
for more information.
Steps:
!!! danger "Important"
Red Hat does not recommend creating resources in namespaces prefixed with
kube-*
and
openshift-*
.and save it in the directory
${INSTALL_DIR}/manifests
In the CCM manifests, update the ServiceAccount references, and add env vars for the kube API URL used in OpenShift:
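A sketch of such a patch with yq, assuming (hypothetically) that the CCM DaemonSet was saved as its own file and that the env var names mirror the in-cluster API variables; adjust both to the actual CCM manifests:

```bash
# Add the API server env vars to the CCM container, pointing at the
# internal API endpoint created earlier (single-document file assumed)
yq -i '
  .spec.template.spec.containers[0].env += [
    {"name": "KUBERNETES_SERVICE_HOST", "value": "api-int." + env(CLUSTER_NAME) + "." + env(BASE_DOMAIN)},
    {"name": "KUBERNETES_SERVICE_PORT", "value": "6443"}
  ]
' "${INSTALL_DIR}/manifests/oci-cloud-controller-manager.yaml"
```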
The following CCM manifest files must be created in the installation manifests/ directory:

Create custom manifests for Kubelet
The Kubelet parameter providerID is the unique identifier of the instance in OCI. It must be set, through a custom MachineConfig, before the node is initialized by the CCM.
The Provider ID must be set dynamically for each node. The steps below describe how to create a MachineConfig object defining a systemd unit that generates a kubelet configuration, discovering the Provider ID by querying the OCI Instance Metadata Service (IMDS).
Steps:

Create the MachineConfig objects for both node roles, as sketched below. After this step, the MachineConfig files must exist in the manifests directory.
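A sketch of these objects, assuming the OCI IMDS v2 endpoint and the systemd drop-in pattern used on other platforms; verify the provider ID format expected by your CCM version:

```bash
# Hypothetical helper script: fetch the instance OCID from the OCI IMDS (v2)
# and expose it to the kubelet through a systemd drop-in. Some CCM versions
# expect an oci:// prefix on the provider ID; check the CCM documentation.
cat <<'EOF' > ./kubelet-providerid.sh
#!/bin/bash
set -e -o pipefail
NODECONF=/etc/systemd/system/kubelet.service.d/20-providerid.conf
[ -e "${NODECONF}" ] && exit 0
PROVIDER_ID=$(curl -sH "Authorization: Bearer Oracle" \
  http://169.254.169.254/opc/v2/instance/id)
mkdir -p "$(dirname "${NODECONF}")"
cat > "${NODECONF}" <<CONF
[Service]
Environment="KUBELET_PROVIDERID=${PROVIDER_ID}"
CONF
systemctl daemon-reload
EOF
B64_SCRIPT=$(base64 -w0 ./kubelet-providerid.sh)

# One MachineConfig per role, embedding the script and a oneshot unit that
# runs it before the kubelet starts
for role in master worker; do
cat <<EOF > "${INSTALL_DIR}/manifests/99-${role}-kubelet-providerid.yaml"
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${role}
  name: 99-${role}-kubelet-providerid
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /usr/local/bin/kubelet-providerid
        mode: 493
        contents:
          source: data:text/plain;charset=utf-8;base64,${B64_SCRIPT}
    systemd:
      units:
      - name: kubelet-providerid.service
        enabled: true
        contents: |
          [Unit]
          Description=Set kubelet providerID from OCI IMDS
          Before=kubelet.service
          [Service]
          Type=oneshot
          ExecStart=/usr/local/bin/kubelet-providerid
          [Install]
          WantedBy=multi-user.target
EOF
done
```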
Create ignition files
Once the manifests are placed, you can create the cluster ignition configurations:
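For example:

```bash
openshift-install create ignition-configs --dir "${INSTALL_DIR}"
```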
The ignition files must be generated in the install directory (files with the extension *.ign):

Section 3. Create the cluster
The first part of this section describes how to create the cluster nodes and their dependencies. Once the instances are provisioned, the bootstrap node will initialize the control plane; when the control plane nodes join the cluster, are initialized by the CCM, and the control plane workloads are scheduled, the bootstrap process will be completed.
The second part describes how to approve the CSRs for the worker nodes, and how to review the cluster installation.
Cluster nodes
Every node role uses a different ignition file. The following table shows which ignition file is required for each node role:

| Node role | Ignition file |
| --- | --- |
| Bootstrap | ${PWD}/user-data-bootstrap.json |
| Control Plane | ${INSTALL_DIR}/master.json |
| Compute/workers | ${INSTALL_DIR}/worker.json |
Run the following commands to populate the values for the environment variables required to create instances:

- IMAGE_ID: custom RHCOS image previously uploaded.
- SUBNET_ID_PUBLIC: public regional subnet, used by the bootstrap node.
- SUBNET_ID_PRIVATE: private regional subnet, used to create the control plane and compute nodes.
- NSG_ID_CPL: Network Security Group ID used by the control plane nodes.

!!! warning "Check if required variables have values before proceeding"

```bash
cat <<EOF >/dev/stdout
COMPARTMENT_ID_OPENSHIFT=$COMPARTMENT_ID_OPENSHIFT
SUBNET_ID_PUBLIC=$SUBNET_ID_PUBLIC
SUBNET_ID_PRIVATE=$SUBNET_ID_PRIVATE
NSG_ID_CPL=$NSG_ID_CPL
EOF
```
!!! tip "Helper - OCI CLI documentation"
-
oci compute image list
-
oci network subnet list
-
oci network nsg list
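A sketch of the discovery, relying on the display names used earlier in this guide (the image display name ${CLUSTER_NAME}-rhcos is the one assumed in the import step below):

```bash
IMAGE_ID=$(oci compute image list \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-rhcos" \
  --query 'data[0].id' --raw-output)

SUBNET_ID_PUBLIC=$(oci network subnet list \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-net-public" \
  --query 'data[0].id' --raw-output)

SUBNET_ID_PRIVATE=$(oci network subnet list \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-net-private" \
  --query 'data[0].id' --raw-output)

NSG_ID_CPL=$(oci network nsg list \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-nsg-controlplane" \
  --query 'data[0].id' --raw-output)
```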
Upload the RHCOS image
The image used in this guide is QCOW2. The openshift-install command provides the option coreos print-stream-json to show all the available artifacts. The steps below describe how to download the image, upload it to an OCI bucket, and then create a custom image.
Download the QCOW2 image:

!!! tip "Helper - OCI CLI documentation"
- oci os bucket create

!!! tip "Helper - OCI CLI documentation"
- oci os object put

!!! tip "Helper"
OCI CLI documentation for oci compute image import
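A combined sketch of the image steps, assuming the x86_64 QCOW2 artifact from the stream metadata and an illustrative bucket name:

```bash
# Discover and download the RHCOS QCOW2 artifact
IMAGE_URL=$(openshift-install coreos print-stream-json \
  | jq -r '.architectures.x86_64.artifacts.openstack.formats["qcow2.gz"].disk.location')
curl -LO "${IMAGE_URL}"
IMAGE_FILE=$(basename "${IMAGE_URL}")
gunzip "${IMAGE_FILE}"
IMAGE_FILE=${IMAGE_FILE%.gz}

# Create the bucket and upload the image
BUCKET_NAME="${CLUSTER_NAME}-infra"
oci os bucket create --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" --name "${BUCKET_NAME}"
oci os object put --bucket-name "${BUCKET_NAME}" --name "${IMAGE_FILE}" --file "${IMAGE_FILE}"

# Import the object as a custom image
OS_NAMESPACE=$(oci os ns get --query data --raw-output)
oci compute image import from-object \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --bucket-name "${BUCKET_NAME}" \
  --namespace "${OS_NAMESPACE}" \
  --name "${IMAGE_FILE}" \
  --display-name "${CLUSTER_NAME}-rhcos" \
  --launch-mode PARAVIRTUALIZED \
  --source-image-type QCOW2
```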
Bootstrap
The bootstrap node is responsible for creating the temporary control plane and serving the ignition files to the other nodes through the MCS.
The OCI user data has a size limitation that prevents using the bootstrap ignition file directly when launching the node. A new ignition file will be created, replacing the inline configuration with a remote URL that fetches the full config from a temporary Bucket Object URL.
Once the bootstrap instance is created, it must be attached to the load balancer in the Backend Sets of the Kubernetes API Server and the Machine Config Server.
Steps:

Upload the bootstrap.ign to the infrastructure bucket:

!!! tip "Helper"
OCI CLI documentation for oci os object put
!!! warning "Attention"
The bucket object URL will expire in one hour if you are planning to create
the bootstrap later, please adjust the
$EXPIRES_TIME
.!!! tip "Helper"
OCI CLI documentation for
oci os preauth-request create
The generated URL for the ignition file bootstrap.ign must be available in the $IGN_BOOTSTRAP_URL variable; a sketch of the whole flow follows.
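The sketch below covers the upload, the pre-authenticated request (PAR), and the minimal user data pointing to it; the date invocation is GNU coreutils, and OCI_REGION is assumed to hold your region identifier (for example, us-ashburn-1):

```bash
# Upload bootstrap.ign and create a PAR to read it
oci os object put --bucket-name "${BUCKET_NAME}" \
  --name bootstrap.ign --file "${INSTALL_DIR}/bootstrap.ign"

EXPIRES_TIME=$(date -u -d '+1 hour' '+%Y-%m-%dT%H:%M:%SZ')
PAR_PATH=$(oci os preauth-request create \
  --bucket-name "${BUCKET_NAME}" \
  --name bootstrap-ign-par \
  --object-name bootstrap.ign \
  --access-type ObjectRead \
  --time-expires "${EXPIRES_TIME}" \
  --query 'data."access-uri"' --raw-output)

IGN_BOOTSTRAP_URL="https://objectstorage.${OCI_REGION}.oraclecloud.com${PAR_PATH}"

# Minimal user data that replaces itself with the remote bootstrap config
cat <<EOF > ./user-data-bootstrap.json
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "replace": {
        "source": "${IGN_BOOTSTRAP_URL}"
      }
    }
  }
}
EOF
```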
.!!! tip "Helper - OCI CLI documentation"
-
oci compute instance launch
-
oci compute shape list
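A sketch of the bootstrap launch; the availability domain selection, shape, and sizing are illustrative:

```bash
AVAILABILITY_DOMAIN=$(oci iam availability-domain list \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --query 'data[0].name' --raw-output)

oci compute instance launch \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --availability-domain "${AVAILABILITY_DOMAIN}" \
  --display-name "${CLUSTER_NAME}-bootstrap" \
  --image-id "${IMAGE_ID}" \
  --shape "VM.Standard.E4.Flex" \
  --shape-config '{"ocpus": 4, "memoryInGBs": 16}' \
  --subnet-id "${SUBNET_ID_PUBLIC}" \
  --assign-public-ip true \
  --user-data-file ./user-data-bootstrap.json
```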
!!! tip "Follow the bootstrap process"
You can SSH to the node and follow the bootstrap process:
journalctl -b -f -u release-image.service -u bootkube.service
!!! tip "Helper - OCI CLI documentation"
-
oci nlb network-load-balancer list
-
oci nlb backend-set list
-
oci compute instance list
!!! tip "Helper - OCI CLI documentation"
-
oci nlb backend-set update
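A sketch of the attachment; note that backend-set update replaces the whole backend list, so include every backend on each call:

```bash
# Discover the bootstrap private IP and the NLB OCID
BOOTSTRAP_ID=$(oci compute instance list \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-bootstrap" \
  --query 'data[0].id' --raw-output)
BOOTSTRAP_IP=$(oci compute instance list-vnics \
  --instance-id "${BOOTSTRAP_ID}" \
  --query 'data[0]."private-ip"' --raw-output)
NLB_ID=$(oci nlb network-load-balancer list \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --display-name "${CLUSTER_NAME}-nlb" \
  --query 'data.items[0].id' --raw-output)

# Add the bootstrap to the API and MCS backend sets
oci nlb backend-set update \
  --network-load-balancer-id "${NLB_ID}" \
  --backend-set-name "${CLUSTER_NAME}-api" \
  --backends "[{\"ipAddress\": \"${BOOTSTRAP_IP}\", \"port\": 6443}]"
oci nlb backend-set update \
  --network-load-balancer-id "${NLB_ID}" \
  --backend-set-name "${CLUSTER_NAME}-mcs" \
  --backends "[{\"ipAddress\": \"${BOOTSTRAP_IP}\", \"port\": 22623}]"
```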
Control Plane
Three control plane instances will be created. The instances are created using an Instance Pool, which automatically applies the same configuration to each instance and attaches them to the required listeners: API and MCS.
!!! tip "Helper - OCI CLI documentation"
-
oci compute-management instance-pool create
--size
can be adjusted when creating the Instance Pool):!!! tip "Helper - OCI CLI documentation"
- oci compute-management instance-pool update
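A sketch of the pool creation; the launch details JSON follows the OCI instance-configuration model, and the shape, sizing, and file names are illustrative:

```bash
# Launch details for the control plane nodes; user_data is the base64-encoded
# control plane pointer ignition generated by openshift-install (referred to
# as master.json in the table above)
cat <<EOF > ./launch-details-cpl.json
{
  "compartmentId": "${COMPARTMENT_ID_OPENSHIFT}",
  "shape": "VM.Standard.E4.Flex",
  "shapeConfig": {"ocpus": 4, "memoryInGBs": 16},
  "sourceDetails": {"sourceType": "image", "imageId": "${IMAGE_ID}"},
  "createVnicDetails": {"subnetId": "${SUBNET_ID_PRIVATE}", "nsgIds": ["${NSG_ID_CPL}"], "assignPublicIp": false},
  "metadata": {"user_data": "$(base64 -w0 "${INSTALL_DIR}/master.ign")"}
}
EOF

INSTANCE_CONFIG_ID=$(oci compute-management instance-configuration create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --instance-details "{\"instanceType\": \"compute\", \"launchDetails\": $(cat ./launch-details-cpl.json)}" \
  --query data.id --raw-output)

POOL_ID_CPL=$(oci compute-management instance-pool create \
  --compartment-id "${COMPARTMENT_ID_OPENSHIFT}" \
  --instance-configuration-id "${INSTANCE_CONFIG_ID}" \
  --display-name "${CLUSTER_NAME}-controlplane" \
  --size 3 \
  --placement-configurations "[{\"availabilityDomain\": \"${AVAILABILITY_DOMAIN}\", \"primarySubnetId\": \"${SUBNET_ID_PRIVATE}\"}]" \
  --query data.id --raw-output)
```

The pool instances must then be added to the API and MCS backend sets, in the same way as the bootstrap node.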
Compute/workers
Review the installation
Export the kubeconfig:
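For example:

```bash
export KUBECONFIG="${INSTALL_DIR}/auth/kubeconfig"
```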
OCI Cloud Controller Manager
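A quick check that the CCM pods are running; the namespace name comes from the CCM manifests and may differ in your deployment:

```bash
oc get pods -n oci-cloud-controller-manager -o wide
```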
Example output:
Approve certificates for compute nodes
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Check the pending certificates using oc get csr -w, then approve them by running:
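A common approval pattern using oc and jq (run it again when the server CSRs appear):

```bash
oc get csr -o json \
  | jq -r '.items[] | select(.status.certificate == null) | .metadata.name' \
  | xargs --no-run-if-empty oc adm certificate approve
```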
Observe the nodes joining the cluster by running: oc get nodes -w.

Wait for Bootstrap to complete
You can remove the bootstrap instance once the control plane nodes are up and running correctly. You can check by running the following command:
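For example:

```bash
openshift-install wait-for bootstrap-complete --dir "${INSTALL_DIR}" --log-level info
```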
Example output:
Check installation complete
It is also possible to wait for the installation to complete by using the openshift-install binary:
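A typical invocation:

```bash
openshift-install wait-for install-complete --dir "${INSTALL_DIR}"
```

Example output: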
Alternatively, you can watch the cluster operators to follow the installation process:
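For example:

```bash
watch -n10 oc get clusteroperators
```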
The cluster will be ready to use once the operators are stabilized.
If you have issues, you can start exploring the Troubleshooting Installations page.
Destroy the cluster
This section provides a single script to clean up the resources created by this user guide.
Run the following commands to delete the resources, considering the dependencies:
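A sketch of the teardown order, assuming the OCID variables captured in the earlier steps; dependents must be deleted before the VCN:

```bash
# Compute: terminate the instance pools and the bootstrap instance
oci compute-management instance-pool terminate --instance-pool-id "${POOL_ID_CPL}" --force
oci compute instance terminate --instance-id "${BOOTSTRAP_ID}" --force

# Load balancer
oci nlb network-load-balancer delete --network-load-balancer-id "${NLB_ID}" --force

# Network: subnets, route tables, gateways, then the VCN
oci network subnet delete --subnet-id "${SUBNET_ID_PRIVATE}" --force
oci network subnet delete --subnet-id "${SUBNET_ID_PUBLIC}" --force
oci network route-table delete --rt-id "${RTB_PRIV_ID}" --force
oci network route-table delete --rt-id "${RTB_PUB_ID}" --force
oci network nat-gateway delete --nat-gateway-id "${NATGW_ID}" --force
oci network internet-gateway delete --ig-id "${IGW_ID}" --force
oci network vcn delete --vcn-id "${VCN_ID}" --force

# Storage: empty and remove the infrastructure bucket
oci os object bulk-delete --bucket-name "${BUCKET_NAME}" --force
oci os bucket delete --bucket-name "${BUCKET_NAME}" --force
```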
Summary
This guide walked through an OpenShift deployment on Oracle Cloud Infrastructure, a non-integrated provider, using the Platform External feature introduced in 4.14. The feature allows an initial integration with OCI without needing to change the OpenShift code base.
It also opens the possibility of quickly deploying cloud provider components natively, like CSI drivers, which mostly require extra setup alongside the CCM.
Next steps