John Call

@johnsimcall

Joined on Oct 25, 2021

  • TODO - explain why I'm writing this. In short, I'm writing this to provide additional context to the official documentation. Planning Doc - 8.2 Multi network plug-in (Multus) support. Managing Doc - 7 Creating Multus networks. TODO - Add a visual diagram that shows overlapping and non-overlapping CIDRs/routes. :::info Ian mentioned today that it may be possible to simplify the host configuration (NNCP) by setting the netmask to /17 instead of /24. This would eliminate the need for a static route when the smaller Node network is part of the larger Pod network. :::
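The /17 idea from the info box can be sketched as a NodeNetworkConfigurationPolicy. This is only an illustration with an assumed hostname, NIC name, and address: an interface address inside the larger /17 pod network gives the node a connected route that already covers the pod network, so no extra static route is needed.

```yaml
# Sketch only -- hostname, interface name, and address are assumptions
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-network
spec:
  nodeSelector:
    kubernetes.io/hostname: node01.example.com
  desiredState:
    interfaces:
      - name: ens3f0
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.255.11
              prefix-length: 17  # /17 instead of /24 also covers the pod network
```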
  • Goal Use OpenShift Data Foundations (Ceph) to present two highly-available StorageClasses: one StorageClass for high-speed SSD-based PersistentVolumes and another StorageClass for slow HDD-based PersistentVolumes. Please note: I want two distinct StorageClasses instead of trying to create one StorageClass that does automatic tiering/caching. Assumptions This document only talks about deploying OpenShift Data Foundation (ODF) and the Local Storage Operator (LSO). I've provided some images that illustrate my hardware and their networks, but creating a 3-node "compact" OpenShift cluster on baremetal and configuring the network interfaces is out of scope. Creating Pods and VirtualMachines to consume the storage is also better described elsewhere.
  • Red Hat's OpenShift Container Platform is a powerful way to run legacy virtual machine workloads and modern container-based / Kubernetes-based workloads. The first step to unleashing the power of OpenShift is to allow developers and system administrators to log in. OpenShift allows users to log in through several mechanisms, including OAuth-based single sign-on, integrations with GitHub and GitLab, locally-administered usernames & passwords, and via Active Directory. The list of identity providers (IdPs) OpenShift supports can be found in the official documentation. Upfront acknowledgement: Many of the examples I provide below are based on the excellent information I reviewed from others who came before me. My favorite examples are found at the https://examples.openshift.pub website. I augmented those examples with additional information I borrowed from RFC2255. OpenShift uses RFC2255-formatted URIs when authenticating users with Active Directory.
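As a hedged illustration of the RFC2255 piece: an LDAP identity provider is configured on the cluster-wide OAuth resource with a URL of the form `scheme://host:port/basedn?attribute?scope?filter`. The server name, bind DN, secret/configmap names, and attributes below are assumptions for a typical Active Directory, not values from any real environment:

```yaml
# Illustrative only -- hostnames, DNs, and secret names are assumptions
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: active-directory
      type: LDAP
      mappingMethod: claim
      ldap:
        # RFC2255-style URL: scheme://host:port/basedn?attribute?scope?filter
        url: "ldaps://ad.example.com:636/DC=example,DC=com?sAMAccountName?sub?(objectClass=person)"
        bindDN: "CN=svc-openshift,OU=Service Accounts,DC=example,DC=com"
        bindPassword:
          name: ldap-bind-password   # Secret in openshift-config
        ca:
          name: ldap-ca-bundle       # ConfigMap in openshift-config
        attributes:
          id: ["sAMAccountName"]
          preferredUsername: ["sAMAccountName"]
          name: ["displayName"]
          email: ["mail"]
```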
  • A brief note here about setting up the physical and virtual networking so that OpenShift can connect VMs directly to their appropriate network segment / VLAN. This is documented in the official docs as well... If you don't set up VM networks, the VMs are required to use the default "Pod network", which makes SSH and RDP connections awkward. :::info The blue network (image below) is set up by default with every deployment of OpenShift. My notes below are related to setting up the green network(s). :::
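For the green VM networks, each VLAN typically gets a Linux bridge on the nodes (created via an NNCP) plus a NetworkAttachmentDefinition that VMs reference. A minimal sketch, assuming a bridge named br-vlan201 already exists on the nodes and a made-up namespace:

```yaml
# Sketch -- namespace, names, and the br-vlan201 bridge are assumptions
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan201
  namespace: vm-guests
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan201",
      "type": "cnv-bridge",
      "bridge": "br-vlan201",
      "macspoofchk": true
    }
```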
  • Overview Installing OpenShift, by default, requires a direct connection from your servers to the OpenShift content hosted at the Red Hat-managed quay.io website. Installing OpenShift with "network restrictions" is also possible. Examples of network restrictions could be firewall policies, security policies, and/or using a fully disconnected or air-gapped network. Creating a fully disconnected mirror is one way to solve the network restrictions. But using a proxy is my preferred way to deal with network restrictions. Using a proxy, instead of creating a mirror, makes it very easy to update the OpenShift platform and add additional features as soon as they're needed. However, when OpenShift is completely disconnected, the ability to add new features and update OpenShift itself depends on those features and updates being available in the mirror. :::info Creating a disconnected mirror is much easier using the new mirror-registry service and oc-mirror command. I highly recommend using those tools instead of the former oc adm release mirror ... procedure. ::: In my experience, many people avoid using a proxy-based installation because they don't already have a proxy set up, or need to guarantee that the proxy doesn't allow access to undesirable websites. The rest of this post will illustrate the few simple commands required to set up and secure a proxy service on Linux using Squid. We'll secure the proxy by configuring it to only allow connections to certain websites, like quay.io and redhat.com.
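The allowlist itself is only a few lines of squid.conf. A minimal sketch; the exact domain list should be verified against the OpenShift firewall prerequisites (the .openshift.com entry is my assumption):

```
# /etc/squid/squid.conf -- minimal allowlist sketch
http_port 3128
acl openshift_sites dstdomain .quay.io .redhat.com .openshift.com
http_access allow openshift_sites
http_access deny all
```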
  • I set up my Lenovo ThinkPad X1 Carbon Gen 9 laptop to dual-boot between Windows and Linux. I recently had my laptop's motherboard / system board replaced, which wiped out the ability to boot into Linux. I had to use efibootmgr to create a new boot entry because the laptop's UEFI/BIOS screens don't allow me to do it (many server BIOS/UEFI screens allow creating boot entries, though). Procedure: Boot the laptop into a rescue environment. Query the variables. Create a new boot entry. Boot into a rescue environment: The easiest way to do this is to use a Live ISO. I used a PXE server in my case. Either way, I append inst.rescue and inst.sshd for convenience.
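The efibootmgr steps look roughly like this. This is an illustrative transcript, not something to paste blindly: the disk, EFI System Partition number, and loader path are assumptions that must match your own system.

```shell
# 1) List the current boot entries, boot order, and variables
efibootmgr -v

# 2) Create a new entry pointing at the distribution's shim/loader
#    (--disk and --part identify the EFI System Partition)
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
  --label "Fedora" --loader '\EFI\fedora\shimx64.efi'
```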
  • I created a simple bash script to help me shut down my hyperconverged 3-node OpenShift Container Platform (OCP) cluster running Virtual Machines with OpenShift Data Foundation (ODF). I need this script because following the OCP shutdown instructions results in a hung / stalled system. The hang occurs because of a race condition between shutting down ODF and the applications using ODF. My simplified script orders the shutdown of VMs & Pods before finally shutting down ODF & OCP. :::info Update: Big thanks to Jason Kincl, who provided the JSON queries to discover ALL workloads using ODF storage. His approach is more efficient than the individual shutdown sections I used previously, which didn't account for additional workloads using ODF. ::: The basic workflow is:
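The ordering can be sketched in a few lines of bash. This is a simplified, untested outline rather than the actual script: it assumes oc and virtctl sessions with cluster-admin, and that VirtualMachine names match their VMI names.

```shell
# 1) Stop every running VirtualMachine first, so ODF volumes detach cleanly
oc get vmi --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' |
while read -r ns vm; do
  virtctl stop --namespace "$ns" "$vm"
done

# 2) Scale down any remaining Pods that mount ODF-backed PVCs
#    (this is where Jason's JSON queries come in)

# 3) Finally shut down the nodes, which stops ODF and OCP last
for node in $(oc get nodes -o name); do
  oc debug "$node" -- chroot /host shutdown -h 1
done
```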
  • Or, in other words, how to send arbitrary keystrokes to a Virtual Machine running on OpenShift Virtualization.

    oc login --username john https://api.my-openshift.example.com:6443
    oc project john
    Now using project "john" on server "api.my-openshift.example.com:6443".

    oc get virtualmachines,pods
    NAME                                        STATUS
    virtualmachine.kubevirt.io/rhel9-with-gpu   Stopped
  • :::info Please note, this procedure is also available via the GUI ::: Connecting an NFS server, such as a common Linux server, to OpenShift can be a convenient way to provide storage for multi-pod (ReadWriteMany/RWX) Persistent Volumes. Fancy NFS servers, like NetApp or Dell/EMC Isilon, have their own Container Storage Interface (CSI) drivers and shouldn't use the csi-driver-nfs described in this document. Using the appropriate CSI driver for your storage provides additional features like efficient snapshots and clones. The goal of this document is to describe how to install the csi-driver-nfs provisioner onto OpenShift. The procedure can be performed using the Web Console, but I'll document the Helm/command-line method here because it's easier to copy and paste. The process of configuring a Linux server to export NFS shares is described in the Appendix below.
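The Helm method is only a couple of commands. A sketch, assuming the upstream chart repo and the provisioner name (nfs.csi.k8s.io) are current, with made-up server and share values:

```shell
# Install the csi-driver-nfs provisioner from the upstream chart
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system

# Create a StorageClass pointing at your NFS server (values are examples)
oc apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com
  share: /exports/openshift
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
EOF
```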
  • storage vlan

    ---
    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: control-plane101-v201
    spec:
      nodeSelector:
        kubernetes.io/hostname: control-plane101.ocp-dev-a.ocp.ct.edu  ### Change this
      desiredState:
  • What is a pull secret? A pull secret is machine-readable text (JSON-formatted) that combines your username, a password/token, and the server/registry that accepts those credentials. For example: user=john.call@redhat.com, password=top-secret, server=download.redhat.com I wrote a bit more about this in my disconnected lab document. Where can I find my pull secret? Your pull secret can be downloaded from the OpenShift Resources section of Red Hat's Hybrid Cloud Console. This link will ask you to login, and take you directly to your pull secret. https://console.redhat.com/openshift/install/pull-secret
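Because the pull secret is plain JSON, it is easy to inspect or extend with jq. A small sketch that merges an extra registry credential into a pull secret; the registry name and credentials are made up, and the first file is a stand-in for the one you download from console.redhat.com:

```shell
# Stand-in for the pull secret downloaded from console.redhat.com
# (your real file will contain entries for quay.io, registry.redhat.io, etc.)
printf '{"auths":{"quay.io":{"auth":"EXAMPLE"}}}' > pull-secret.json

# Add a credential for a private registry. The "auth" value is just
# base64("username:password"); registry name and credentials are made up.
EXTRA_AUTH=$(printf 'john:top-secret' | base64)
printf '{"auths":{"registry.example.com":{"auth":"%s"}}}' "$EXTRA_AUTH" > extra-registry.json

# Merge the two "auths" maps into a combined pull secret
jq -s '{auths: (.[0].auths + .[1].auths)}' \
  pull-secret.json extra-registry.json > merged-pull-secret.json
```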
  • :::success Click here to jump straight to the commands ::: OpenShift is secure by default. The installation process creates two self-signed certificates that are used to: 1) encrypt and secure the connection between users and the applications running on OpenShift 2) encrypt and secure the API endpoint used for command-line interactions (e.g. oc apply -f myapp.yaml and kubectl apply -f myapp.yaml). Applications hosted on OpenShift, including the Web Console, can use the Router / Ingress Controller's wildcard certificate for security and encryption. The wildcard certificate allows your applications to be hosted under the *.apps.example.com domain. But users will most likely be uncomfortable clicking through the browser prompts asking them to trust the default self-signed certificate. The self-signed API and wildcard certificates can be easily replaced with certificates your users will automatically trust. I've abbreviated the official OpenShift documentation into the commands below to help you quickly understand how this is done.
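Abbreviated to its essence, the replacement is two secrets and two patches. A sketch with illustrative file, secret, and host names; the full procedure in the docs also covers CA bundles and proxy trust:

```shell
# 1) Wildcard certificate for *.apps.<cluster-domain>
oc create secret tls custom-ingress-cert \
  --cert=wildcard-fullchain.pem --key=wildcard-key.pem \
  -n openshift-ingress
oc patch ingresscontroller.operator default -n openshift-ingress-operator \
  --type=merge -p '{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'

# 2) API server certificate for api.<cluster-domain>
oc create secret tls custom-api-cert \
  --cert=api-fullchain.pem --key=api-key.pem \
  -n openshift-config
oc patch apiserver cluster --type=merge -p \
  '{"spec":{"servingCerts":{"namedCertificates":[{"names":["api.example.com"],"servingCertificate":{"name":"custom-api-cert"}}]}}}'
```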
  • More info at https://hackmd.io/@johnsimcall/H1DCTW230

    OpenShift Data Foundations via dedicated 10Gb
    Each Ceph service (mon, osd, etc...) gets an IP address from here

    # https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/network-requirements_rhodf#multus-examples_rhodf
    # Node network: 192.168.255.0/24 (exclude from NetworkAttachmentDefinition's whereabouts)
    # Pod network: 192.168.128.0/17 (see NetworkAttachmentDefinition)
    # JCALL - the block below doesn't declare any static routes. I assume this is when the node network is within the pod/public/multus network
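A NetworkAttachmentDefinition that hands out pod-network addresses while excluding the node network can be sketched like this. The interface name, NAD name, and macvlan choice are assumptions; the CIDRs are the ones from the comments above:

```yaml
# Sketch -- "master" interface and names are assumptions
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public
  namespace: openshift-storage
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens3f0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.128.0/17",
        "exclude": ["192.168.255.0/24"]
      }
    }
```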
  • Several of my customers have recently asked about installing OpenShift on a single node that has multiple SSD / NVMe storage drives. These are my notes about combining the drives using Linux's software RAID (mdadm) and then dynamically allocating slices of the big RAID array via Logical Volumes (LVM). mdadm is the Multi-Device Administration (CLI) tool. LVM is the Logical Volume Manager toolset. :::info I describe using mdadm because neither the LVMStorage Operator nor the topolvm project use LVM's native RAID capabilities. ::: Step 1 - Create a RAID array
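The combination boils down to a handful of commands. A sketch with assumed device names and sizes; note these commands destroy any existing data on the drives:

```shell
# 1) Combine two drives into a RAID-1 (mirror) array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

# 2) Layer LVM on top of the array
pvcreate /dev/md0
vgcreate vg_data /dev/md0

# 3) Dynamically allocate a slice as a logical volume and format it
lvcreate --name lv_example --size 100G vg_data
mkfs.xfs /dev/vg_data/lv_example
```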
  • I've heard several customers and home-lab enthusiasts ask if they can install OpenShift on a single node with a single NVMe / SSD drive. Slower HDDs shouldn't be used because etcd needs faster storage. In order to restrict the CoreOS root "/" partition from growing to consume the entire drive, just create a new partition during installation with a MachineConfig YAML manifest. This must be done during installation, before CoreOS expands its rootfs and consumes all of the unused space. LVMStorage can use this empty partition after the installation completes to dynamically allocate storage to VMs and Pods/containers. :::info Maybe HPP (hostpath-provisioner) is what I should have documented... But that method also encourages a separate partition - https://github.com/kubevirt/hostpath-provisioner WARNING If you select a directory that shares space with your Operating System, you can potentially exhaust the space on that partition and your node will become non-functional. It is recommended you create a separate partition and point the hostpath provisioner there so it will not interfere with your Operating System. :::
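The partitioning itself is an Ignition storage stanza inside a MachineConfig. A sketch only: the device path, the amount reserved for CoreOS, and the partition label are assumptions to adapt to your disk:

```yaml
# Sketch -- device path, startMiB, and label are assumptions
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 98-master-extra-partition
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      disks:
        - device: /dev/nvme0n1
          partitions:
            - label: data
              startMiB: 250000  # leave the first ~244GiB for the CoreOS rootfs
              sizeMiB: 0        # 0 = grow to the end of the disk
```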
  • Considerations:
    Service (Internal) -vs- Route (External)
    https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
    Noobaa's MCG (Multi Cloud Gateway) -vs- Ceph's RGW (RADOS Gateway)
    http://ocs-storagecluster-cephobjectstore-openshift-storage.apps.example.com
    :::danger Don't use this method because the nodes/kubelets can't resolve the .svc address 😔 :::
  • Acknowledgements I got some good ideas from these authors: https://nvmexpress.org/resource/nvme-namespaces/ https://www.drewthorst.com/posts/nvme/namespaces/readme/ (Drew, please renew your TLS certificate! :) https://narasimhan-v.github.io/2020/06/12/Managing-NVMe-Namespaces.html https://www.linkedin.com/pulse/linux-nvme-cli-cheat-sheet-frank-ober/
  • This document describes the prerequisite steps for connecting OpenShift to an Infinidat Fibre Channel storage array (Infinibox???) Documentation - Configuring RHEL 8 & RHEL 9 for Infinidat https://support.infinidat.com/hc/en-us/articles/11985136447005-Setting-up-hosts-for-FC-on-RHEL-8-or-above-and-alternatives Documentation - Installing the Infinidat CSI driver https://support.infinidat.com/hc/en-us/articles/10106070174749-InfiniBox-CSI-Driver-for-Kubernetes-User-Guide Example MachineConfig https://access.redhat.com/solutions/7002456
  • Making an offline installation bundle for OpenShift requires mirroring/downloading the container images and then hosting those container images in a container registry that is accessible by the cluster nodes. The download process can put the container images into the local filesystem (a USB stick, a directory that will be burnt to a DVD, or a folder that will be uploaded into S3 or similar storage) or upload them directly into the container registry. :::info A minimal download of OpenShift 4.12 requires ~15GB of space. A minimal download of OpenShift Platform Plus requires ~50GB of space. If you're using mounted storage, consider setting --quayRoot to a subdirectory of the mountpoint. Uninstalling the mirror registry will fail otherwise. ::: Lots of good information in this blog - Mirroring OpenShift Registries: The Easy Way by Ben Schmaus and Daniel Messer (August 23, 2022).
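With oc-mirror, the download is driven by an ImageSetConfiguration. A minimal sketch; the channel, versions, and paths are illustrative, not recommendations:

```yaml
# Sketch -- channel, versions, and storage path are assumptions
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./metadata
mirror:
  platform:
    channels:
      - name: stable-4.12
        minVersion: 4.12.0
        maxVersion: 4.12.0
```

Something like `oc mirror --config=imageset.yaml file://my-bundle` would then write the bundle to the local filesystem for transfer.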
  • :::spoiler Last Update: OCP 4.11 in Dec 2022 This guide is written and intended for use from Linux systems, but could easily be adapted for use by Windows or MacOS environments. ::: Go here to request an environment https://techzone.ibm.com/collection/on-premises-redhat-openshift-on-power-and-ibm-z-offerings Do this to setup your VPN nmcli connection add type vpn connection.id ibm-ENV-ID vpn-type openconnect vpn.data gateway=asa003b.centers.ihost.com ipv4.dns-search ihost.com