# **OpenShift CCN Roadshow OPS Track Workshop**
---
## Provisioning Guide
### Generic RHPDS Account
Because of the way that quotas are implemented in RHPDS, you must request a generic account in advance of provisioning. Please send an email to rhpds-help@redhat.com:

* Subject: Generic Provisioner Account Request
* Body: I will be conducting a workshop using the OCP and Container Storage for Admins multi-provisioning catalog item. Please provide me with a generic provisioner account.

https://www.opentlc.com/keys/OCP_and_Container_Storage_for_Admins_Provisioning.pdf
### RHPDS
1. Log into https://rhpds.redhat.com using the generic account that you were given
2. The catalog item is Workshops -> **OCP and Container Storage for Admins**: select it and click Order
3. Fill out the dialog fields
   a. Make sure to enter the SFDC opportunity, campaign ID, or partner registration information
   b. Choose a name for the cluster that makes sense. If you are doing a workshop for Acme Corp, perhaps use “acme”, or if they are in Paris use “paris”. NOTE: 8 characters max - you might want to use an airport code like CDG
4. Enter the desired number of users (dedicated instances) to deploy. This is the number of participants you expect to have, plus a few spares. For example, if you expect to have 15 people, pad a little and use 17.
5. Check all the other fields.
6. Submit the form.
### Lab Guide
#### To customer:
http://ocp-ocs-admins-labguides.6923.rh-us-east-1.openshiftapps.com/workshop/environment
#### Source in Github
https://github.com/openshift/openshift-cns-testdrive/tree/ocp4-prod/workshop/content
#### RHPDS email
Your Red Hat Product Demo System OCP4 and Container Storage for Admins GuidGrabber deployment has completed. All of your environments should now be available for your workshop.
The following activation key has been automatically generated:
**02b40**
You must share this key with your users. You can change the activation key and other settings in the GuidGrabber Manager application.
The GuidGrabber Manager application is at https://www.opentlc.com/gg/manager.cgi Log in using your OPENTLC user wang-redhat.com.
Your generated lab code is 1467. This cannot be changed in the manager.
The GuidGrabber client is available to your users at https://www.opentlc.com/gg/gg.cgi?profile=wang-redhat.com
The pool of available environments in GuidGrabber will fill up automatically as the environment builds are completed. Check the "Manage" link in the GuidGrabber Manager application to see your pool status.
If you wish to order more services in CloudForms, select the root service, then click Multi Service -> Order More. Please follow the instructions that will be emailed to you when clicking this button to update new GUIDs when they become available.
#### Share to attendees
https://www.opentlc.com/gg/gg.cgi?profile=wang-redhat.com
#### Open GuidGrabber
Welcome to: OCP4 and Container Storage for Admins
Your assigned lab GUID is
hongkong-a265
Let's get started! Please read these instructions carefully before starting to have the best lab experience:
Save the above GUID as you will need it to access your lab's systems from your workstation.
Consult the lab instructions before attempting to connect to the lab environment.
Lab instructions:
https://dashboard-lab-ocp-cns.apps.cluster-hongkong-a265.hongkong-a265.sandbox118.opentlc.com
The following URLs and information will be used in your lab environment. Please only access these links when the lab instructions specify to do so:
**Username: kubeadmin
Password: yFsUg-tCber-7I3ee-ca3Sw**
Note: The lab instructions may specify other host names and/or URLs.
WARNING: You should only click FORGET SESSION if requested to do so by a lab attendant.
#### Error
Your service provision request for RHPDS-DEM-wang-redhat.com-1467-hongkong-60c7 failed when provisioning.
The problem is happening in step **checkSoftwareDeploy**
The deployment failed while deploying the lab software. This could be due to a bug in the lab playbook or it taking too long to run. If you know the demo/lab owner you may want to let them know it is failing.
The administrators have been alerted of this problem and will look into it as soon as possible. In the meantime you can attempt to order this service again. If this problem persists please contact OPENTLC/RHPDS Admins at rhpds-help@redhat.com
---
## General Remarks
* You can get the web console link with the `oc` command below; the login is kubeadmin
`oc get route -n openshift-console`
* All the YAML files are in the /opt/app-root/src/support/ folder
---
## Environment Overview
### Lab Overview
* Explore the OpenShift environment
* Use "oc" command
### Key Takeaways
* Use "ServiceAccounts" for non-human user accounts
* Use "oc get *resource* -o yaml" to output content in YAML format (see the sketch below)
* "cluster-admin" is similar to the root account of OpenShift
### Lab Dependency
* NA
---
## Installation and Verification
### Lab Overview
* Explore OpenShift cluster components
* Check ~/cluster-$GUID/.openshift_install.log
### Key Takeaways
* "Installer-provisioned infrastructure" (IPI) is the easiest OpenShift 4 installation method (a quick verification sketch follows this list)
* The OpenShift Data Store (etcd) stores the persistent master state
* The "scheduler" is responsible for determining placement of new pods
* Liveness probes tell the system whether the pod is healthy
* Readiness probes tell the system when the pod is ready to take traffic
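A hedged way to verify the components called out above; the namespace names follow the usual OCP 4.x layout.

```bash
# Cluster operators and nodes that the IPI installer stood up.
oc get clusteroperators
oc get nodes

# etcd (the OpenShift data store) and the scheduler run as pods on the masters.
oc get pods -n openshift-etcd
oc get pods -n openshift-kube-scheduler

# The install log referenced in the lab overview.
tail ~/cluster-$GUID/.openshift_install.log
```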
### Lab Dependency
* NA
---
## Application Management Basics
### Lab Overview
* Create project
* Deploy application
* Understand "service", "ReplicationController", "DeploymentConfiguration"
* Scale out an application pod
* Understand "route"
* Add liveness and readiness probes to the application
### Key Takeaways
* oc new-project, oc new-app, oc get pod (see the walk-through below)
* A Service acts as an internal proxy/load balancer between Pods
* ReplicationControllers (RC) ensure the desired number of Pods
* A DeploymentConfiguration (DC) defines how something in OpenShift should be deployed
* External clients access applications through a "route"
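A hedged walk-through of these commands; the image and names are examples rather than what the lab guide uses, and it assumes `oc new-app` creates a DeploymentConfig as in this OCP release.

```bash
# Create a project and deploy a sample container image.
oc new-project app-basics
oc new-app docker.io/openshift/hello-openshift --name=hello

# Inspect what new-app created: pods, service, DC (and its RC).
oc get pods
oc get svc,dc,rc

# Expose the service externally via a route, then scale out.
oc expose service hello
oc scale dc/hello --replicas=3

# Add liveness and readiness probes (hello-openshift answers on 8080).
oc set probe dc/hello --liveness --readiness --get-url=http://:8080/
```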
### Lab Dependency
* NA
---
## Application Storage Basics
### Lab Overview
* Add a Persistent Volume Claim (PVC) to the application
* Delete the pod to check whether the Persistent Volume (PV) persists
### Key Takeaways
* A Persistent Volume (PV) persists when the pod dies; it typically comes from an external storage system
* Dynamic provisioning creates PVs that match PVCs automatically (see the sketch below)
* EBS supports dynamic provisioning
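A minimal PVC sketch, assuming the default AWS EBS storage class here is named gp2; the claim, application, and mount path names are examples.

```bash
# Request a 1 GiB RWO volume; dynamic provisioning creates a matching PV.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp2
EOF

# Attach the claim to an application (dc/hello is a placeholder name).
oc set volume dc/hello --add --name=app-storage --type=pvc \
  --claim-name=app-storage --mount-path=/data

# The PVC should show Bound once the PV has been provisioned.
oc get pvc app-storage
```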
### Lab Dependency
* Application Management Basics
---
## MachineSets, Machines, and Nodes
### Lab Overview
* Scale out machines using a MachineSet
### Key Takeaways
* A MachineSet defines a desired state for a set of Machines (scaling sketch below)
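A sketch of scaling a MachineSet; the MachineSet name is a placeholder for whatever `oc get machineset` shows in your cluster.

```bash
# MachineSets live in the openshift-machine-api namespace.
oc get machineset -n openshift-machine-api

# Scale one out; the Machine API provisions a new instance and joins it
# to the cluster as a worker node.
oc scale machineset <machineset-name> -n openshift-machine-api --replicas=2

# Watch the new Machine and, eventually, the new Node appear.
oc get machines -n openshift-machine-api
oc get nodes
```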
### Lab Dependency
* NA
---
## Infrastructure Nodes and Operators
### Lab Overview
* Create a new MachineSet for the infra nodes
* Migrate the OpenShift router, image registry, and monitoring pods from worker nodes to infra nodes
* It takes time to provision new infra nodes
### Key Takeaways
* The roles of infra nodes (logging, router, monitoring, etc.)
* "metadata" describes the MachineSet itself
* "spec.template.spec.metadata.labels" adds labels to the newly created nodes
* Create at least 3 MachineSets across different AZs
* Using a nodeSelector, set node-role.kubernetes.io/infra: "" to migrate a service to the infra nodes (see the router example below)
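One hedged example of the nodeSelector idea above: pointing the default router (IngressController) at nodes labeled node-role.kubernetes.io/infra: "". The registry and monitoring stack are moved the same way through their own operator configs; follow the lab guide for the exact patches.

```bash
# Move the default router onto the infra nodes via its operator config.
oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'

# Verify that the router pods are rescheduled onto the infra nodes.
oc get pods -n openshift-ingress -o wide
```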
### Lab Dependency
* NA
---
## Deploying and Managing OpenShift Container Storage
### Lab Overview
* Create a new MachineSet for OCS
* Wait until the storage nodes are ready before installing the OCS operator (around 5 mins). You can use the web console to monitor the status
* Use Operator to install OCS
* Install Rook-ceph toolbox to check Ceph status
* Deploy application using Ceph RBD as RWO PV
* Deploy application using CephFS as RWX PV
* Migrate cluster monitoring to Ceph RBD
* Use the Multi-Cloud Gateway; create an Object Bucket Claim
* Add storage capacity
* OCS monitoring
### Key Takeaways
* OpenShift Container Storage (OCS) provides persistent volumes (PVs) for OCP workloads
* The raw capacity will be three times the usable capacity because OCS uses a replica count of 3
* For RWX (ReadWriteMany) volumes, OpenShift attaches multiple pods to the same PV (see the CephFS claim sketch below)
* If you attempt to scale up deployments that are using RWO (ReadWriteOnce) storage, the Pods will all become co-located on the same node
* By default, OpenShift monitoring (Prometheus and Alertmanager) uses ephemeral storage. Persisting both is best practice, since data loss on either of them will cause you to lose your collected metrics and alerting data
* Introduction to the Multi-Cloud Gateway: https://www.openshift.com/blog/introducing-multi-cloud-object-gateway-for-openshift
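A sketch of an RWX claim backed by CephFS, assuming the storage class names the OCS operator creates (ocs-storagecluster-cephfs for file, ocs-storagecluster-ceph-rbd for block); the claim name and size are examples.

```bash
# Shared (RWX) volume that several pods can mount at the same time.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ocs-storagecluster-cephfs
EOF

# Block (RWO) volumes would use ocs-storagecluster-ceph-rbd instead.
oc get storageclass
```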
### Lab Dependency
* Infrastructure Nodes and Operators - require infra node
---
## OpenShift Log Aggregation
### Lab Overview
* Make sure the "openshift-logging" namespace exists
* Install the Elasticsearch Operator (select channel 4.2)
* Install the Cluster Logging Operator (select channel 4.2)
* Refresh the browser if you get a 404 error in the "Custom Resource Definition Overview page, select View Instances from the Actions menu" step
* Replace "storageClassName: ocs-storagecluster-ceph-rbd" with "storageClassName: gp2" in openshift_logging_cr.yaml
* This lab focuses on EFK installation, not how to use EFK
### Key Takeaways
* A CustomResourceDefinition (CRD) is a generic, pre-defined structure of data
* The Operator applies the data defined by the CRD
* Similar to a class
* A CustomResource (CR) is an actual implementation of the CRD (see the trimmed example below)
* The Operator uses the CR's values when configuring its service
* Similar to an instantiated object of the class
* General pattern: install the Operator > create CRDs > create the CR
* So the Operator knows how to act, what to install, and/or what to configure
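A trimmed ClusterLogging CR sketch to illustrate the CRD/CR pattern; the field layout follows the 4.x ClusterLogging API, but the node count, size, and the gp2 storage class (per the note above) are examples rather than the full lab file.

```bash
oc apply -f - <<'EOF'
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: gp2
        size: 200G
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
EOF
```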
### Lab Dependency
* Infrastructure Nodes and Operators - require infra node
* Deploying and Managing OpenShift Container Storage - require ocs-storagecluster-ceph-rbd
---
## External (LDAP) Authentication Providers, Users, and Groups
### Lab Overview
* Configure OpenShift to use IDM LDAP for user authentication
* Run "groupsync" to sync groups and users to OpenShift
* Apply a cluster role (cluster-reader) to a group (ose-fancy-dev)
* Able to view all projects
* Used for monitoring
* Apply project roles to groups
* Apply RBAC for app and infra teams based on project
* Includes admin, edit, and view access rights for a project
### Key Takeaways
* Use the "oauth" resource to configure the identity provider integration
* General steps: create a "secret" > create a "ConfigMap" > update the "oauth" object
* Users are not actually created until the first time they try to log in
* Apply roles (policies) to groups, not to users (see the examples below)
* Separate access policies for pipeline, prod, and non-prod environments
* Disable kubeadmin after LDAP integration
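Hedged examples of the group and role steps; the sync config file name is a placeholder, ose-fancy-dev comes from the lab overview above, and the group/project in the last command are placeholders.

```bash
# Sync LDAP groups and users into OpenShift (omit --confirm for a dry run).
oc adm groups sync --sync-config=ldap-group-sync.yaml --confirm

# Grant a cluster role to a group, e.g. read-only access for monitoring.
oc adm policy add-cluster-role-to-group cluster-reader ose-fancy-dev

# Grant a project-scoped role (admin, edit, or view) to a group.
oc adm policy add-role-to-group edit <group> -n <project>
```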
### Lab Dependency
* NA
---
## OpenShift Monitoring with Prometheus
### Lab Overview
* Explore Alerting and Metrics in OpenShift web console
* Run Prometheus Queries
* Explore Grafana dashboard
### Key Takeaways
* Prometheus alert configuration
* https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
* PromQL (the Prometheus Query Language) is used to access metrics (example query below)
* https://prometheus.io/docs/prometheus/latest/querying/basics/
* Grafana is the dashboard UI
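A small sketch for getting at Prometheus; the route name follows the standard openshift-monitoring layout, and the PromQL line is an illustrative query, not one taken from the lab guide.

```bash
# Find the Prometheus UI route in the monitoring namespace.
oc get route prometheus-k8s -n openshift-monitoring

# Example PromQL to paste into the query box: per-node non-idle CPU usage
# rate over the last 5 minutes.
#   sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))
```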
### Lab Dependency
* NA
---
## Project Template, Quota, and Limits
### Lab Overview
* Define a ResourceQuota and LimitRange for the Project Request Template
* Apply the template in the ConfigMap in the openshift-apiserver project
* Refer to the "Cluster Resource Quotas" lab for user-specific quotas
### Key Takeaways
* OpenShift creates projects based on the default Project Request Template
* Templates create reusable sets of OpenShift objects with parameterization
* https://docs.openshift.com/container-platform/3.11/dev_guide/templates.html
* A ResourceQuota sets the limits of a project (sum of all resources)
* A LimitRange sets the limits of individual resources, e.g. a Pod
* Create the Project Request Template in the openshift-config project; apply the template in openshift-apiserver (see the sketch below)
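A hedged sketch of the bootstrap-template flow; the file name is a placeholder, and the lab guide covers how the template is then wired into the API server.

```bash
# Export the default project request template, then add a ResourceQuota and
# a LimitRange (parameterized with ${PROJECT_NAME}) under its "objects:" list.
oc adm create-bootstrap-project-template -o yaml > project-template.yaml

# Load the edited template into the openshift-config project.
oc create -f project-template.yaml -n openshift-config
```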
### Lab Dependency
* NA
---
## OpenShift Networking and NetworkPolicy
### Lab Overview
* Create two projects and test connectivity between them
* Define a NetworkPolicy to deny all access in project b
* Define a NetworkPolicy to allow TCP 5000 to pods with the label run:ose
* Use "oc get pods -o yaml" to get the app label
### Key Takeaways
* Use curl to test connectivity
* To deny all access, set podSelector and ingress to empty in the NetworkPolicy CR (see the sketch below)
* All NetworkPolicy CRs in a project are combined to create the allowed ingress access for the pods in the project
* Use "oc get networkpolicy -n netproj-b" to check the network policies
### Lab Dependency
* NA
---
## Disabling Project Self-Provisioning
### Lab Overview
* Remove the self-provisioning cluster role binding
* Customize the project request message
### Key Takeaways
* "self-provisioners" is a default "clusterrolebinding" that allows authenticated users to create projects (see the sketch below)
* Administrators may not want Projects getting created without their knowledge
* You can bind the self-provisioner role to a specific user or group instead
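A hedged sketch of disabling self-provisioning (the standard 4.x approach; follow the lab guide for the exact steps it uses). The project request message string is an example.

```bash
# Remove the self-provisioner role from all authenticated users and stop
# the cluster from re-adding the binding automatically.
oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
oc patch clusterrolebinding.rbac self-provisioners --type=merge \
  -p '{"metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"false"}}}'

# Customize the message users see when they try to create a project.
oc patch project.config.openshift.io/cluster --type=merge \
  -p '{"spec":{"projectRequestMessage":"Please contact an administrator to request a project."}}'
```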
### Lab Dependency
* LDAP lab
---
## Cluster Resource Quotas
### Lab Overview
* Set quota by user
* Set quota by label
### Key Takeaways
* Two primary use cases for a ClusterResourceQuota instead of a project-based quota (see the sketch below):
* Set quotas on a specific user
* Set a quota by application which spans multiple projects
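A ClusterResourceQuota sketch scoped to a single user via the openshift.io/requester annotation; names and limits are examples. A label selector (spec.selector.labels) works the same way for an application spanning multiple projects.

```bash
oc apply -f - <<'EOF'
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: for-user-alice
spec:
  selector:
    annotations:
      openshift.io/requester: alice
  quota:
    hard:
      pods: "10"
      requests.cpu: "4"
      requests.memory: 8Gi
EOF

# Usage is reported per selected project in the quota status.
oc describe clusterresourcequota for-user-alice
```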
### Lab Dependency
* LDAP lab
---
## Cluster Metering
### Lab Overview
* Deploy Metering Operator from OperatorHub
* Install the Metering stack
* Write reports
* Wait until all pods are ready before moving to the next step; it may take time for the pods to become ready
* Wait until "Finished" is displayed in the RUNNING column of "oc get reports -n openshift-metering" before downloading
### Key Takeaways
* Operator Metering is a chargeback and reporting tool that provides accountability for how resources are used across a cluster
* Cluster admins can schedule reports based on historical usage data by Pod, Namespace, and cluster-wide
* Metering can create scheduled reports or one-time reports (see the sketch below)
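A hedged one-time Report sketch; the query name and time window are examples, and the exact apiVersion and field names can differ between Metering releases, so defer to the lab guide.

```bash
# Check report status; wait for "Finished" before downloading results.
oc get reports -n openshift-metering

# A run-once report against a built-in query (sketch only).
oc apply -f - <<'EOF'
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-once
  namespace: openshift-metering
spec:
  query: namespace-cpu-request
  reportingStart: "2020-01-01T00:00:00Z"
  reportingEnd: "2020-01-31T23:59:59Z"
  runImmediately: true
EOF
```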
### Lab Dependency
* NA
---
## Taints and Tolerations
### Lab Overview
* Taint nodes and observe the application deployment behavior
* You may not be able to spread the pods evenly across workers; apply the same taint to multiple nodes
* Change the $TTNODE var to a worker node
* Apply a toleration to the deployment
### Key Takeaways
* Taints repel pods; nodeSelector attracts pods
* Once a node is tainted, it should not accept any workload that does not tolerate the taint(s) (see the sketch below)
* Three basic taint effects: NoSchedule, NoExecute, PreferNoSchedule
* Possible confusion: pods that do **not match** the taint are **not scheduled** onto that node
* A toleration is a way for pods to "tolerate" (or "ignore") a node’s taint during scheduling
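A hedged taint/toleration sketch; the node name, deployment name, and key/value are placeholders.

```bash
# Taint a worker node so that only pods tolerating the taint land on it.
oc adm taint nodes <node-name> key1=value1:NoSchedule

# Give a deployment's pod template a matching toleration.
oc patch deployment/<app> --type=merge -p \
  '{"spec":{"template":{"spec":{"tolerations":[{"key":"key1","operator":"Equal","value":"value1","effect":"NoSchedule"}]}}}}'

# Remove the taint again when finished.
oc adm taint nodes <node-name> key1-
```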
### Lab Dependency
* Infrastructure Nodes and Operators - require infra node