# Red Hat Certified Specialist in OpenShift 4.2 Administration Exam Prep (ex280)
###### tags: RHCA
* [Product Documentation for Red Hat OpenShift Local 2.16 | Red Hat Customer Portal](https://access.redhat.com/documentation/en-us/red_hat_openshift_local/2.16#minimum-system-requirements_gsg)
* [Create an OpenShift cluster | Red Hat OpenShift Cluster Manager](https://console.redhat.com/openshift/create/local)
## Installation
### Installation and Upgrade
Download `crc`:
```bash!
$ wget https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
$ tar xvf crc-linux-amd64.tar.xz
$ mkdir -p ~/bin
$ cp crc-linux-*-amd64/crc ~/bin
$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```
New Instance:
```bash!
$ crc version
$ crc setup
$ crc config set consent-telemetry yes
$ crc config set enable-cluster-monitoring true
$ crc config set memory 14336
$ crc start
$ crc stop
```
Upgrade:
```bash!
$ crc delete
$ crc version
$ crc setup
$ crc start
```
### Accessing OpenShift
```bash!
$ crc oc-env
$ crc console --credentials
$ oc login -u developer -p developer https://api.crc.testing:6443
$ oc login -u kubeadmin -p t5dII-TyH8T-yzZab-ZDEnr https://api.crc.testing:6443
$ oc get nodes
$ oc get co
$ crc console
```
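`crc oc-env` only prints the export line; the usual pattern is to `eval` its output so `oc` lands on the PATH of the current shell (a quick sketch):
```bash!
$ eval $(crc oc-env)
$ oc whoami
```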
## Manage OpenShift Container Platform
### Understand and Use the Command Line and Web Console
```bash!
$ oc help
```
### Create and Delete Projects
```bash!
$ oc projects
$ oc new-project demotmp --description="Throw away project for demo." --display-name="Garbage"
$ oc new-project demo --description="Demo project for our lesson." --display-name="Demo Project"
$ oc project demotmp
$ oc new-app https://github.com/openshift/ruby-hello-world.git
$ oc delete project demotmp
$ oc get pod
$ oc get all
```
### Import, Export, and Configure Kubernetes Resources
```bash!
$ oc get pod -o yaml > ./resources/pod.yaml
$ oc replace -f resources/pod.yaml
$ oc get secrets
$ oc extract secret/builder-token-26dhg --to=resources/
```
### Examine Resources and Cluster Status
```bash!
$ oc adm top node
$ oc adm top node crc-8tnb7-master-0
```
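A few more stock `oc` commands that help with a quick cluster health check (the node name is the CRC master used above):
```bash!
$ oc status
$ oc get clusterversion
$ oc describe node crc-8tnb7-master-0
```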
### View Logs
```bash!
$ oc logs -f pod/ruby-hello-world-54776cb746-fj54x --tail=5
$ oc logs bc/ruby-hello-world
$ oc logs --version=1 bc/ruby-hello-world
```
### Monitor Cluster Events and Alerts
```bash!
$ oc get events -n demo
$ oc get events -n openshift-config
```
## Manage Users and Policies
```bash!
$ wget https://raw.githubusercontent.com/linuxacademy/content-openshift-2020/master/htpasswd_cr.yaml
```
### Configure the HTPasswd Identity Provider for Authentication
```bash!
$ htpasswd -c -B admin.htpasswd admin
$ oc create secret generic htpasswd-admin --from-file=htpasswd=admin.htpasswd -n openshift-config
$ vim htpasswd.yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: developer
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
  - name: admin
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-admin
$ oc apply -f htpasswd.yaml
$ oc login -u admin -p admin https://api.crc.testing:6443
$ oc whoami
```
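Applying the OAuth resource makes the authentication operator roll out new `oauth-openshift` pods, which can take a minute or two; a rough way to watch progress before trying to log in:
```bash!
$ oc get clusteroperators authentication
$ oc get pods -n openshift-authentication
```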
### Create and Delete Users
Create:
```bash!
$ oc get secrets -n openshift-config
$ oc extract secret/htpass-secret --to - -n openshift-config > users.htpasswd
$ for user in andy eric levi; do htpasswd -B -b users.htpasswd $user dontabuse; done
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run -o yaml -n openshift-config | oc replace -f -
```
Delete:
```bash!
$ vim users.htpasswd
# Delete the line for the user "andy"
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run -o yaml -n openshift-config | oc replace -f -
$ oc get identities
$ oc delete identity.user.openshift.io htpass-secret:andy
$ oc get users
$ oc delete users.user.openshift.io andy
```
### Modify User Passwords
```bash!
$ oc get secrets -n openshift-config
$ oc extract secret/htpass-secret --to - -n openshift-config > users.htpasswd
$ htpasswd -B users.htpasswd andy
$ htpasswd -B users.htpasswd eric
$ htpasswd -B users.htpasswd levi
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run -o yaml -n openshift-config | oc replace -f -
```
### Modify User and Group Permissions
```bash!
$ oc new-project demo
$ oc new-app https://github.com/openshift/ruby-hello-world.git
$ oc adm policy add-role-to-user view andy -n demo
$ oc adm policy add-role-to-user edit eric -n demo
$ oc adm policy add-role-to-user admin levi -n demo
$ oc describe rolebinding -n demo
$ oc adm policy add-cluster-role-to-user cluster-admin admin
# Remove the default kubeadmin user
$ oc delete secrets kubeadmin -n kube-system
```
### Create and Manage Groups
```bash!
$ oc new-project demo2
$ oc adm groups new dev
$ oc adm groups add-users dev levi eric andy emily
$ oc adm policy add-role-to-group edit dev -n demo2
```
## Control Access to Resources
### Define Role-Based Access Controls
```bash!
$ oc new-project demo3
$ oc create role podview --verb=get --resource=pod -n demo3
$ oc adm policy add-role-to-user podview andy --role-namespace=demo3 -n demo3
$ oc describe rolebinding -n demo3
$ oc create clusterrole podviewonly --verb=get --resource=pod
$ oc adm policy add-cluster-role-to-user podviewonly emily
$ oc describe clusterrolebindings podviewonly
```
### Apply Permissions to Users
```bash!
$ oc adm policy who-can get pod
$ oc adm policy who-can create pod
```
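To test a single user rather than listing everyone, `oc auth can-i` plus impersonation works; a sketch using the users created earlier:
```bash!
$ oc auth can-i get pods -n demo3 --as=andy
$ oc auth can-i create pods -n demo3 --as=andy
```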
### Create and Apply Secrets to Manage Sensitive Information
```bash!
$ echo -n 'andy' | base64
$ echo -n 'dontabuse' | base64
$ vim secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: demo3
type: Opaque
data:
  username: YW5keQ==
  password: ZG9udGFidXNl
$ oc create -f secret.yaml
$ oc get secrets -n demo3
$ oc describe secrets/test-secret -n demo3
```
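A secret only matters once a pod consumes it; below is a minimal sketch of a pod that injects `test-secret` as environment variables (the pod name and image are illustrative, not part of the course material):
```bash!
$ vim secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
  namespace: demo3
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "3600"]
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: username
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: password
$ oc create -f secret-pod.yaml
$ oc rsh -n demo3 secret-demo env | grep -E 'USERNAME|PASSWORD'
```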
### Create Service Accounts and Apply Permissions Using Security Context Constraints
```bash!
$ oc create sa demosa
$ oc describe sa/demosa
$ vim scc-admin.yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: scc-admin
allowPrivilegedContainer: true
fsGroup:
  type: RunAsAny
readOnlyRootFilesystem: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users:
- demosa
requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
$ oc create -f scc-admin.yaml
$ oc describe scc/scc-admin
```
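Listing a service account directly under `users:` requires the fully qualified form (`system:serviceaccount:<project>:demosa`); the more common route is to bind the SCC with `oc adm policy`, where `-z` refers to a service account in the current project:
```bash!
$ oc adm policy add-scc-to-user scc-admin -z demosa
```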
## Configure Networking Components
:::danger
* Solve the DNS issue with CRC:
[Wildcard DNS resolution for apps-crc.testing does not appear to be working · Issue #2889 · crc-org/crc](https://github.com/crc-org/crc/issues/2889#issuecomment-1228176286)
:::
### Troubleshoot Software Defined Networking
```bash!
$ oc get -n openshift-network-operator deployment/network-operator
$ oc get clusteroperators.config.openshift.io network
$ oc describe network/cluster
$ oc logs -n openshift-network-operator deployments/network-operator --tail 10
$ oc get -n openshift-dns-operator deployment/dns-operator
$ oc get clusteroperators.config.openshift.io dns
$ oc describe clusteroperators.config.openshift.io dns
$ oc logs -n openshift-dns-operator deployments/dns-operator
$ oc get ep -n demo
$ oc get pods -n demo -o wide
```
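Beyond the operators, the per-node DNS and SDN pods live in their own namespaces and are worth a look when troubleshooting (standard namespaces on an SDN-based 4.x cluster):
```bash!
$ oc get pods -n openshift-dns
$ oc get pods -n openshift-sdn
```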
### Create and Edit External Routes
```bash!
$ oc expose service ruby-hello-world -l name=hello --name=helloworld
$ oc annotate route helloworld --overwrite haproxy.router.openshift.io/timeout=5s
$ oc annotate route helloworld router.openshift.io/helloworld="-helloworld_annotation"
$ oc annotate route helloworld haproxy.router.openshift.io/rate-limit-connection=true
$ oc describe route
```
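A quick check that the route answers (assuming the default `<route>-<project>` hostname pattern, i.e. `helloworld-demo.apps-crc.testing` here):
```bash!
$ curl -s http://helloworld-demo.apps-crc.testing
```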
### Control Cluster Network Ingress
```bash!
$ oc project demo3
$ oc new-app django-psql-example
$ oc get services -n demo3
$ oc get routes.route.openshift.io -n demo3
$ vim demo3-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo3-ingress
  namespace: demo3
spec:
  rules:
  - host: django-psql-example-demo3.apps-crc.testing
    http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: postgresql
            port:
              number: 5432
$ oc apply -f demo3-ingress.yaml -n demo3
$ oc get ingress -n demo3
```
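On OpenShift, Ingress objects are translated into Routes by the ingress-to-route controller, so a matching route should appear next to the Ingress:
```bash!
$ oc get routes.route.openshift.io -n demo3
$ oc describe ingress demo3-ingress -n demo3
```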
### Create a Self Signed Certificate
```bash!
$ openssl req -x509 -newkey rsa:4096 -nodes -keyout helloworld.key -out helloworld.crt
# Common Name should be exactly the hostname you are going to expose
# Example: helloworld-demo.apps-crc.testing
$ openssl rsa -in cert.withpass.key -out cert.withoutpass.key
```
:::info
* How to view the information in the certificate?
```bash!
$ openssl x509 -in <cert.crt> -text -noout
```
:::
### Secure Routes Using TLS Certificates
```bash!
$ oc project demo
$ oc get route
$ oc delete route helloworld
$ oc get service
$ oc create route edge helloworld --service=ruby-hello-world --cert=helloworld.crt --key=helloworld.key --hostname=helloworld-demo.apps-crc.testing
```
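To confirm the edge route is actually serving the self-signed certificate, `curl -k` or `openssl s_client` against the router works; a sketch using the hostname from the route above:
```bash!
$ curl -k https://helloworld-demo.apps-crc.testing
$ openssl s_client -connect helloworld-demo.apps-crc.testing:443 -servername helloworld-demo.apps-crc.testing < /dev/null | openssl x509 -noout -subject -dates
```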
## Configure Pod Scheduling
```bash!
$ wget https://raw.githubusercontent.com/linuxacademy/content-openshift-2020/master/quota.yaml
$ wget https://raw.githubusercontent.com/linuxacademy/content-openshift-2020/master/resource_limits.yaml
```
### Limit Resource Usage
```bash!
$ vim compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    limits.cpu: "4"
    limits.memory: "2Gi"
    limits.ephemeral-storage: "4Gi"
    configmaps: "7"
    secrets: "10"
    services: "8"
$ oc create -f compute-resources.yaml -n demo
$ oc get resourcequotas -n demo
$ oc describe resourcequotas compute-resources -n demo
```
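The ResourceQuota above caps project-wide totals; the `resource_limits.yaml` downloaded earlier is presumably a LimitRange, which sets per-container defaults and bounds instead. A minimal sketch with illustrative values (not the course file):
```bash!
$ vim resource-limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 500m
      memory: 256Mi
    max:
      cpu: "1"
      memory: 1Gi
$ oc create -f resource-limits.yaml -n demo
$ oc describe limitrange resource-limits -n demo
```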
### Scale Applications to Meet Increased Demand
```bash!
$ oc scale --replicas 3 dc/django-psql-example
$ oc get dc
$ oc scale --replicas 2 deployment/ruby-hello-world
$ oc get deployment
$ oc autoscale dc/django-psql-example --min 1 --max 4 --cpu-percent 90
$ oc autoscale deployment/ruby-hello-world --min 1 --max 5 --cpu-percent=75
$ oc get hpa
```
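The HPA computes CPU utilization against the pods' requests, so autoscaling only works once requests are set; `oc set resources` can add them (values here are illustrative):
```bash!
$ oc set resources dc/django-psql-example --requests=cpu=100m
$ oc set resources deployment/ruby-hello-world --requests=cpu=100m --limits=cpu=500m
$ oc describe hpa
```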
### Control Pod Placement Across Cluster Nodes
```bash!
$ vim policy.cfg
{
  "kind" : "Policy",
  "apiVersion" : "v1",
  "predicates" : [
    {"name" : "PodFitsHostPorts"},
    {"name" : "PodFitsResources"},
    {"name" : "NoDiskConflict"},
    {"name" : "NoVolumeZoneConflict"},
    {"name" : "MatchNodeSelector"},
    {"name" : "MaxEBSVolumeCount"},
    {"name" : "MaxAzureDiskVolumeCount"},
    {"name" : "checkServiceAffinity"},
    {"name" : "PodToleratesNodeNoExecuteTaints"},
    {"name" : "MaxGCEPDVolumeCount"},
    {"name" : "MatchInterPodAffinity"},
    {"name" : "PodToleratesNodeTaints"},
    {"name" : "HostName"}
  ],
  "priorities" : [
    {"name" : "LeastRequestedPriority", "weight" : 1},
    {"name" : "BalancedResourceAllocation", "weight" : 1},
    {"name" : "ServiceSpreadingPriority", "weight" : 1},
    {"name" : "EqualPriority", "weight" : 1}
  ]
}
$ oc create configmap -n openshift-config --from-file=policy.cfg pod-scheduler-policy
$ oc get configmap pod-scheduler-policy -n openshift-config
$ oc get schedulers.config.openshift.io cluster -o yaml
$ oc patch schedulers.config.openshift.io cluster --type='merge' -p '{"spec":{"policy":{"name":"pod-scheduler-policy"}}}'
$ oc get schedulers.config.openshift.io cluster -o yaml
$ oc get pod -n openshift-kube-scheduler
$ oc logs openshift-kube-scheduler-crc-8tnb7-master-0 -n openshift-kube-scheduler
$ oc logs openshift-kube-scheduler-crc-8tnb7-master-0 -n openshift-kube-scheduler | grep predicates
```
## Configure Cluster Scaling
### Manually Control the Number of Cluster Workers
```bash!
$ oc get machinesets -n openshift-machine-api
$ oc scale --replicas 2 machineset crc-8tnb7-worker-0 -n openshift-machine-api
$ oc edit machinesets.machine.openshift.io crc-8tnb7-worker-0 -n openshift-machine-api
$ oc get nodes
```
### Automatically Scale the Number of Cluster Workers
```bash!
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
$ vim MachineAutoscaler.yaml
apiVersion: "autoscaling.openshift.io/v1beta1"
kind: "MachineAutoscaler"
metadata:
name: "crc-8tnb7-worker-0"
namespace: "openshift-machine-api"
spec:
minReplicas: 1
maxReplicas: 4
scaleTargetRef:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
name: crc-8tnb7-worker-0
algorithms:
- name: balanced
rules:
- utilization:
cpu: 70
minReplicas: 1
maxReplicas: 4
$ oc create -f MachineAutoscaler.yaml
$ oc get machineautoscalers.autoscaling.openshift.io -n openshift-machine-api
$ oc get machineautoscalers.autoscaling.openshift.io -n openshift-machine-api -o yaml
$ oc edit machineautoscalers.autoscaling.openshift.io -n openshift-machine-api -o yaml
$ oc describe machineautoscalers.autoscaling.openshift.io -n openshift-machine-api
```
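A MachineAutoscaler only does something when the cluster autoscaler itself is enabled through a cluster-scoped ClusterAutoscaler resource named `default`; a minimal sketch:
```bash!
$ vim ClusterAutoscaler.yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  scaleDown:
    enabled: true
$ oc create -f ClusterAutoscaler.yaml
$ oc get clusterautoscalers.autoscaling.openshift.io
```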