# Set Up RBAC in a GKE Cluster
## Create an OAuth App
For this guide we will use Google as the auth provider. You can follow the steps outlined in the official Backstage.io docs:
https://backstage.io/docs/auth/google/provider
> **NOTE:**
> When creating a cluster using testbed we are given a URL to access TAP-GUI in the form:
> `tap-gui.tap.<CLUSTER_NAME>.tapdemo.vmware.com/`

We need both that URL and the localhost URLs set up in the **Authorized JavaScript origins** and in the **Authorized redirect URIs** fields.
#### Example
Authorized JavaScript origins:
- https://tap-gui.tap.mycluster.tapdemo.vmware.com
- http://localhost:3000

Authorized redirect URIs:
- https://tap-gui.tap.mycluster.tapdemo.vmware.com/api/auth/google/handler/frame
- http://localhost:7007/api/auth/google/handler/frame
## Enable Identity Service for GKE
This process usually takes several minutes:
```bash
gcloud container clusters update $CLUSTER_NAME --enable-identity-service --region $CLUSTER_REGION --project $CLUSTER_PROJECT
```
After it is done, several Kubernetes objects are created. We just need to focus on one of them: the default `ClientConfig` in the `kube-public` namespace.
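Before moving on, you can optionally confirm the feature is active by describing the cluster (a quick check; this assumes your gcloud version exposes the `identityServiceConfig` field):
```bash
# Prints "True" once Identity Service is enabled on the cluster
gcloud container clusters describe $CLUSTER_NAME --region $CLUSTER_REGION --project $CLUSTER_PROJECT --format="value(identityServiceConfig.enabled)"
```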
## Configure Identity Service for GKE
### Get the default client config
```bash
kubectl get clientconfig default -n kube-public -o yaml > client-config.yaml
```
### Update the spec.authentication
By default the `client-config.yaml` doesn't contain an `authentication` section, so we need to add one to the file.
```yaml=
apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
metadata:
  creationTimestamp: "2023-04-03T15:57:37Z"
  generation: 1
  name: default
  namespace: kube-public
  resourceVersion: "143607"
  uid: SOME_UID
spec:
  certificateAuthorityData: ...
  internalServer: ""
  name: CLUSTER_NAME
  server: CLUSTER_URL
  # Add the authentication section below this line:
  authentication:
  - name: oidc
    oidc:
      clientID: CLIENT_ID
      cloudConsoleRedirectURI: https://console.cloud.google.com/kubernetes/oidc
      extraParams: prompt=consent,access_type=offline
      issuerURI: https://accounts.google.com
      kubectlRedirectURI: http://localhost:7007/api/auth/google/handler/frame
      scopes: openid, email
      userClaim: email
  # end of the authentication section
status: {}
```
It is important that `kubectlRedirectURI` matches the redirect URI that was configured when creating the OAuth application in the first step.
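A quick way to double-check the value you just added (a simple grep against the file):
```bash
# Should print the kubectlRedirectURI line from spec.authentication
grep 'kubectlRedirectURI' client-config.yaml
```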
### Update the client config
```bash
kubectl apply -f client-config.yaml
```
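To confirm the authentication section was persisted, you can read it back with a JSONPath query (a sketch; the field path follows the ClientConfig shown above):
```bash
# Should print the CLIENT_ID configured in the oidc authentication entry
kubectl get clientconfig default -n kube-public -o jsonpath='{.spec.authentication[0].oidc.clientID}'
```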
### Get the login file
Create a copy of the `client-config.yaml` created in the previous step:
```bash
cp client-config.yaml login-config.yaml
```
Open `login-config.yaml` and adjust the `spec.authentication` entry with the client secret from the OAuth app:
```yaml=
...
spec:
  certificateAuthorityData: ...
  internalServer: ""
  name: CLUSTER_NAME
  server: CLUSTER_URL
  authentication:
  - name: oidc
    oidc:
      clientID: CLIENT_ID
      # Add the CLIENT_SECRET here:
      clientSecret: CLIENT_SECRET
      cloudConsoleRedirectURI: https://console.cloud.google.com/kubernetes/oidc
      extraParams: prompt=consent,access_type=offline
      issuerURI: https://accounts.google.com
      kubectlRedirectURI: http://localhost:7007/api/auth/google/handler/frame
      scopes: openid, email
      userClaim: email
...
```
This file can be used later by developers to log in to the cluster.
## Create an RBAC policy for your cluster
To grant access to resources in a particular namespace, we need to create a Role and a RoleBinding.
### Create a new developer namespace
```bash
kubectl create namespace my-apps-1
```
The previous command creates a new namespace called `my-apps-1`; however, that namespace is not yet ready to host workloads since it lacks the proper resources.
In order to provision those resources we must add the `apps.tanzu.vmware.com/tap-ns=""` label:
```bash
kubectl label namespaces my-apps-1 apps.tanzu.vmware.com/tap-ns=""
```
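You can confirm the label landed with:
```bash
# The output should include apps.tanzu.vmware.com/tap-ns=
kubectl get namespace my-apps-1 --show-labels
```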
You can verify that the namespace now has the proper resources by running:
```bash
kubectl get secrets,serviceaccount,rolebinding,pods,workload,configmap -n my-apps-1
```
### Create a new Role
In this step we will create a Role scoped to the previously created namespace (unlike ClusterRoles, which grant access cluster-wide, Roles grant access to resources within a single namespace).
Create a `my-apps-1-role.yaml` file and update it with the following content:
```yaml=
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-apps-1
  name: my-apps-1-role
rules:
- apiGroups: ['']
  resources: ['pods', 'pods/log', 'services', 'configmaps', 'limitranges']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['metrics.k8s.io']
  resources: ['pods']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['apps']
  resources: ['deployments', 'replicasets', 'statefulsets', 'daemonsets']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['autoscaling']
  resources: ['horizontalpodautoscalers']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['networking.k8s.io']
  resources: ['ingresses']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['networking.internal.knative.dev']
  resources: ['serverlessservices']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['autoscaling.internal.knative.dev']
  resources: ['podautoscalers']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['serving.knative.dev']
  resources:
  - configurations
  - revisions
  - routes
  - services
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['carto.run']
  resources:
  - clusterconfigtemplates
  - clusterdeliveries
  - clusterdeploymenttemplates
  - clusterimagetemplates
  - clusterruntemplates
  - clustersourcetemplates
  - clustersupplychains
  - clustertemplates
  - deliverables
  - runnables
  - workloads
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['source.toolkit.fluxcd.io']
  resources:
  - gitrepositories
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['source.apps.tanzu.vmware.com']
  resources:
  - imagerepositories
  - mavenartifacts
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['conventions.apps.tanzu.vmware.com']
  resources:
  - podintents
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['kpack.io']
  resources:
  - images
  - builds
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['scanning.apps.tanzu.vmware.com']
  resources:
  - sourcescans
  - imagescans
  - scanpolicies
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['tekton.dev']
  resources:
  - taskruns
  - pipelineruns
  - pipelines
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['kappctrl.k14s.io']
  resources:
  - apps
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ['batch']
  resources: ['jobs', 'cronjobs']
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Create the role:
```bash
kubectl apply -f my-apps-1-role.yaml
```
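You can inspect the new Role before binding it to a user:
```bash
# Lists the API groups, resources, and verbs granted by the Role
kubectl describe role my-apps-1-role -n my-apps-1
```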
### Create a RoleBinding
A role binding grants the permissions defined in a role to a user or set of users.
Create a `my-apps-1-rolebinding.yaml` file and update it with the following content:
```yaml=
# Here we grant the user USER_ID the permissions defined in
# my-apps-1-role, which are limited to the my-apps-1 namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-apps-1-rolebinding
  namespace: my-apps-1
subjects:
- kind: User
  name: USER_ID
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: my-apps-1-role
  apiGroup: rbac.authorization.k8s.io
```
Create the RoleBinding:
```bash
kubectl apply -f my-apps-1-rolebinding.yaml
```
> **Note:**
> USER_ID is the ID returned by the OAuth provider (usually an email, but it could be a username). For Google Auth it is the user's email address.
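Before logging in as the user, a cluster admin can dry-run the permissions with impersonation (a quick check; substitute the same USER_ID used in the RoleBinding):
```bash
# Expect "yes" in the scoped namespace...
kubectl auth can-i list pods --namespace my-apps-1 --as USER_ID
# ...and "no" everywhere else
kubectl auth can-i list pods --namespace default --as USER_ID
```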
## Verify on the CLI
To verify that the permissions are properly set we need to do the following:
**Install kubectl-oidc**
```bash
gcloud components install kubectl-oidc
```
**Log in to the cluster**
```bash
kubectl oidc login --cluster=CLUSTER_NAME --login-config=login-config.yaml
```
After running the login command you will be taken to the browser to log in using the OAuth provider that was configured in `client-config.yaml`.
If everything was set up correctly you should see a message in the browser similar to:
> Authentication successful. Please close this window.
And in the console:
```bash
2023/04/03 14:48:44 Started webserver on localhost:7007.
2023/04/03 14:48:44 Attempting to open http://127.0.0.1:7007/login in default browser.
2023/04/03 14:48:49 OIDC Authentication successful.
```
Now if you try to view resources in namespaces other than `my-apps-1` you will get an error message:
```bash
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "YOUR_USER_ID" cannot list resource "pods" in API group "" in the namespace "default"
```
However, if you scope the query to the `my-apps-1` namespace, everything should work properly:
```bash
$ kubectl get pods -n my-apps-1
No resources found in my-apps-1 namespace.
```
## Create a workload in the new developer namespace
```bash
tanzu apps workload create <WORKLOAD_NAME> --namespace my-apps-1 --git-branch main --git-repo https://github.com/jcospina/node-express --label apps.tanzu.vmware.com/has-tests=true --label app.kubernetes.io/part-of=node-express-accelerator --type web --yes
```
> **Note:**
> You can use whatever repo you like; https://github.com/jcospina/node-express is provided as a working example.
After running the command you'll see output such as:
```bash
Create workload:
1 + |---
2 + |apiVersion: carto.run/v1alpha1
3 + |kind: Workload
4 + |metadata:
5 + | labels:
6 + | app.kubernetes.io/part-of: node-express-accelerator
7 + | apps.tanzu.vmware.com/has-tests: "true"
8 + | apps.tanzu.vmware.com/workload-type: web
9 + | name: node-workload
10 + | namespace: my-apps-1
11 + |spec:
12 + | source:
13 + | git:
14 + | ref:
15 + | branch: main
16 + | url: https://github.com/jcospina/node-express
Created workload "node-workload"
To see logs: "tanzu apps workload tail node-workload --namespace my-apps-1"
To get status: "tanzu apps workload get node-workload --namespace my-apps-1"
```
> **Note:**
> Verify that the `namespace` property matches the developer namespace created in the previous steps.
Now if you query for workloads inside the `my-apps-1` namespace you should see the newly created workload:
```bash
$ kubectl get workloads -n my-apps-1
NAME            SOURCE                                      SUPPLYCHAIN               READY   REASON                 AGE
node-workload   https://github.com/jcospina/node-express   source-test-scan-to-url   False   HealthyConditionRule   2m2s
```
> **Note:**
> By default the testbed cluster doesn't have a testing pipeline or scan policy configured, so there will be errors on the workload.
## Create a workload in another namespace
When logging in to the cluster using `kubectl oidc` a new context is created in your kubeconfig. Since in this document the cluster owner and the namespace-scoped user are the same person, we can switch contexts back to the admin context in order to perform operations that would otherwise be restricted.
List the current contexts:
```bash
$ kubectl config get-contexts
CURRENT   NAME                            CLUSTER          AUTHINFO                   NAMESPACE
          <DEFAULT_CONTEXT>               <CLUSTER_NAME>   <AUTHINFO>
*         rbac-rbac-anthos-default-user   rbac             rbac-anthos-default-user
```
You will notice that the new context has a `*` under CURRENT; we want to go back to the default context.
```bash
$ kubectl config use-context <DEFAULT_CONTEXT>
Switched to context "<DEFAULT_CONTEXT>".
```
Now that we are in the default context and can access all cluster resources, we'll create a new workload in the `my-apps` namespace:
```bash
tanzu apps workload create <MY_APPS_WORKLOAD> --namespace my-apps --git-branch main --git-repo https://github.com/jcospina/node-express --label apps.tanzu.vmware.com/has-tests=true --label app.kubernetes.io/part-of=node-express-accelerator --type web --yes
```
If you now query the cluster for the list of workloads across all namespaces you should see two:
```bash
$ kubectl get workloads -A
NAMESPACE   NAME               SOURCE                                      SUPPLYCHAIN               READY   REASON                 AGE
my-apps-1   node-workload      https://github.com/jcospina/node-express   source-test-scan-to-url   False   HealthyConditionRule   35m
my-apps     my-apps-workload   https://github.com/jcospina/node-express   source-test-scan-to-url   False   HealthyConditionRule   47s
```
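When you are done with admin-only operations, you can switch back to the namespace-scoped user at any time; the context name comes from the `kubectl config get-contexts` output above (yours may differ):
```bash
kubectl config use-context rbac-rbac-anthos-default-user
```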
## Adding OIDC to TAP-GUI
So far we have configured the cluster to use OIDC as an authentication method and added a new role scoped to the `my-apps-1` namespace. Now we will configure TAP-GUI to connect to the cluster and use OIDC as well.
### Get the tap-values.yaml
Extract the current tap-values.yaml from the cluster:
```bash
kubectl get secret tap-tap-install-values -n tap-install -o json | jq -r '.data | to_entries[] | select(.key | test("^tap-values.*\\.yaml$")) | .value' | base64 -d > tap-values.yaml
```
> **Note:**
> For the previous command to work you need to have [jq](https://stedolan.github.io/jq/) installed.
The tap-values.yaml looks like this:
```yaml=
accelerator:
  ingress:
    include: true
  domain: tap.rbac.tapdemo.vmware.com
  server:
    service_type: ClusterIP
buildservice:
  kp_default_repository: tapacr.azurecr.io/rbac/build-service
  kp_default_repository_secret:
    name: registry-credentials
    namespace: my-apps
ceip_policy_disclosed: true
contour:
  envoy:
    service:
      type: LoadBalancer
    hostPorts:
      http: 0
      https: 0
grype:
  namespace: my-apps
  targetImagePullSecret: registry-credentials
learningcenter:
  ingressDomain: learning-center.tap.rbac.tapdemo.vmware.com
ootb_supply_chain_basic:
  gitops:
    ssh_secret: ""
  registry:
    repository: <CLUSTER_NAME>/apps-03-04-2023-14-57-55-511246857
    server: tapacr.azurecr.io
ootb_supply_chain_testing:
  gitops:
    ssh_secret: ""
  registry:
    repository: <CLUSTER_NAME>/apps-03-04-2023-14-57-55-511246857
    server: tapacr.azurecr.io
ootb_supply_chain_testing_scanning:
  gitops:
    ssh_secret: ""
  registry:
    repository: <CLUSTER_NAME>/apps-03-04-2023-14-57-55-511246857
    server: tapacr.azurecr.io
profile: full
supply_chain: testing_scanning
cnrs:
  ingress_issuer: ""
  domain_name: tap.<CLUSTER_NAME>.tapdemo.vmware.com
tap_gui:
  ingressEnabled: true
  ingressDomain: tap.<CLUSTER_NAME>.tapdemo.vmware.com
  service_type: ClusterIP
  app_config:
    proxy:
      /metadata-store:
        target: https://metadata-store-app.metadata-store:8443/api/v1
        changeOrigin: true
        secure: false
        headers:
          Authorization: Bearer ...
          X-Custom-Source: project-star
source_controller:
  ca-cert-data: null
metadata_store:
  ns_for_export_app_cert: '*'
tap_telemetry:
  installed_for_vmware_internal_use: "true"
appsso:
  domain_name: tap.<CLUSTER_NAME>.tapdemo.vmware.com
```
### Update the tap-values.yaml
All updates will be done under the `tap_gui` section.
Add an `auth` section and a `kubernetes` section under `app_config` in `tap-values.yaml`:
```yaml=
...
tap_gui:
  ingressEnabled: true
  ingressDomain: tap.<CLUSTER_NAME>.tapdemo.vmware.com
  service_type: ClusterIP
  app_config:
    proxy:
      /metadata-store:
        target: https://metadata-store-app.metadata-store:8443/api/v1
        changeOrigin: true
        secure: false
        headers:
          Authorization: Bearer ...
          X-Custom-Source: project-star
    auth:
      environment: development
      providers:
        google:
          development:
            clientId: CLIENT_ID
            clientSecret: CLIENT_SECRET
    kubernetes:
      clusterLocatorMethods:
      - type: config
        clusters:
        - name: DISPLAY_NAME
          url: CLUSTER_URL
          authProvider: google
          skipTLSVerify: true
          skipMetricsLookup: true
...
```
Where CLIENT_ID and CLIENT_SECRET are the values obtained from the OAuth app, and CLUSTER_URL is the URL from the `client-config.yaml` file.
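If you don't have the URL at hand, you can pull it straight out of the file (a simple grep; the `server:` field holds CLUSTER_URL in the ClientConfig shown earlier):
```bash
# Prints the "server: CLUSTER_URL" line from the ClientConfig
grep 'server:' client-config.yaml
```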
Save and update the tap-values:
```bash
tanzu package installed update tap -p tap.tanzu.vmware.com -v $TAP_VERSION --values-file tap-values.yaml -n tap-install
```
Where $TAP_VERSION can be obtained by running:
```bash
TAP_VERSION=$(kubectl get pkgi tap -n tap-install -o jsonpath="{.status.version}")
```
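After the update you can watch the package reconcile with the tanzu CLI:
```bash
# Shows the install status; wait for "Reconcile succeeded"
tanzu package installed get tap -n tap-install
```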