# **Submariner Testing**
## **TL;DR: Summary of the deployment steps**
- Prepare RHACM
- Provision two OpenShift clusters in RHACM
- Create a cluster set in RHACM
- Deploy the Submariner add-on
- Verify the Submariner connection
- Deploy a workload on multiple OpenShift clusters for testing
## **1. Submariner Installation**
In this section we deploy a Cluster Set and leverage it to deploy Submariner. Cluster Sets allow the grouping of cluster resources, which enables role-based access control management across all of the resources in the group. Submariner is an open-source tool that provides direct networking between two or more Kubernetes clusters in a given ManagedClusterSet, either on-premises or in the cloud.
Do the following steps in RHACM:
* Navigate to Infrastructure > Clusters > Cluster Sets. Click Create Cluster Set.
* Enter a name for your cluster set. Click Create.
* Click ‘Manage Resource Assignments’ to assign clusters to the set.
* Select the clusters created earlier (cluster1 and cluster2); pick at least two. Click Review, then click Save. (A CLI alternative is sketched below.)
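
If you prefer the CLI, the same cluster set can be created and populated from the hub cluster. This is a minimal sketch, not part of the lab flow: the ManagedClusterSet API version varies by ACM release (v1beta1 is assumed here), and the set name `submariner-clusterset` is only an example.
```
# Create the cluster set on the hub (API version depends on your ACM release)
cat << EOF | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ManagedClusterSet
metadata:
  name: submariner-clusterset
EOF

# Assign cluster1 and cluster2 to the set by labelling their ManagedCluster resources
oc label managedcluster cluster1 cluster.open-cluster-management.io/clusterset=submariner-clusterset --overwrite
oc label managedcluster cluster2 cluster.open-cluster-management.io/clusterset=submariner-clusterset --overwrite
```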

* Click the Submariner add-ons tab and select ‘Install Submariner add-ons’.
* Select cluster1 and cluster2 as the targets. Click Next.
* Leave the default options for cluster1 and cluster2. Click Next, then click Install. (A CLI alternative is sketched below.)
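
The add-on can also be enabled from the CLI on the hub. The sketch below assumes the API versions used by recent ACM releases (submarineraddon.open-cluster-management.io/v1alpha1 and addon.open-cluster-management.io/v1alpha1) and the default install options; adjust to your release and repeat for cluster2.
```
# On the hub, create a SubmarinerConfig and a ManagedClusterAddOn in the
# managed cluster's namespace (cluster1 shown; repeat for cluster2).
cat << EOF | oc apply -f -
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: cluster1
spec:
  gatewayConfig:
    gateways: 1
  # Cloud providers (e.g. AWS) usually also require spec.credentialsSecret here.
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: cluster1
spec:
  installNamespace: submariner-operator
EOF
```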

Verify the "submariner-operator" in cluster 1 and cluster 2


!!!You can troubleshoot from the hub OpenShift cluster: log in to the hub OCP cluster, look for the submariner-addon-xxxxxxxx-xxxx pod in the open-cluster-management namespace, and tail its log.
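
For example (a sketch; the pod name suffix is generated, so look it up first):
```
# On the hub cluster: find the submariner-addon pod, then follow its log
oc -n open-cluster-management get pods | grep submariner-addon
oc -n open-cluster-management logs -f <submariner-addon-pod-name>
```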

## **2. Submariner Verification**
Install "subctl"
```
[lab-user@bastion subscription]$ curl -Ls https://get.submariner.io | bash
Installing subctl version latest
OS detected: linux
Architecture detected: amd64
Download URL: https://github.com/submariner-io/releases/releases/download/v0.10.1/subctl-v0.10.1-linux-amd64.tar.xz
Downloading...
subctl-v0.10.1-linux-amd64 has been installed as /home/lab-user/.local/bin/subctl
This provides subctl version: v0.10.1
[lab-user@bastion subscription]$ export PATH=$PATH:~/.local/bin
[lab-user@bastion subscription]$ echo export PATH=\$PATH:~/.local/bin >> ~/.profile
```
Verify the connection between cluster1 and cluster2. The command below is executed against cluster1.
```
[lab-user@bastion subscription]$ subctl show all
Cluster "api-cluster1-sandbox294-opentlc-com:6443"
✓ Showing Connections
GATEWAY CLUSTER REMOTE IP NAT CABLE DRIVER SUBNETS STATUS RTT avg.
ip-10-0-44-187 cluster2 35.85.220.235 yes libreswan 172.31.0.0/16, 10.129.0.0/16 connected 20.642205ms
✓ Showing Endpoints
CLUSTER ID ENDPOINT IP PUBLIC IP CABLE DRIVER TYPE
cluster1 10.0.84.81 13.57.241.20 libreswan local
cluster2 10.0.44.187 35.85.220.235 libreswan remote
✓ Showing Gateways
NODE HA STATUS SUMMARY
ip-10-0-84-81 active All connections (1) are established
Discovered network details via Submariner:
Network plugin: OpenShiftSDN
Service CIDRs: [172.128.0.0/16]
Cluster CIDRs: [10.128.0.0/16]
✓ Showing Network details
⠈⠁ Showing versions COMPONENT REPOSITORY VERSION
submariner registry.redhat.io/rhacm2-tech-preview v0.9
submariner-operator registry.redhat.io/rhacm2-tech-preview d8fc118b04aaaa3
service-discovery registry.redhat.io/rhacm2-tech-preview v0.9
✓ Showing versions
Cluster "api-cluster1-sandbox294-opentlc-com:6443"
✓ Showing Connections
GATEWAY CLUSTER REMOTE IP NAT CABLE DRIVER SUBNETS STATUS RTT avg.
ip-10-0-44-187 cluster2 35.85.220.235 yes libreswan 172.31.0.0/16, 10.129.0.0/16 connected 20.642205ms
✓ Showing Endpoints
CLUSTER ID ENDPOINT IP PUBLIC IP CABLE DRIVER TYPE
cluster1 10.0.84.81 13.57.241.20 libreswan local
cluster2 10.0.44.187 35.85.220.235 libreswan remote
✓ Showing Gateways
NODE HA STATUS SUMMARY
ip-10-0-84-81 active All connections (1) are established
Discovered network details via Submariner:
Network plugin: OpenShiftSDN
Service CIDRs: [172.128.0.0/16]
Cluster CIDRs: [10.128.0.0/16]
✓ Showing Network details
⠈⠁ Showing versions COMPONENT REPOSITORY VERSION
submariner registry.redhat.io/rhacm2-tech-preview v0.9
submariner-operator registry.redhat.io/rhacm2-tech-preview d8fc118b04aaaa3
service-discovery registry.redhat.io/rhacm2-tech-preview v0.9
✓ Showing versions
Cluster "cluster-8rfq6"
⚠ Submariner is not installed
Cluster "api-cluster-8rfq6-8rfq6-sandbox294-opentlc-com:6443"
⚠ Submariner is not installed
```
Show the networks from cluster2. You should see output similar to cluster1's, which shows that the clusters are joined and can communicate. !!!The Service and Cluster CIDRs must not overlap between cluster1 and cluster2!!! Make sure you configure different CIDRs during OCP deployment (see the install-config sketch after the output below).
```
[lab-user@bastion ~]$ subctl show networks
Cluster "api-cluster2-sandbox294-opentlc-com:6443"
Discovered network details via Submariner:
Network plugin: OpenShiftSDN
Service CIDRs: [172.31.0.0/16]
Cluster CIDRs: [10.129.0.0/16]
✓ Showing Network details
Cluster "cluster-8rfq6"
Discovered network details
Network plugin: OpenShiftSDN
Service CIDRs: [172.30.0.0/16]
Cluster CIDRs: [10.128.0.0/14]
✓ Showing Network details
Cluster "api-cluster-8rfq6-8rfq6-sandbox294-opentlc-com:6443"
Discovered network details
Network plugin: OpenShiftSDN
Service CIDRs: [172.30.0.0/16]
Cluster CIDRs: [10.128.0.0/14]
✓ Showing Network details
Cluster "api-cluster1-sandbox294-opentlc-com:6443"
Discovered network details via Submariner:
Network plugin: OpenShiftSDN
Service CIDRs: [172.128.0.0/16]
Cluster CIDRs: [10.128.0.0/16]
✓ Showing Network details
Cluster "api-cluster1-sandbox294-opentlc-com:6443"
Discovered network details via Submariner:
Network plugin: OpenShiftSDN
Service CIDRs: [172.128.0.0/16]
Cluster CIDRs: [10.128.0.0/16]
✓ Showing Network details
```
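One way to guarantee non-overlapping CIDRs is to set them explicitly in each cluster's install-config.yaml at OCP installation time. This is a minimal sketch using the values seen above for cluster2; the hostPrefix value is an illustrative assumption.
```
# Excerpt of install-config.yaml for cluster2; cluster1 must use different,
# non-overlapping ranges (e.g. 10.128.0.0/16 and 172.128.0.0/16 as shown above).
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.129.0.0/16
    hostPrefix: 23
  serviceNetwork:
  - 172.31.0.0/16
```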
Use "subctl show connections" to check the Gateway IP an dsubnets information.
```
[lab-user@bastion subscription]$ subctl show connections
Cluster "api-cluster-8rfq6-8rfq6-sandbox294-opentlc-com:6443"
⚠ Submariner is not installed
Cluster "api-cluster1-sandbox294-opentlc-com:6443"
✓ Showing Connections
GATEWAY CLUSTER REMOTE IP NAT CABLE DRIVER SUBNETS STATUS RTT avg.
ip-10-0-44-187 cluster2 35.85.220.235 yes libreswan 172.31.0.0/16, 10.129.0.0/16 connected 20.772435ms
Cluster "api-cluster1-sandbox294-opentlc-com:6443"
✓ Showing Connections
GATEWAY CLUSTER REMOTE IP NAT CABLE DRIVER SUBNETS STATUS RTT avg.
ip-10-0-44-187 cluster2 35.85.220.235 yes libreswan 172.31.0.0/16, 10.129.0.0/16 connected 20.772435ms
Cluster "api-cluster2-sandbox294-opentlc-com:6443"
✓ Showing Connections
GATEWAY CLUSTER REMOTE IP NAT CABLE DRIVER SUBNETS STATUS RTT avg.
ip-10-0-84-81 cluster1 13.57.241.20 yes libreswan 172.128.0.0/16, 10.128.0.0/16 connected 20.819299ms
Cluster "cluster-8rfq6"
⚠ Submariner is not installed
```
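Beyond `subctl show`, subctl also ships an end-to-end verification suite. The invocation below is a sketch: with the v0.10 series it takes two kubeconfig paths (the file names here are examples), and the flags may differ in other releases.
```
# Run the cross-cluster connectivity tests between cluster1 and cluster2
subctl verify ./kubeconfig-cluster1 ./kubeconfig-cluster2 --only connectivity --verbose
```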
## **3. Deploy an application on two OpenShift clusters**
Deploy "Guestbook" [application](https://github.com/skeeey/acm-demo-app#troubleshooting) following this [guide](https://cloud.redhat.com/blog/connecting-managed-clusters-with-submariner-in-red-hat-advanced-cluster-management-for-kubernetes)

!!!Make sure to replace the route's host in guestbook/route.yaml with your own cluster name and OCP domain.
```
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: guestbook
  namespace: guestbook
spec:
  host: guestbook.apps.<your-ocp-cluster-name>.<your-ocp-cluster-domain>
  port:
    targetPort: 80
  to:
    kind: Service
    name: guestbook
    weight: 100
```
Complete the following steps to deploy your application:
1. Log in to the Red Hat Advanced Cluster Management hub cluster console.
1. From the navigation menu, navigate to Manage applications.
1. On the Applications page, click Create application.
1. Enter the application name and namespace.
1. Select Git repository.
1. Enter the application Git URL. For this example, it is https://github.com/skeeey/acm-demo-app.
1. Select the main branch and the guestbook path.
1. Enter the managed cluster label name=cluster1. This selects the managed cluster cluster1 to deploy the application frontend to.
1. Repeat steps 2-6.
1. Select the main branch and the redis-leader path.
1. Enter the managed cluster label name=cluster1. This selects the managed cluster cluster1 to deploy the redis-leader service to.
1. Repeat steps 2-6.
1. Select the main branch and the redis-follower path.
1. Enter the managed cluster label name=cluster2. This selects the managed cluster cluster2 to deploy the redis-follower service to.
Apply "scc" to fix the guestbook permission problem
```
cat << EOF | oc apply -f -
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: guestbook-demo-scc
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- '*'
allowedUnsafeSysctls:
- '*'
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups:
- system:cluster-admins
- system:nodes
- system:masters
priority: 1
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:guestbook:default
volumes:
- '*'
EOF
```
"Service" in guestbook project in cluster1

"Service" in guestbook project in cluster2

## **4. What happens in Submariner? Let's check how pods communicate using ServiceExport**
Investigate the status of our ServiceExport. The following output is from redis-follower in cluster2. Check for the message "Service was successfully synced to the broker"; this shows the ServiceExport is working fine.
```
[lab-user@bastion subscription]$ oc get serviceexport -o yaml
apiVersion: v1
items:
- apiVersion: multicluster.x-k8s.io/v1alpha1
  kind: ServiceExport
  metadata:
    annotations:
      apps.open-cluster-management.io/hosting-deployable: ggithubcom-frankingwh-acm-demo-app-ns/ggithubcom-frankingwh-acm-demo-app-ServiceExport-redis-follower
      apps.open-cluster-management.io/hosting-subscription: guestbook/redis-follower-subscription-1
      apps.open-cluster-management.io/reconcile-option: merge
      apps.open-cluster-management.io/sync-source: subgbk8s-guestbook/redis-follower-subscription-1
    creationTimestamp: "2021-10-26T07:09:11Z"
    generation: 1
    labels:
      app.kubernetes.io/part-of: redis-follower
    name: redis-follower
    namespace: guestbook
    ownerReferences:
    - apiVersion: v1
      kind: Subscription
      name: redis-follower-subscription-1
      uid: b9810eda-fbf0-4e90-a7b7-6ec15061d29c
    resourceVersion: "195979"
    uid: bf58439a-6941-43a2-a10a-11cb36c36f87
  status:
    conditions:
    - lastTransitionTime: "2021-10-26T07:09:11Z"
      message: Awaiting sync of the ServiceImport to the broker
      reason: AwaitingSync
      status: "False"
      type: Valid
    - lastTransitionTime: "2021-10-26T07:09:11Z"
      message: Service was successfully synced to the broker
      reason: ""
      status: "True"
      type: Valid
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
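In this lab the ServiceExport objects are delivered by the ACM subscription (see the annotations above). If you ever need to export a Service by hand, a minimal sketch looks like this; recent subctl releases also offer `subctl export service`, though whether it is available depends on your version.
```
# Export the redis-follower Service to the cluster set (run on the cluster that owns the Service)
cat << EOF | oc apply -f -
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: redis-follower
  namespace: guestbook
EOF

# Alternative (if your subctl version includes the export subcommand):
# subctl export service redis-follower --namespace guestbook
```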
When we generate a "ServiceExport", we create an endpoint which look like:
<app name>.<namespace>.svc.clusterset.local:<service port>.
In this case, the "app name" is redis-follower, namespace is guestbook and the service port is 6379. So we can launch a container in cluster 2 to test the connection in local network (10.129.0.0/16). (you cannot test it in bastion host as it is on a different network)
```
[lab-user@bastion subscription]$ oc -n default run submariner-test --rm -ti --image quay.io/submariner/nettest -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-5.0# curl redis-follower.guestbook.svc.clusterset.local:6379
curl: (1) Received HTTP/0.9 when not allowed
bash-5.0#
bash-5.0# curl redis-leader.guestbook.svc.clusterset.local:6379
curl: (1) Received HTTP/0.9 when not allowed
bash-5.0#
```
The "Received HTTP/0.9" error from curl is expected: Redis is not an HTTP server, but getting a response at all confirms that the cross-cluster connection works. Go to the Guestbook URL and test it :)

**Reference:**
* https://cloud.redhat.com/blog/connecting-managed-clusters-with-submariner-in-red-hat-advanced-cluster-management-for-kubernetes
* https://rcarrata.github.io/openshift/rhacm-submariner-2/
* https://github.com/skeeey/acm-demo-app