# kctrl dev fails on GKE
###### tags: `kctrl`
## Error:
```
roaggarwal@roaggarwal-a01 cli % ./kctrl dev -f package-resources.yml -l
Target cluster 'https://35.184.95.247' (nodes: gke-ra-cluster-dev-test-default-pool-621dc761-3bvt, 2+)
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  annotations:
    kctrl.carvel.dev/local-fetch-0: .
  creationTimestamp: null
  name: psql.azure.references.services.apps.tanzu.vmware.com
  namespace: default
spec:
  packageRef:
    refName: psql.azure.references.services.apps.tanzu.vmware.com
    versionSelection:
      constraints: 0.0.0
  serviceAccountName: development
status:
  conditions: null
  friendlyDescription: ""
  observedGeneration: 0
Reconciling in-memory app/psql.azure.references.services.apps.tanzu.vmware.com (namespace: default) ...
==> Executing /usr/local/bin/vendir [vendir sync -f - --lock-file /dev/null]
==> Finished executing /usr/local/bin/vendir
==> Executing /usr/local/bin/ytt [ytt -f /var/folders/z1/czwbshk524j7t12s3tqhc_100000gn/T/kapp-controller-fetch-template-deploy1808020159/0/gke-kapp.yml]
==> Finished executing /usr/local/bin/ytt
==> Executing /usr/local/bin/kbld [kbld -f - --build=false]
==> Finished executing /usr/local/bin/kbld
==> Executing /usr/local/bin/kapp [kapp deploy --prev-app psql.azure.references.services.apps.tanzu.vmware.com-ctrl -f - --app-changes-max-to-keep=5 --app psql.azure.references.services.apps.tanzu.vmware.com.app --kubeconfig=/dev/null --yes]
==> Finished executing /usr/local/bin/kapp
4:40:33PM: Fetch started (2m ago)
4:40:36PM: Fetching (2m ago)
| apiVersion: vendir.k14s.io/v1alpha1
| directories:
| - contents:
| - directory: {}
| path: .
| path: "0"
| kind: LockConfig
|
4:40:36PM: Fetch succeeded (2m ago)
4:40:37PM: Template succeeded (2m ago)
4:40:37PM: Deploy started (2m ago)
4:42:39PM: Deploy failed
| kapp: Error: yaml: line 5: mapping values are not allowed in this context
| Deploying: Error (see .status.usefulErrorMessage for details)
App tailing error: Reconciling app: Deploy failed
Succeeded
```
The same command works on Azure and minikube.
It also works on GKE when going through the kapp-controller installed on the cluster (i.e. a regular `kctrl pkg install`):
```
roaggarwal@roaggarwal-a01 cli % kctrl pkg install -i gke-issue -p psql.azure.references.services.apps.tanzu.vmware.com -v 1.0.0 -n kctrl-test
Target cluster 'https://35.184.95.247' (nodes: gke-ra-cluster-dev-test-default-pool-621dc761-3bvt, 2+)
3:26:13PM: Creating service account 'gke-issue-kctrl-test-sa'
3:26:13PM: Creating cluster admin role 'gke-issue-kctrl-test-cluster-role'
3:26:13PM: Creating cluster role binding 'gke-issue-kctrl-test-cluster-rolebinding'
3:26:14PM: Creating package install resource
3:26:14PM: Waiting for PackageInstall reconciliation for 'gke-issue'
3:26:14PM: Fetch started (1s ago)
3:26:15PM: Fetching
| apiVersion: vendir.k14s.io/v1alpha1
| directories:
| - contents:
| - imgpkgBundle:
| image: index.docker.io/rohitagg2020/gke-issue@sha256:cc9352d99816e785ac19c4633d7e34eb33d08323018798a487d0999eb20edc0d
| path: .
| path: "0"
| kind: LockConfig
|
3:26:15PM: Fetch succeeded
3:26:15PM: Template succeeded
3:26:15PM: Deploy started
3:26:16PM: Deploying
| Target cluster 'https://10.23.96.1:443' (nodes: gke-ra-cluster-dev-test-default-pool-621dc761-3bvt, 2+)
| Changes
| Namespace Name Kind Age Op Op st. Wait to Rs Ri
| (cluster) psql-000 Namespace - create - reconcile - -
| Op: 1 create, 0 delete, 0 update, 0 noop, 0 exists
| Wait to: 1 reconcile, 0 delete, 0 noop
| 9:56:15AM: ---- applying 1 changes [0/1 done] ----
| 9:56:15AM: create namespace/psql-000 (v1) cluster
| 9:56:15AM: ---- waiting on 1 changes [0/1 done] ----
| 9:56:15AM: ok: reconcile namespace/psql-000 (v1) cluster
| 9:56:15AM: ---- applying complete [1/1 done] ----
| 9:56:15AM: ---- waiting complete [1/1 done] ----
| Succeeded
3:26:16PM: Deploy succeeded
Succeeded
```
## Issue:
When running `kctrl dev`, the cluster URL we receive has no port, e.g. `https://35.184.95.247`. After parsing, the port is empty. kapp then unconditionally concatenates host and port [here](https://github.com/vmware-tanzu/carvel-kapp/blob/51d6bbdc2d8bfe7a65bad261de668113f7832d08/pkg/kapp/cmd/core/config_factory.go#L116), producing a string like `https://35.184.95.247:`. kapp is therefore unable to connect to the cluster, and the deploy fails with the error above.
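The failure mode can be reproduced in isolation. The sketch below mimics the parse-then-concatenate step (the function name `rebuildServerURL` is hypothetical; see the linked kapp line for the real code):

```go
package main

import (
	"fmt"
	"net/url"
)

// rebuildServerURL mimics the unconditional host:port concatenation.
// u.Port() returns "" when the URL carries no explicit port, as on GKE,
// so the rebuilt address ends with a dangling ":".
func rebuildServerURL(server string) string {
	u, err := url.Parse(server)
	if err != nil {
		return server
	}
	return fmt.Sprintf("%s://%s:%s", u.Scheme, u.Hostname(), u.Port())
}

func main() {
	// GKE hands back the API server address without a port.
	fmt.Println(rebuildServerURL("https://35.184.95.247")) // https://35.184.95.247:
}
```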
## Solution 1:
Change `kapp` so that when no `Port` is provided, it relies on the `Host` alone.
## Solution 2:
Change the `hackyConfigureKubernetesDst` function of `kctrl dev` so that when no `Port` is provided, it relies on the `Host` alone.
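Solutions 1 and 2 are the same guard applied in different places. A minimal sketch (`joinServerURL` is a hypothetical name; the real change would live in kapp's `config_factory.go` or kctrl's `hackyConfigureKubernetesDst`):

```go
package main

import "fmt"

// joinServerURL rebuilds the cluster address, emitting the ":" only
// when a port was actually parsed.
func joinServerURL(scheme, host, port string) string {
	if port == "" {
		// No port in the original URL: rely on the host alone.
		return fmt.Sprintf("%s://%s", scheme, host)
	}
	return fmt.Sprintf("%s://%s:%s", scheme, host, port)
}

func main() {
	fmt.Println(joinServerURL("https", "35.184.95.247", ""))    // https://35.184.95.247
	fmt.Println(joinServerURL("https", "35.184.95.247", "443")) // https://35.184.95.247:443
}
```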
## Solution 3:
If the port is not present, default it to `443` (HTTPS) or `80` (HTTP), based on the scheme.