---
tags: Discussion
---

# Workload Tenant Cluster Lifecycle

### Question: Airshipctl persisting data

**Kubeconfig** for the target cluster. The problem is especially vivid during public cloud deployments (CAPZ, CAPG), but it exists in BMH deployments as well:

- **BMH deployments**: We deploy the target cluster via the ephemeral cluster, so where do we store the kubeconfig for the target cluster? You can get a secret containing the target cluster kubeconfig from the ephemeral cluster, but once the ephemeral cluster is gone, the kubeconfig is gone with it. Is it a manual job to commit it into the document model, or should we require external storage to save it and teach airshipctl to use it when needed? In the current scenario we predefine the certificates and keys, and we **DO** know the IP address of the server, but that requires that the user **MUST** generate certs beforehand, which is manual labor.
- **Public clouds and CAPD**: Even when we can predefine the certs and keys, we can't predict the IP address of the API server, which is part of the kubeconfig. We get it from the parent ephemeral cluster (KIND), but once you tear that down, you no longer have the IP, and airshipctl has no means to connect to the target cluster. Of course, we do have the ability to supply our own kubeconfig via a flag, and airshipctl will use it to connect. But that kubeconfig is not persisted anywhere. Should we simply rely on the user and expect them to commit the kubeconfig to the manifest? Should that be automated, or should this be a general bootstrap phase that ends with the kubeconfig being committed to persistent storage automatically so it can be reused later?

:::info
**PREREQUISITE**:
* Remember that the kubeconfig will be a single file at the site level that contains the information for all clusters within the site, such as the Ephemeral, Target, and any Workload clusters that are created.
* The kubeconfig will be located in the same directory as the Site Phases entry point.

The Phase Plan is impacted; details in the phase plan discussion here: https://hackmd.io/PNo8Y_rYSPWrNmzGbsnOqw

**LIFECYCLE**

1. `airshipctl phase run` ***workload_clusterA_control_phase***

   Should be using the Site kubeconfig, which contains all the clusters within the site.

   ```yaml
   apiVersion: airshipit.org/v1alpha1
   kind: Phase
   metadata:
     name: control
     clusterName: [THIS SHOULD BE THE TARGET/MANAGEMENT CLUSTER]
   config:
     executorRef:
       apiVersion: airshipit.org/v1alpha1
       kind: KubernetesApply
       name: kubernetes-apply
     documentEntryPoint: ..../workload_clusterA/...
   ```

2. `airshipctl phase run` ***workload_clusterA_workers_phase***

   Uses a different Phase document, but still points to the Target/Management cluster.

3. `airshipctl cluster import --kubeconfig [flags]` ***\<cluster name | workload cluster A>***
   * This command will retrieve the secret for the cluster from the management/target cluster.
   * Interact with the kubeconfig of the site.
   * Add the cluster / contexts / users to the site kubeconfig.

4. `airshipctl phase run` ***workload_clusterA_workloads***

   ```yaml
   apiVersion: airshipit.org/v1alpha1
   kind: Phase
   metadata:
     name: workloads
     clusterName: [THIS SHOULD BE THE workload_clusterA CLUSTER]
   config:
     executorRef:
       apiVersion: airshipit.org/v1alpha1
       kind: KubernetesApply
       name: kubernetes-apply
     documentEntryPoint: ..../workload_clusterA/...
   ```

   Because of the phase `clusterName` it will use the appropriate credentials, endpoints, etc. (see the site kubeconfig sketch below). After this command the Tenant Cluster services managed by the platform will be deployed and configured, such as the CNI (Calico), storage configurations, and possibly other services such as MetalLB.
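   For illustration, a minimal sketch of what the single site-level kubeconfig could look like after step 3 has imported workload cluster A. This is an assumption about shape only, using the standard kubeconfig format; all cluster names, endpoints, and credential data are placeholders:

   ```yaml
   apiVersion: v1
   kind: Config
   clusters:
     - name: target
       cluster:
         server: https://10.23.25.102:6443
         certificate-authority-data: <base64 CA for the target cluster>
     - name: workload_clusterA              # entry added by `airshipctl cluster import`
       cluster:
         server: https://10.23.25.201:6443
         certificate-authority-data: <base64 CA for workload cluster A>
   contexts:
     - name: target
       context:
         cluster: target
         user: target-admin
     - name: workload_clusterA              # phases with clusterName: workload_clusterA resolve here
       context:
         cluster: workload_clusterA
         user: workload_clusterA-admin
   users:
     - name: target-admin
       user:
         client-certificate-data: <base64 cert>
         client-key-data: <base64 key>
     - name: workload_clusterA-admin
       user:
         client-certificate-data: <base64 cert>
         client-key-data: <base64 key>
   current-context: target
   ```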
5. At this point the Workload or Tenant cluster is ready to be handed over. This is a manual procedure for now that takes care of:
   * Persisting the updated kubeconfig for the site; someone needs to do some git magic to update the repo.
   * Essentially, pipelines will need to make sure they can persist this when appropriate. This implies possibly persisting data beyond the life of the artifacts created by the pipeline.
   * Handing over the Workload Cluster specific kubeconfig.
:::

## What about CAPZ and CAPD

If CI/CD pipelines leverage the new ***airshipctl cluster import*** command described above, the kubeconfig persistence issue described at the top of this document will be managed.
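As background for why the import command can work across providers: Cluster API based providers (CAPZ, CAPD, CAPG, and the baremetal flow alike) publish the workload cluster's kubeconfig as a secret on the management cluster, conventionally named `<cluster-name>-kubeconfig`. A sketch of that secret, with placeholder names and data, is below; this is the object ***airshipctl cluster import*** would read and merge into the site kubeconfig:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: workload-cluster-a-kubeconfig   # CAPI convention: <cluster-name>-kubeconfig
  namespace: default                    # namespace of the corresponding Cluster resource
type: cluster.x-k8s.io/secret
data:
  value: <base64-encoded kubeconfig for workload cluster A>
```

Until such a command exists, the same data can be pulled manually, e.g. `kubectl get secret workload-cluster-a-kubeconfig -o jsonpath='{.data.value}' | base64 -d`; the point of step 3, and of the pipeline guidance in step 5, is that this result must land somewhere that outlives the ephemeral cluster.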