---
tags: Discussion
---
# KubeConfig
[TOC]
## Kubeconfig differences in use and lifecycle
- The ability to ingest dynamically-generated kubeconfigs into
the document set will make bare metal and public cloud kubeconfig
representations look more similar
- Any provider-specific actions, like extracting the kubeconfig, can be formed into generic container executor phases
- Same goes for any "custom waiting" that needs to loop
- verify_hwcc_profiles.sh -- testing-only script
- Path forward: form into a generic executor container
## GET Credentials Approach
* Cloud providers offer a "get credentials" flow, and we have `airshipctl cluster get kubeconfig <cluster>`
* This is a convenient way to get the kubeconfig; however, it retrieves it from the live cluster
* Should or can we integrate with the public cloud notion? Can we use this?
* After we create the cloud provider cluster, which stores its kubeconfig in the management cluster, we can have the provider store the data
* Then we can rely on the cloud persistence infrastructure to hold the kubeconfig structure
* The remaining issue is persistence on the filesystem
* How would the lifecycle work?
* Phase that creates a cluster
* Post Phase that can *capture the kubeconfig* from that cluster
* The implementation of this post phase is TBD
* Can be a container with some scripted/coded logic to do what is needed
* Can be a new feature where Phases can be defined as a ***Post Executor Action*** or Task.
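The "Post Executor Action" idea above could be sketched as a hook that runs after a phase's executor completes. All names here (`PostAction`, `RunPhase`, `CaptureKubeconfig`) are hypothetical, since the implementation is explicitly TBD; the fake `Store` callback stands in for whatever persistence mechanism is chosen.

```golang
// Hypothetical sketch of a "Post Executor Action": a hook that runs after a
// phase finishes, e.g. to capture a kubeconfig. None of these names exist
// in airshipctl today.
package main

import "fmt"

// PostAction runs after a phase's executor completes.
type PostAction interface {
	Run(clusterName string) error
}

// CaptureKubeconfig is an illustrative post action that hands a kubeconfig
// to a pluggable Store callback (filesystem, secret store, etc.).
type CaptureKubeconfig struct {
	Store func(cluster string, kubeconfig []byte) error
}

func (c CaptureKubeconfig) Run(cluster string) error {
	// A real implementation would query the provider or parent cluster;
	// here we fake the retrieved kubeconfig.
	kubeconfig := []byte("apiVersion: v1\nkind: Config\n")
	return c.Store(cluster, kubeconfig)
}

// RunPhase runs the phase body, then any post actions in order.
func RunPhase(cluster string, body func() error, post ...PostAction) error {
	if err := body(); err != nil {
		return err
	}
	for _, p := range post {
		if err := p.Run(cluster); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	stored := map[string][]byte{}
	capture := CaptureKubeconfig{Store: func(c string, kc []byte) error {
		stored[c] = kc
		return nil
	}}
	err := RunPhase("target", func() error { return nil }, capture)
	fmt.Println(err, len(stored["target"]) > 0)
}
```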
:::warning
* Phase to Store/Persist the kubeconfig
    * Cloud could leverage the provider's persistence storage
    * For BM, what does this mean?
        * Store in the filesystem!!!
        * Store in an external system, such as HashiCorp/Git, etc.? Does airship facilitate this?
:::
:::danger
The lifecycle of the ephemeral cluster when dealing with Cloud Providers introduces a challenge
for Plan Run: we will lose the dynamic definition of the target/mgmt cluster after the ephemeral cluster is gone.
```golang
// Cluster uniquely identifies a cluster and its parent cluster
type Cluster struct {
	// Parent is a key in ClusterMap.Map that identifies the name of the parent (management) cluster
	Parent string `json:"parent,omitempty"`
	// DynamicKubeConfig allows getting the kubeconfig from the parent cluster instead of
	// expecting it in the document bundle. The parent's kubeconfig will be used to retrieve it.
	DynamicKubeConfig bool `json:"dynamicKubeConf,omitempty"`
	// KubeconfigContext is the context in the kubeconfig; it defaults to the ClusterMap key
	KubeconfigContext string `json:"kubeconfigContext,omitempty"`
	// ClusterAPIRef references the Cluster API cluster resources
	ClusterAPIRef ClusterAPIRef `json:"clusterAPIRef,omitempty"`
}
```
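When `DynamicKubeConfig` is set, the lookup against the parent cluster could lean on the Cluster API convention of storing a workload cluster's kubeconfig in a Secret named `<cluster>-kubeconfig` under the key `value`. A minimal sketch, assuming that convention (the helper names are illustrative, not airshipctl code):

```golang
// Sketch of the Cluster API kubeconfig-secret convention that a
// DynamicKubeConfig lookup could rely on. Helper names are hypothetical.
package main

import "fmt"

// KubeconfigSecretName returns the conventional Cluster API secret name
// for a workload cluster's kubeconfig.
func KubeconfigSecretName(cluster string) string {
	return cluster + "-kubeconfig"
}

// KubeconfigFromSecret extracts the kubeconfig bytes from secret data, as
// would be returned by a Secrets Get call against the parent cluster.
func KubeconfigFromSecret(data map[string][]byte) ([]byte, error) {
	kc, ok := data["value"]
	if !ok {
		return nil, fmt.Errorf("secret has no %q key", "value")
	}
	return kc, nil
}

func main() {
	name := KubeconfigSecretName("target-cluster")
	kc, err := KubeconfigFromSecret(map[string][]byte{"value": []byte("apiVersion: v1\n")})
	fmt.Println(name, len(kc), err)
}
```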
The other issue is the lack of a generic mechanism for obtaining the cloud provider's ephemeral endpoint.
We might have to persist the cluster secret beyond the ephemeral/kind cluster.
Possible Paths:
* Exclude Ephemeral steps from the Plan; this implies explicit phase-by-phase execution and secret/kubeconfig extraction
* An explicit Phase for Ephemeral secret extraction that can somehow consume explicit inputs
:::
## KNOWN ISSUES or DIVERGENCES between Cloud and BM
### kubeconfig management:
* We are programmatically specifying the kubeconfig for bare metal, vs. cloud providers where the kubeconfig is generated
* Discussion was to try to merge the kubeconfigs into a single entity
* Airshipctl will not be responsible for persisting beyond the filesystem
* Airshipctl will just make sure that the kubeconfig in manifests, or explicitly stated outside, is updated
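The "merge into a single entity" idea could follow kubeconfig merge semantics, where the earlier source wins on name conflicts (as with client-go's `KUBECONFIG` loading precedence). The toy model below only illustrates that rule; the `Kubeconfig` type is drastically simplified and not a real kubeconfig schema.

```golang
// Illustrative toy model of merging kubeconfigs into a single entity.
// Real tooling would use client-go's clientcmd merge rules; this only
// demonstrates "earlier source wins" semantics on named entries.
package main

import "fmt"

// Kubeconfig is a drastically simplified model: named clusters, contexts,
// and users mapped to opaque settings strings.
type Kubeconfig struct {
	Clusters map[string]string
	Contexts map[string]string
	Users    map[string]string
}

// Merge combines configs; on a name conflict the earlier config wins.
func Merge(configs ...Kubeconfig) Kubeconfig {
	out := Kubeconfig{
		Clusters: map[string]string{},
		Contexts: map[string]string{},
		Users:    map[string]string{},
	}
	merge := func(dst, src map[string]string) {
		for k, v := range src {
			if _, ok := dst[k]; !ok {
				dst[k] = v
			}
		}
	}
	for _, c := range configs {
		merge(out.Clusters, c.Clusters)
		merge(out.Contexts, c.Contexts)
		merge(out.Users, c.Users)
	}
	return out
}

func main() {
	bm := Kubeconfig{Clusters: map[string]string{"ephemeral": "https://10.0.0.1"}}
	cloud := Kubeconfig{Clusters: map[string]string{"target": "https://1.2.3.4"}}
	merged := Merge(bm, cloud)
	fmt.Println(len(merged.Clusters))
}
```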
## Separate Phases or Test
hwcc: not needed for cloud providers other than Metal3, since it is tied to the BMO/Ironic mechanisms.