**Fleet Agent Initiated Installation**

First, some context: we need to explore an alternative way of installing Fleet agents in downstream clusters. [This](https://fleet.rancher.io/cluster-registration#agent-initiated-registration) is documented in the Fleet docs.

**Motivation**

This alternative approach allows us to bootstrap a cluster and initiate the connection from the bootstrapped cluster itself, without requiring direct connectivity from the management cluster via the CAPI-generated kubeconfig. It also explores an alternative communication pattern in which Git is the main source of truth and can reach the bootstrapped cluster, while the management cluster cannot.

**How it Works**

We can achieve this using Helm, or Fleet itself (which in turn requires Helm), to install the agent inside the cluster. This approach requires some additions to the bootstrap procedure, specifically to the `postKubeadmCommands` (or the equivalent for other bootstrap providers) in the config definitions. To implement this, we can use a runtime extension hook for the ClusterClass (TopologyMutation) and append the desired command to the cluster before the bootstrap procedure runs. At that point only `kubectl` is (generally) available; there is no Helm CLI in the cluster.

**Solution**

I believe the best solution is to create a Job with the Helm CLI available (e.g. an alpine/helm image) that is applied with `kubectl` inside the post-bootstrap command and executes the Helm-based installation of the agent with all required values prepared: the API server URL, the API server certificate, and the ClusterRegistration token. This is the first step in initiating the connection. It additionally requires creating a secret with the API server certificate and mounting it into the Job so the `helm` command can access it.
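A minimal sketch of what the appended bootstrap step could look like. The image tag, file paths, and resource names here are illustrative assumptions, not prescriptive; it assumes the bootstrap provider has placed the CA certificate on disk and created a token secret earlier, and the ServiceAccount it references would need a matching Role/RoleBinding:

```shell
#!/bin/sh
# Illustrative post-bootstrap step: create the namespace, store the management
# cluster CA in a secret, and run a Job with the Helm CLI to install the agent.
# Assumes /etc/fleet/api-server-ca.crt and a fleet-agent-bootstrap secret with
# the registration token were prepared earlier in the bootstrap procedure.
set -e

kubectl create namespace cattle-fleet-system \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl -n cattle-fleet-system create secret generic fleet-api-server-ca \
  --from-file=ca.crt=/etc/fleet/api-server-ca.crt \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fleet-agent-install
  namespace: cattle-fleet-system
spec:
  backoffLimit: 4
  template:
    spec:
      # Needs a matching ServiceAccount/Role/RoleBinding (omitted here).
      serviceAccountName: fleet-agent-installer
      restartPolicy: OnFailure
      containers:
        - name: helm
          image: alpine/helm:3.14.0  # illustrative tag; preload when airgapped
          command: ["/bin/sh", "-c"]
          args:
            - >-
              helm repo add fleet https://rancher.github.io/fleet-helm-charts/ &&
              helm -n cattle-fleet-system install --wait
              --set-file apiServerCA=/ca/ca.crt
              --set apiServerURL="$API_SERVER_URL"
              --set token="$CLUSTER_REGISTRATION_TOKEN"
              fleet-agent fleet/fleet-agent
          env:
            - name: API_SERVER_URL
              value: "https://management.example.com:6443"  # illustrative value
            - name: CLUSTER_REGISTRATION_TOKEN
              valueFrom:
                secretKeyRef:
                  name: fleet-agent-bootstrap
                  key: token
          volumeMounts:
            - name: api-server-ca
              mountPath: /ca
              readOnly: true
      volumes:
        - name: api-server-ca
          secret:
            secretName: fleet-api-server-ca
EOF
```

Running Helm inside the Job means it picks up the in-cluster service account credentials automatically, which is why the RBAC objects are required.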
In this scenario, as the Fleet add-on provider executes the pre-bootstrap hook and reconciles the cluster, it prepares all required manifests, such as the cluster registration token, and provides values for the templated Helm Job installation step. These manifests can be installed with `kubectl` without issues. The agent installation command is relatively short and roughly equivalent to:

```bash
helm -n cattle-fleet-system install --create-namespace --wait \
  $CLUSTER_LABELS \
  --set-file apiServerCA="$API_SERVER_CA_FILE" \
  --set apiServerURL="$API_SERVER_URL" \
  --set token="$CLUSTER_REGISTRATION_TOKEN" \
  --set clientID="$CLIENT_ID" \
  fleet-agent fleet/fleet-agent
```

The Job will also require RBAC definitions to succeed: a ServiceAccount, Role, and RoleBinding in the `cattle-fleet-system` namespace. And as a first step, the `cattle-fleet-system` namespace itself must be created.

**Airgapped Scenarios**

For airgapped scenarios, we only need to preload the Alpine Helm image and the Fleet images into the cluster. The Job will then install the fleet agent, which requires certificates from the management cluster for reverse connectivity. These must be prepared in advance, possibly as secret content ready for use.

**Security Considerations**

We can potentially explore issuing a TokenRequest in the management cluster and using it instead of the API server CA. However, this is still untested, so we'll need to validate whether Fleet supports it.

**Alternative Solution**

Another possible approach to installing the fleet agent is to prepare the bootstrap manifests in the management cluster, similar to what is required for API server connectivity: creating a secret with certificates in the management cluster. To do this, we perform a dry-run Helm template on the management cluster and store the output in a secret. This requires additional steps during the processing of templates in the Fleet and add-on providers.
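This alternative could be sketched as follows; the secret name and output file are assumptions for illustration, and the values mirror the installation command shown earlier:

```shell
# Sketch of the alternative: render the agent manifests with `helm template`
# on the management cluster, then store the output in a secret that can later
# be applied in the child cluster with plain `kubectl apply`.
helm template fleet-agent fleet/fleet-agent \
  -n cattle-fleet-system \
  --set-file apiServerCA="$API_SERVER_CA_FILE" \
  --set apiServerURL="$API_SERVER_URL" \
  --set token="$CLUSTER_REGISTRATION_TOKEN" \
  > fleet-agent-manifests.yaml

# Illustrative secret name; the bootstrap/add-on provider would inject this
# content into the child cluster, where kubectl applies it directly.
kubectl create secret generic fleet-agent-bootstrap-manifests \
  --from-file=manifests.yaml=fleet-agent-manifests.yaml
```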
The benefit of this solution, which prepares the initial templates during the installation procedure on the management cluster, is the ability to use a plain `kubectl apply` for all the manifests in the bootstrapped child cluster. There is no need for a Job or RBAC, and no need for later cleanup.

The drawback of this alternative approach is that it doesn't handle upgrades. With the Helm installation, even through the Job, we still end up with a managed installation, allowing manual or automated upgrades with Helm, either via a similar pattern or performed manually by the user. In contrast, this approach only applies manifests without considering existing installations. It may also block Helm from later installing the same release in the same location, which is undesirable, and it is prone to breakage if Fleet changes the templated manifests used for the installation.