
Cluster API - Running e2e tests & controllers locally with IntelliJ

This document contains some additional notes for the corresponding Zoom session (link, passcode: N20+nkQ%).

Overall we will:

  • Run the CAPD-based quickstart e2e test via IntelliJ
  • Run the capi-controller via IntelliJ and proxy webhook requests from the kind cluster to the locally running controller

Note:

  • There is already documentation in the CAPI book about how to run e2e tests via an IDE. This doc is a concrete walkthrough and additionally contains instructions on how to run a controller locally.

Prerequisites

Based on the commands used below, roughly the following is required:

  • Go, Docker, kind, kubectl and jq
  • Telepresence
  • IntelliJ with the Go plugin (or GoLand)
  • A local checkout of cluster-api (referred to as $CAPI_HOME below)
  • Optional: k8s-ctx-import

Running the quickstart e2e test locally

(The following is extracted from $CAPI_HOME/scripts/ci-e2e.sh)

Building the images:

cd $CAPI_HOME
export REGISTRY=gcr.io/k8s-staging-cluster-api
export PULL_POLICY=IfNotPresent

make docker-build
make -C test/infrastructure/docker docker-build

Note: This is only required if local builds of the controllers should be used; by default, the daily images published under the main tag are used.
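
A quick way to verify that the images exist locally (optional):

docker images | grep k8s-staging-cluster-api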

Generating cluster-templates:

make -C test/e2e cluster-templates
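
The generated templates should now show up under test/e2e/data/infrastructure-docker (the exact directory layout may vary between CAPI versions):

ls -R test/e2e/data/infrastructure-docker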

Run the e2e test via Debug in IntelliJ and place a breakpoint here

Note: The following Run/Debug configuration is used:

<component name="ProjectRunConfigurationManager">
  <configuration default="false" name="capi e2e: quickstart" type="GoTestRunConfiguration" factoryName="Go Test" folderName="test/e2e">
    <module name="cluster-api" />
    <working_directory value="$PROJECT_DIR$/test/e2e" />
    <parameters value="-e2e.config=$PROJECT_DIR$/test/e2e/config/docker.yaml -ginkgo.focus=&quot;\[PR-Blocking\]&quot; -ginkgo.v=true" />
    <envs>
      <env name="ARTIFACTS" value="$PROJECT_DIR$/_artifacts" />
    </envs>
    <kind value="PACKAGE" />
    <package value="sigs.k8s.io/cluster-api/test/e2e" />
    <directory value="$PROJECT_DIR$" />
    <filePath value="$PROJECT_DIR$" />
    <framework value="gotest" />
    <pattern value="^\QTestE2E\E$" />
    <method v="2" />
  </configuration>
</component>

Note: To run another test, adjust the ginkgo.focus parameter.
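
Note: Outside of IntelliJ, the run configuration above corresponds roughly to the following invocation (an approximation, not taken from ci-e2e.sh; the e2e build tag and exact flags may differ between CAPI versions):

cd $CAPI_HOME/test/e2e
ARTIFACTS=$CAPI_HOME/_artifacts go test -v -tags=e2e -run '^TestE2E$' . -args \
  -e2e.config=$CAPI_HOME/test/e2e/config/docker.yaml \
  -ginkgo.focus='\[PR-Blocking\]' -ginkgo.v=true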

We now have a Management and a Workload cluster running.
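
This can be sanity-checked e.g. via:

docker ps             # the nodes of both clusters run as containers
kind get clusters     # the management cluster (the CAPD workload cluster usually shows up here too)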

Running the capi-controller locally

Now we will use Telepresence to run the capi-controller locally. Telepresence will be used to proxy the webhook traffic to our local controller (roughly like this).

Import the kubeconfig of the kind Management cluster

kind get kubeconfig --name=test-q5tlzb | k8s-ctx-import
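
Note: The cluster name (test-q5tlzb above) is generated; use kind get clusters to find yours. If k8s-ctx-import is not installed, exporting the kubeconfig to a file works as well:

kind get kubeconfig --name=test-q5tlzb > /tmp/mgmt.kubeconfig
export KUBECONFIG=/tmp/mgmt.kubeconfig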

Connect telepresence to the cluster (this also deploys the Telepresence traffic manager):

telepresence connect
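
The connection can be verified via:

telepresence status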

Disable the currently deployed capi-controller and its probes:

controller=capi
kubectl -n ${controller}-system patch deployment ${controller}-controller-manager --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/readinessProbe"},{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"},{"op": "replace", "value": "k8s.gcr.io/pause:3.5", "path": "/spec/template/spec/containers/0/image"},{"op": "replace", "value": ["/pause"], "path": "/spec/template/spec/containers/0/command"}]'
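
The patch removes the probes and swaps the controller image for a pause image, so the in-cluster controller stops reconciling while our local controller takes over. To verify:

kubectl -n ${controller}-system get deployment ${controller}-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'
# should print k8s.gcr.io/pause:3.5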

Intercept webhook traffic via telepresence

telepresence intercept -n ${controller}-system ${controller}-controller-manager --port 9443
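
The intercept can be verified via:

telepresence list -n ${controller}-system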

Get the webhook certificates

mkdir -p /tmp/webhook-cert
kubectl -n ${controller}-system get secret ${controller}-webhook-service-cert -o json | jq '.data."tls.crt"' -r | base64 -d > /tmp/webhook-cert/tls.crt
kubectl -n ${controller}-system get secret ${controller}-webhook-service-cert -o json | jq '.data."tls.key"' -r | base64 -d > /tmp/webhook-cert/tls.key
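
Optionally, check that the extracted certificate looks sane:

openssl x509 -in /tmp/webhook-cert/tls.crt -noout -subject -enddate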

Start the controller via IntelliJ

Note: The following Run/Debug configuration is used:

<component name="ProjectRunConfigurationManager">
  <configuration default="false" name="tele: capi controller /tmp/webhook-cert" type="GoApplicationRunConfiguration" factoryName="Go Application" folderName="CAPI">
    <module name="cluster-api" />
    <working_directory value="$PROJECT_DIR$/" />
    <parameters value="--webhook-cert-dir=/tmp/webhook-cert --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true" />
    <envs>
      <env name="CAPI_MAC_FIX_REST_CONFIG" value="true" />
    </envs>
    <kind value="PACKAGE" />
    <package value="sigs.k8s.io/cluster-api" />
    <directory value="$PROJECT_DIR$" />
    <filePath value="$PROJECT_DIR$/main.go" />
    <method v="2" />
  </configuration>
</component>
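
Note: Outside of IntelliJ, this corresponds roughly to (flags taken from the run configuration above):

cd $CAPI_HOME
CAPI_MAC_FIX_REST_CONFIG=true go run . \
  --webhook-cert-dir=/tmp/webhook-cert \
  --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true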

Now you can play around by setting breakpoints in the controller and modifying the CAPI resources!
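
For example, a breakpoint in the MachineDeployment webhooks can be hit by scaling a MachineDeployment of the workload cluster (name and namespace are placeholders, adjust them to your cluster):

kubectl get machinedeployments -A
kubectl -n default scale machinedeployment <name> --replicas=2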

Running the capi-controller locally (Part II - macOS hack)

At least on macOS, the capi-controller will frequently log errors because it cannot reach the API servers of the workload clusters. That's more or less the same issue as documented here, just this time inside the CAPI controller (tl;dr the workload kubeconfig points to an address that is only reachable from inside the Docker network, so it is not valid when used locally).

To work around this, the Run/Debug configuration above sets the CAPI_MAC_FIX_REST_CONFIG env var and the following patch must be applied:

Index: controllers/remote/cluster.go
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/controllers/remote/cluster.go b/controllers/remote/cluster.go
--- a/controllers/remote/cluster.go	(revision HEAD)
+++ b/controllers/remote/cluster.go	(revision Staged)
@@ -18,6 +18,9 @@controllers/remote/cluster.go
 
 import (
 	"context"
+	"fmt"
+	"os"
+	"strings"
 	"time"
 
 	"github.com/pkg/errors"
@@ -59,8 +62,33 @@
 		return nil, errors.Wrapf(err, "failed to create REST configuration for Cluster %s/%s", cluster.Namespace, cluster.Name)
 	}
 
+	if os.Getenv("CAPI_MAC_FIX_REST_CONFIG") != "" {
+		lbContainerName := cluster.Name + "-lb"
+		port, err := findLoadBalancerPort(ctx, lbContainerName)
+		if err != nil {
+			return nil, errors.Wrapf(err, "failed to get lb port")
+		}
+		restConfig.Host = fmt.Sprintf("https://127.0.0.1:%s", port)
+		restConfig.Insecure = true
+		restConfig.CAData = nil
+	}
+
 	restConfig.UserAgent = DefaultClusterAPIUserAgent(sourceName)
 	restConfig.Timeout = defaultClientTimeout
 
 	return restConfig, nil
 }
+
+func findLoadBalancerPort(ctx context.Context, lbContainerName string) (string, error) {
+	portFormat := `{{index (index (index .NetworkSettings.Ports "6443/tcp") 0) "HostPort"}}`
+	getPathCmd := NewCommand(
+		WithCommand("docker"),
+		WithArgs("inspect", lbContainerName, "--format", portFormat),
+	)
+	stdout, _, err := getPathCmd.Run(ctx)
+	if err != nil {
+		return "", err
+	}
+
+	return strings.TrimSpace(string(stdout)), nil
+}
Index: controllers/remote/command.go
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/controllers/remote/command.go b/controllers/remote/command.go
new file mode 100644
--- /dev/null	(revision Staged)
+++ b/controllers/remote/command.go	(revision Staged)
@@ -0,0 +1,101 @@
+/*
+Copyright 2019 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Package remote implements command execution functionality.
+package remote
+
+import (
+	"context"
+	"io"
+	"os/exec"
+
+	"github.com/pkg/errors"
+)
+
+// Command wraps exec.Command with specific functionality.
+// This differentiates itself from the standard library by always collecting stdout and stderr.
+// Command improves the UX of exec.Command for our specific use case.
+type Command struct {
+	Cmd   string
+	Args  []string
+	Stdin io.Reader
+}
+
+// Option is a functional option type that modifies a Command.
+type Option func(*Command)
+
+// NewCommand returns a configured Command.
+func NewCommand(opts ...Option) *Command {
+	cmd := &Command{
+		Stdin: nil,
+	}
+	for _, option := range opts {
+		option(cmd)
+	}
+	return cmd
+}
+
+// WithStdin sets up the command to read from this io.Reader.
+func WithStdin(stdin io.Reader) Option {
+	return func(cmd *Command) {
+		cmd.Stdin = stdin
+	}
+}
+
+// WithCommand defines the command to run such as `kubectl` or `kind`.
+func WithCommand(command string) Option {
+	return func(cmd *Command) {
+		cmd.Cmd = command
+	}
+}
+
+// WithArgs sets the arguments for the command such as `get pods -n kube-system` to the command `kubectl`.
+func WithArgs(args ...string) Option {
+	return func(cmd *Command) {
+		cmd.Args = args
+	}
+}
+
+// Run executes the command and returns stdout, stderr and the error if there is any.
+func (c *Command) Run(ctx context.Context) ([]byte, []byte, error) {
+	cmd := exec.CommandContext(ctx, c.Cmd, c.Args...) //nolint:gosec
+	if c.Stdin != nil {
+		cmd.Stdin = c.Stdin
+	}
+	stdout, err := cmd.StdoutPipe()
+	if err != nil {
+		return nil, nil, errors.WithStack(err)
+	}
+	stderr, err := cmd.StderrPipe()
+	if err != nil {
+		return nil, nil, errors.WithStack(err)
+	}
+	if err := cmd.Start(); err != nil {
+		return nil, nil, errors.WithStack(err)
+	}
+	output, err := io.ReadAll(stdout)
+	if err != nil {
+		return nil, nil, errors.WithStack(err)
+	}
+	errout, err := io.ReadAll(stderr)
+	if err != nil {
+		return nil, nil, errors.WithStack(err)
+	}
+	if err := cmd.Wait(); err != nil {
+		return output, errout, errors.WithStack(err)
+	}
+	return output, errout, nil
+}
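
The patch can be applied via IntelliJ (Git > Patch > Apply Patch…) or, assuming it is saved as /tmp/mac-fix.patch (and the IDEA-specific header lines are removed first, since git may not accept them), via:

cd $CAPI_HOME
git apply /tmp/mac-fix.patch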