# Introduction to KCP

KCP is a prototype of a multi-tenant Kubernetes control plane for workloads on many clusters. It provides a generic CustomResourceDefinition (CRD) apiserver that is divided into multiple _logical clusters_ that enable multitenancy of cluster-scoped resources such as CRDs and Namespaces. Each of these logical clusters is fully isolated from the others, allowing different teams, workloads, and use cases to live side by side. [Source](https://github.com/kcp-dev/kcp/blob/main/README.md)

> Currently, the project is under heavy development, which means it is evolving every day and things are changing rapidly. If you're reading this after May 2022, it's very likely that the content is outdated.

## Why this blog?

Over the last few weeks our team at Red Hat has been playing around with KCP; we wanted to understand how KCP works and what use cases it can solve. During our spike on the KCP technology we wanted to run a transparent multi-cluster demo, but that was not ready at the time. Instead, we tried an approach where different teams inside the same organization get their own workspaces and clusters connected to KCP, so they can deploy their applications transparently.

## Terminology

The complete terminology can be found [here](https://github.com/kcp-dev/kcp/blob/main/docs/terminology.md); refer to it for the most up-to-date definitions.

|**Term**|**Description**|**Comparable in Kube**|
|--------|---------------|----------------------|
|Workspaces|Used to provide multi-tenancy. Every Workspace has its own api-resources and API endpoint. Some workspaces (Organizational) can contain other workspaces (Universal).|A cluster's API endpoint|
|Workload Cluster|A "real Kubernetes cluster": one that can run Kubernetes workloads and accepts standard Kubernetes API objects.|A cluster|

We can think of Workspaces as the way to provide isolation to different users inside a KCP cluster. One potential organization could be:

- Organization 1
  - Application 1
  - Application 2
- Organization 2
  - Application 3

In the above organization, different teams can take care of different apps on the same KCP server while remaining fully isolated from each other.
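As a preview of the commands used later in this demo, the sketch below shows how such a hierarchy could be created with the `kubectl ws` workspace plugin. The workspace names (`org-1`, `application-1`, `application-2`) are purely illustrative:

~~~sh
# Create an Organizational workspace per organization (names are examples)
kubectl ws use root
kubectl ws create org-1 --type Organization
kubectl ws use org-1
# Create a Universal workspace per application inside the organization
kubectl ws create application-1
kubectl ws create application-2
~~~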
## Compute and Workspaces Demo

The demo showcases how a KCP admin can create different workspaces and provide access to physical clusters to run workloads from KCP. We will have workspaces for two different teams; each team will have access to its own physical cluster under the hood, but both teams will consume the KCP API to create their workloads.

> **NOTE**: Demo working with commit `decced4`.

### Starting KCP

In this first part of the demo we will see how we can start KCP and the different `Workspaces` that come preconfigured out of the box.

1. Clone the KCP repository and build KCP.

    ~~~sh
    git clone https://github.com/kcp-dev/kcp.git
    cd kcp/
    git checkout decced4
    make
    export PATH=${PATH}:${PWD}/bin
    ~~~
2. Start KCP.

    > **NOTE**: You can use the `--bind-address` flag to force KCP to listen on a specific IP on your node, e.g. `kcp start --bind-address 10.19.3.4`. Otherwise, it will bind to all interfaces.

    ~~~sh
    kcp start
    ~~~
3. Export the `kubeconfig` to connect to KCP as admin.

    ~~~sh
    export KUBECONFIG=.kcp/admin.kubeconfig
    ~~~
4. Move to the root workspace.

    > **NOTE**: By default KCP creates two workspaces: `root`, and `default` inside `root`.

    ~~~sh
    kubectl ws use root
    ~~~
5. We can list the workspaces inside `root`; we will see `default`.

    ~~~sh
    kubectl get workspaces
    ~~~

    ~~~sh
    NAME      TYPE           PHASE   URL
    default   Organization   Ready   https://10.19.3.4:6443/clusters/root:default
    ~~~
6. As we mentioned earlier, a workspace is like having your own K8s API server with its own API resources. You can actually query this API like you would a regular K8s API server (an authenticated request is sketched after this list).

    ~~~sh
    curl -k https://10.19.3.4:6443/clusters/root:default/
    ~~~

    > **NOTE**: Since we're not sending any bearer token/x509 cert with our request, the request won't be authenticated/authorized.

    ~~~json
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": \"root:default\" workspace access not permitted",
      "reason": "Forbidden",
      "details": {},
      "code": 403
    }
    ~~~
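If you want to see an authorized response, you can reuse the credentials from the admin kubeconfig we exported earlier. This is a minimal sketch that assumes the admin kubeconfig authenticates with a bearer token (the jsonpath simply grabs the first user entry):

~~~sh
# Assumption: the admin kubeconfig carries a bearer token for its first user entry
TOKEN=$(kubectl config view --raw -o jsonpath='{.users[0].user.token}')
curl -k -H "Authorization: Bearer ${TOKEN}" https://10.19.3.4:6443/clusters/root:default/api
~~~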
### Create our custom Organization

This part of the demo will guide us through the creation of custom `Workspaces` for our organization and also for our teams.

1. Let's create the Organizational Workspace for our TelcOps organization.

    ~~~sh
    kubectl ws create telcops --type Organization
    ~~~
2. It will show up as a new workspace.

    ~~~sh
    kubectl get workspaces
    ~~~

    ~~~sh
    NAME      TYPE           PHASE   URL
    default   Organization   Ready   https://10.19.3.4:6443/clusters/root:default
    telcops   Organization   Ready   https://10.19.3.4:6443/clusters/root:telcops
    ~~~
3. We will create a Universal workspace for `team-a` inside the `telcops` organizational workspace.

    ~~~sh
    kubectl ws use telcops
    kubectl ws create team-a
    ~~~
4. Again, we can list the workspaces, but this time we will only see the workspaces inside the `telcops` organizational workspace.

    ~~~sh
    kubectl get workspaces
    ~~~

    ~~~sh
    NAME     TYPE        PHASE   URL
    team-a   Universal   Ready   https://10.19.3.4:6443/clusters/root:telcops:team-a
    ~~~
5. Let's use this new workspace and list the API resources.

    ~~~sh
    kubectl ws use team-a
    kubectl api-resources
    ~~~

    ~~~sh
    NAME               SHORTNAMES   APIVERSION                  NAMESPACED   KIND
    configmaps         cm           v1                          true         ConfigMap
    <output_omitted>
    workloadclusters                workload.kcp.dev/v1alpha1   false        WorkloadCluster
    ~~~
6. KCP knows nothing about Deployments.

    ~~~sh
    kubectl api-resources | grep -i deployment
    kubectl get deployments
    ~~~

    ~~~sh
    error: the server doesn't have a resource type "deployments"
    ~~~

### Learning about new API resources

Now that we have our `Workspaces` ready, we need KCP to learn about the API resources we will be using to deploy our workloads. This part of the demo will guide us through the process of importing API resources from real Kubernetes clusters.

1. Next, we need KCP to learn about the new API resources we want to use in this workspace, like Deployments. The command below creates a `WorkloadCluster` object in our workspace and outputs a yaml file to deploy the Syncer.

    > **NOTE**: In order to learn these new types we're going to use a component called the Syncer. The Syncer can run on the KCP cluster and use a push pattern, or on the physical cluster (from which types will be learned) and use a pull pattern. In this case we're using pull, so the Syncer will be running on the physical cluster.

    > **NOTE2**: The resources that will be synced by the Syncer are defined as parameters in the deployment (inside the yaml) using the `--resources` flag.

    > **NOTE3**: The command below uses a Syncer image built at the time of this writing; you may want to create your own by following the steps in [this guide](https://github.com/kcp-dev/kcp/blob/main/docs/syncer.md#building-the-syncer-image).

    ~~~sh
    kubectl kcp workload sync ocp-sno --syncer-image quay.io/mavazque/kcp-syncer:latest > syncer-ocp-sno.yaml
    ~~~

    ~~~sh
    cat syncer-ocp-sno.yaml | grep '\--resource'
    ~~~

    > **NOTE**: The Syncer will take care of syncing the following resources.

    ~~~sh
    - --resources=configmaps
    - --resources=deployments.apps
    - --resources=secrets
    - --resources=serviceaccounts
    ~~~
2. Now it's time to get the Syncer deployed in our physical cluster, an OpenShift SNO in this case.

    ~~~sh
    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig apply -f syncer-ocp-sno.yaml
    ~~~
3. Once the Syncer is started, we will have the Deployment API resource in our KCP workspace.

    ~~~sh
    kubectl api-resources | grep -i deployment
    ~~~

    ~~~sh
    deployments   deploy   apps/v1   true   Deployment
    ~~~

### Deploying our Workloads

At this point, our KCP `Workspace` knows about deployments, so we will go ahead and deploy our application.

1. Now that our KCP workspace knows about deployments we can go ahead and deploy our application. Let's start by creating a new namespace.

    ~~~sh
    kubectl create namespace reverse-words
    ~~~

    > **NOTE**: The namespace will get a `WorkloadCluster` assigned; if we had more than one, the first one would be assigned. At this point only one `WorkloadCluster` can be assigned to the namespace; in the future more options will be available.

    ~~~sh
    kubectl get namespace reverse-words -o jsonpath='{.metadata.labels.workloads\.kcp\.dev/cluster}'
    ~~~

    > We can see the `WorkloadCluster` we got assigned is `ocp-sno`.

    ~~~sh
    ocp-sno
    ~~~
2. Let's create our application's deployment.

    ~~~sh
    kubectl -n reverse-words create deployment reversewords --image quay.io/mavazque/reversewords:latest
    ~~~
3. If we try to get the pods, we will see that our KCP workspace doesn't know about them.

    ~~~sh
    kubectl get pods
    ~~~

    ~~~sh
    error: the server doesn't have a resource type "pods"
    ~~~
4. But the Syncer will push the status of our deployment in the `WorkloadCluster` to the KCP workspace.

    ~~~sh
    kubectl -n reverse-words describe deployment reversewords
    ~~~

    ~~~yaml
    Name:                   reversewords
    Namespace:              reverse-words
    CreationTimestamp:      Thu, 05 May 2022 16:05:08 +0000
    Labels:                 app=reversewords
                            workloads.kcp.dev/cluster=ocp-sno
    Annotations:            <none>
    Selector:               app=reversewords
    Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
    StrategyType:
    MinReadySeconds:        0
    Pod Template:
      Labels:  app=reversewords
      Containers:
       reversewords:
        Image:        quay.io/mavazque/reversewords:latest
        Port:         <none>
        Host Port:    <none>
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    Events:          <none>
    ~~~
5. In the `WorkloadCluster` we will have our application running (a quick smoke test is sketched after this list).

    ~~~sh
    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig get pods -A -l app=reversewords
    ~~~

    ~~~sh
    NAMESPACE                                                     NAME                            READY   STATUS    RESTARTS   AGE
    kcp3afd937aa61b43734af460119cd405930ccbdeade88b713fc804a0d6   reversewords-6776c6fccc-kr6vw   1/1     Running   0          7m2s
    ~~~
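As an optional check on the physical cluster, you could port-forward to the deployment and query the application directly. This is a hypothetical smoke test that assumes the reversewords container listens on port 8080:

~~~sh
# Assumption: the reversewords application listens on port 8080
# Find the KCP-generated namespace on the physical cluster via the app label
NS=$(kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig get pods -A -l app=reversewords -o jsonpath='{.items[0].metadata.namespace}')
kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig -n ${NS} port-forward deploy/reversewords 8080:8080 &
curl http://localhost:8080
~~~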
### Onboarding team-b

This final part will show the required steps to onboard `team-b` in their own `Workspace`.

1. Now that we have our application running in this workspace, let's create a new workspace for `team-b` and follow the same steps to get a new `WorkloadCluster` added to the workspace and the workload running.

    ~~~sh
    kubectl ws ..
    kubectl ws create team-b --enter
    ~~~
2. Since this is a new workspace it has its own API resources. Since we haven't added any additional API resource, it doesn't know anything about deployments:

    ~~~sh
    kubectl api-resources | grep -i deployment
    kubectl get deployments
    ~~~

    ~~~sh
    error: the server doesn't have a resource type "deployments"
    ~~~
3. Let's add the `WorkloadCluster` and get the Deployments resource in the new workspace.

    ~~~sh
    kubectl kcp workload sync ocp-ztp --syncer-image quay.io/mavazque/kcp-syncer:latest > syncer-ocp-ztp.yaml
    ~~~

    ~~~sh
    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig apply -f syncer-ocp-ztp.yaml
    ~~~

    ~~~sh
    kubectl api-resources | grep -i deployment
    ~~~

    ~~~sh
    deployments   deploy   apps/v1   true   Deployment
    ~~~
4. Now that our KCP workspace knows about deployments we can go ahead and deploy our application. Let's start by creating a new namespace.

    ~~~sh
    kubectl create namespace reverse-words
    ~~~

    ~~~sh
    kubectl get namespace reverse-words -o jsonpath='{.metadata.labels.workloads\.kcp\.dev/cluster}'
    ~~~

    > We can see the `WorkloadCluster` we got assigned is `ocp-ztp`.

    ~~~sh
    ocp-ztp
    ~~~
5. Let's create our application's deployment.

    ~~~sh
    kubectl -n reverse-words create deployment reversewords --image quay.io/mavazque/reversewords:latest
    ~~~
6. In the `WorkloadCluster` we will have our application running.

    ~~~sh
    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig get pods -A -l app=reversewords
    ~~~

    ~~~sh
    NAMESPACE                                                     NAME                            READY   STATUS    RESTARTS   AGE
    kcpeed022296263aa537060763c3934139fb84185e2a39a8dcf8695c89e   reversewords-85d7b5b76c-mwtr8   1/1     Running   0          50s
    ~~~

### Cleanup

Once we're done with the demo, we can clean up the different resources we created by following the steps below.

1. Remove the Syncer from the `WorkloadClusters`.

    ~~~sh
    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig delete -f syncer-ocp-sno.yaml
    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig delete -f syncer-ocp-ztp.yaml
    ~~~
2. Remove the objects created by the Syncer.

    > **NOTE**: Objects created by the Syncer won't be removed automatically. This will change in the future, and the user will likely be able to choose what happens with objects deployed on the `WorkloadClusters`.

    ~~~sh
    kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig get ns -o name | grep namespace/kcp.* | xargs kubectl --kubeconfig /root/ztp-sno-cluster/kubeconfig delete
    kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig get ns -o name | grep namespace/kcp.* | xargs kubectl --kubeconfig /root/ztp-virtual-cluster/kubeconfig delete
    ~~~
3. Stop KCP and remove its storage.

    ~~~sh
    ctrl+c
    rm -rf .kcp/
    ~~~

## Next Steps

KCP is under heavy development, and we want to keep an eye on the project over the next months. One of the next steps on our side will be testing transparent multi-cluster with global ingress. New features will be landing in upcoming releases; we will try them as well and see what use cases we can come up with around them.
## Useful Resources

- [KCP Architecture](https://github.com/kcp-dev/kcp/blob/c8d680a536543761d478b832fe779ee57e66ad4d/docs/architecture/README.md)
- [KCP Terminology](https://github.com/kcp-dev/kcp/blob/main/docs/terminology.md)
- [KCP Authorization](https://github.com/kcp-dev/kcp/blob/main/docs/authorization.md)
- [How to build a custom syncer image](https://github.com/kcp-dev/kcp/blob/main/docs/syncer.md#building-the-syncer-image)
- [Transparent Multi-Cluster with KCP (Investigation)](https://github.com/kcp-dev/kcp/blob/main/docs/investigations/transparent-multi-cluster.md)
- [KCP YouTube Channel (Community Meetings)](https://www.youtube.com/channel/UCfP_yS5uYix0ppSbm2ltS5Q)
