# RudrX

RudrX is a micro-app engine core for K8s based on OAM.

## Why RudrX?

1. Pluggable: once RudrX is installed, it can be extended quickly by installing `OAM workloads/traits`.
2. User experience: it aims to provide the best end-user experience of an open-source PaaS built on K8s.
3. OAM best practice: it will serve as the best practice for OAM platform builders and show how OAM can work better.

## Features of RudrX

### Initialization

```
$ rudr init
```

1. Check that a K8s cluster is available (kubeconfig).
2. Check that oam-k8s-runtime is available in the cluster, or install it via Helm.
3. Create WorkloadDefinitions/TraitDefinitions for K8s built-in resources (Deployment, StatefulSet, etc.).

### Run workloads (one Component in an AppConfig)

#### **End User Experience**

```
$ rudr run deployment frontend -p 80 oam-dev/demo:v1
```

Explanation for this command:

* Create a **Component** named _frontend_ of the **K8s Deployment** type (an alias can be used to give it a simpler name).
* Create a single-component **AppConfig** with no **Traits**.
* The first argument (`frontend`) is always the name of the Component/AppConfig, and the last argument is specified by the template; in our example it is the image (`oam-dev/demo:v1`). Arguments in the middle, like `-p 80`, are templated fields.

> Note: templated fields are implemented in a pre-defined `Template` object.

Running from source code could also be implemented:

```
$ rudr run deployment frontend -p 80 https://github.com/oam-dev/app.git
```

If this parameter is of `image` type, the URL would be compiled into an image.

#### **Platform Builder Experience**

Platform builders use an extended definition object in OAM to define which arguments are supported in the user command above (no need to introduce another template object). The object will look like this:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: deployments.apps
  annotations:
    short: deploy
spec:
  definitionRef:
    name: deployments.apps
  # extended fields
  template:
    object:
      apiVersion: apps/v1
      kind: Deployment
      spec:
        containers:
          - image: myrepo/myapp:v1
            name: master
            ports:
              - containerPort: 6379
                protocol: TCP
    parameters:
      - name: image             # --image
        short: i                # -i
        required: true
        type: image
        fieldPaths:
          - "spec.containers[0].image"
      - name: port              # --port
        short: p                # -p
        required: false
        type: int
        fieldPaths:
          - "spec.containers[0].ports[0].containerPort"
    lastCommandParam: image     # so image becomes the default (last) command argument
```

> ### Multiple container support?
> No, one container per component at first; the others will be sidecars. We can support it in the future if there is a real use case.

So after:

```
$ rudr run containerized frontend -p 80 oam-dev/demo:v1
```

RudrX will generate a Component like below:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: frontend
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - image: oam-dev/demo:v1    # injected by the CLI
          name: master
          ports:
            - containerPort: 80     # injected by the CLI
              protocol: TCP
```
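Alongside the Component, `rudr run` would also create the single-component **AppConfig** mentioned above. A minimal sketch of what that generated object could look like (reusing the first CLI argument as the object name is an assumption, not finalized behavior):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: frontend                  # assumption: reuse the first CLI argument as the AppConfig name
spec:
  components:
    - componentName: frontend     # references the Component generated above; no traits yet
```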
### Apply traits

#### **End User Experience**

1. List supported traits

```
$ rudr traits
NAME     DEFINITION                 APPLIES TO
scale    hpa.autoscaling.k8s.io     containerized,apps.k8s.io
route    route.core.oam.dev         containerized,apps.k8s.io
tls      tls.core.oam.dev           containerized
```

List the traits supported for one kind of workload:

```
$ rudr traits --applies-to apps.k8s.io   # filter by workload type
NAME     DEFINITION
scale    hpa.autoscaling.k8s.io
route    route.core.oam.dev
```

2. Attach a trait to a component

Alternatives:

```
# option 1 [more complex, but good experience]
$ rudr scale frontend -max=10

# option 2 [simple, do it first!]
$ rudr bind frontend scale -max=10
$ rudr unbind frontend scale
```

Example:

```
$ rudr bind frontend route -d myapps.alibaba-inc.com --hostname app
```

Explanation:

* Apply a _MapRoute_ **Trait** to the **Component** named _frontend_.
* `-d` is short for `--domain`; both `--domain` and `--hostname` are template fields.

```
$ rudr bind frontend scaler --max=5
```

Explanation:

* Apply a _Scale_ **Trait** to the **Component** named _frontend_.

```
$ rudr bind frontend scaler 5
```

* `--max` can be the default template field, so we can just write the number `5` at the end.

3. List the traits of one Component

```
$ rudr traits frontend
NAME     DEFINITION
scale    hpa.autoscaling.k8s.io
route    route.core.oam.dev
```

### Apply Compose file directly

#### **End User Experience**

1. Advanced users can apply a Compose file directly

```
$ rudr apply -f appconfig-tls.yaml
$ rudr traits frontend
NAME     DEFINITION                 APPLIES TO
scale    hpa.autoscaling.k8s.io     containerized,apps.k8s.io
route    route.core.oam.dev         containerized,apps.k8s.io
tls      tls.core.oam.dev           containerized,apps.k8s.io
```

The compose file is something like this (still needs design):

![](https://i.imgur.com/hCd0VmW.png)

#### **Platform Builder Implementation**

Since all objects are installed in the K8s cluster, we can also support applying YAML files directly.

### Discovery/install from a definition file

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: deployments.apps
  annotations:
    short: deploy
spec:
  definitionRef:
    name: deployments.apps
  # extended fields
  template:
    ...
  install:
    helm:
      repo: https://oam-dev.github.io/crossplane-oam-sample/archives
      name: crossplane
    # yaml:
    #   - https://url.to.yaml/crd.yaml
    #   - https://url.to.yaml/deploy.yaml
```

### Discover Remote Workloads/Traits

#### **End User Experience**

1. Add an OAM workload/trait registry (including templates). The registry can be Helm charts.

```
$ rudr registry add core-repo https://hub.oam.dev
$ rudr registry remove core-repo
```

2. Discover Workloads and Traits

```
$ rudr registry traits
REPO        NAME      DEFINITION                  APPLIES TO
core-repo   scale     hpa.autoscaling.k8s.io      containerized,apps.k8s.io
core-repo   ingress   ingress.networking.k8s.io   containerized,apps.k8s.io

$ rudr registry workloads
REPO        NAME          DEFINITION
core-repo   deployment    deployments.apps.k8s.io
core-repo   statefulset   statefulsets.apps.k8s.io
core-repo   cloneset      cloneset.apps.alibaba.com
```

3. Install a workload or trait

```
$ rudr install cloneset
$ rudr install cloneset --repo core-repo
$ rudr install ingress
```

4. Uninstall a workload or trait

```
$ rudr uninstall cloneset
```

### Multiple components per app

```console
$ rudr create myapp
Creating new app context... done, ⬢ myapp
SUCCESS!

$ rudr run deployment --name frontend -p 80 resouer/demo:v1
=== App context: myapp
frontend
SUCCESS!

$ rudr bind frontend route --host frontend.alibaba-inc.com
=== App context: myapp
route --- frontend
SUCCESS!

$ rudr run statefulset --name backend -p 80 resouer/demo:v1
=== App context: myapp
backend
SUCCESS!

$ rudr deploy myapp
=== App context: myapp
myapp
SUCCESS!

$ rudr su testapp
Changing to new app context... done, ⬢ testapp
=== App context: testapp

$ rudr context
Current app context: testapp
```
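In this flow, `rudr deploy myapp` would assemble the components and traits recorded under the `myapp` context into a single AppConfig. A hypothetical sketch of the generated object follows (the trait kind `Route` backing `route.core.oam.dev` and the exact field names are assumptions for illustration):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: myapp
spec:
  components:
    - componentName: frontend
      traits:
        - trait:                        # inline trait object, per OAM v1alpha2
            apiVersion: core.oam.dev/v1alpha2
            kind: Route                 # assumption: the kind behind route.core.oam.dev
            spec:
              host: frontend.alibaba-inc.com
    - componentName: backend
```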
## Alternative

### Option 1

#### **User Experience**

1. Create an AppConfig template named myapp

```
$ rudr app myapp
```

== mkdir myapp
== cd myapp
rudr app bapp

2. Add a Component to the AppConfig template

```
$ rudr add myapp containerized frontend -p 80 oam-dev/demo:v1
```

3. Add a second Component to the AppConfig template; it depends on frontend

```
$ rudr add myapp stateful backend -p 6556 redis/redis:v1 -c myapp --needs frontend
```

4. Run an AppConfig from the AppConfig template

```
$ rudr run myapp
```

We can also run an AppConfig from a git repo containing an AppConfig template:

```
$ rudr run https://myapp.git
```

5. List currently running applications

```
$ rudr apps
```

## Issues

* Traits Registry: https://github.com/oam-dev/spec/issues/371
* CRD discovery: https://github.com/oam-dev/spec/issues/370

## Heroku style

All of the above is Docker style; another choice is Heroku style, like below:

```console
# List supported workload types
$ rudr workloads
NAME          DEFINITION
deployment    apps.k8s.io

# Provision a component by a given workload type
$ rudr run deployment --name frontend -p 80 oam-dev/demo:v1

# List supported traits
$ rudr traits
NAME      DEFINITION                APPLIES TO
scale     hpa.autoscaling.k8s.io    containerized,apps.k8s.io
route     route.core.oam.dev        containerized,apps.k8s.io
tls       tls.core.oam.dev          containerized

# List traits by workload type
$ rudr traits --applies-to apps.k8s.io
NAME         DEFINITION
autoscaler   hpa.autoscaling.k8s.io
route        route.core.oam.dev

# Bind a trait to a given component
$ rudr traits:bind frontend route --host frontend.alibaba-inc.com
$ rudr traits:bind frontend autoscaler --min=1 --max=5

# List traits bound to a given component
$ rudr traits --bound-to frontend
NAME         DEFINITION
autoscaler   hpa.autoscaling.k8s.io
route        route.core.oam.dev

# Install a new workload type
$ rudr addons:workloads:install -f ./cloneset.defn.yaml

# List supported workload types
$ rudr workloads
NAME         DEFINITION
deployment   apps.k8s.io
cloneset     apps.kruise.io

# Provision a component by the new workload type
$ rudr run cloneset --name backend --rollout-strategy=in-place -p 80 oam-dev/demo:v1

# Install a trait
$ rudr addons:traits:install cronhpa --conflict-with autoscaler -f ./cronhpa.defn.yaml

# List supported traits
$ rudr traits
NAME      DEFINITION                APPLIES TO
scale     hpa.autoscaling.k8s.io    containerized,apps.k8s.io
route     route.core.oam.dev        containerized,apps.k8s.io
tls       tls.core.oam.dev          containerized
cronhpa   cronhpa.alibaba-inc.com   containerized,apps.k8s.io

# Add a registry
$ rudr addons:registries:add --url https://github.com/alibaba/catalog --name alibaba

# List available addons from the registry
$ rudr addons:workloads -r alibaba
NAME                DEFINITION
unifieddeployment   apps.kruise.io

# Install a workload type from the registry
$ rudr addons:workloads:install unifieddeployment -r alibaba
```
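The `./cloneset.defn.yaml` and `./cronhpa.defn.yaml` files referenced above would follow the same definition format described earlier. A hypothetical sketch of `cloneset.defn.yaml` (the `apps.kruise.io/v1alpha1` group/version, parameter names, and field paths are illustrative assumptions):

```yaml
# Hypothetical cloneset.defn.yaml; shape follows the WorkloadDefinition example earlier in this doc
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: clonesets.apps.kruise.io
  annotations:
    short: cloneset
spec:
  definitionRef:
    name: clonesets.apps.kruise.io
  # extended fields (illustrative)
  template:
    object:
      apiVersion: apps.kruise.io/v1alpha1
      kind: CloneSet
      spec:
        template:
          spec:
            containers:
              - name: main
                image: myrepo/myapp:v1
                ports:
                  - containerPort: 80
                    protocol: TCP
    parameters:
      - name: image                  # --image / last positional argument
        short: i
        required: true
        type: image
        fieldPaths:
          - "spec.template.spec.containers[0].image"
      - name: port                   # --port / -p
        short: p
        required: false
        type: int
        fieldPaths:
          - "spec.template.spec.containers[0].ports[0].containerPort"
      - name: rollout-strategy       # assumption: maps --rollout-strategy to the update strategy field
        required: false
        type: string
        fieldPaths:
          - "spec.updateStrategy.type"
    lastCommandParam: image
```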