# Adding HTTP Scaling & Ingress Functionality to KEDA

> This is a proposal to add HTTP functionality to KEDA (it is currently linked from [this GitHub issue](https://github.com/kedacore/keda/issues/538)).

Currently, KEDA includes functionality to scale `Deployment`s and `Job`s according to many different scalers; however, an HTTP-based scaler is not yet supported. We have built a [complete prototype](https://github.com/osscda/kedahttp) on top of KEDA core (`kedacore/keda`) that hosts and scales a user-specified `Deployment` according to incoming HTTP traffic. While the prototype is not a full implementation, this [architecture diagram](https://user-images.githubusercontent.com/70865/100118098-5be26580-2e2a-11eb-9265-bbf7eaa9fefa.png) gives an overview of the overall system.

## Design

This system, which we tentatively call `kedahttp`, contains the following components:

- An **[external scaler](https://keda.sh/docs/2.0/scalers/external/)**: measures the number of HTTP requests in the queue and reports it when KEDA makes a request to the gRPC interface
- An **interceptor**: acts as a reverse proxy for HTTP requests. It directs traffic to the appropriate backend `Deployment` (via a `ClusterIP` `Service`) and maintains the queue metrics that the aforementioned external scaler reports
- An **operator**: watches a new CRD, `KedaHTTPApp`, and manages the appropriate underlying Kubernetes resources -- `ScaledObject`, `Deployment`, etc. -- for a given application deployed to the cluster
- A **command line interface (CLI)**: lets the app developer create a new `KedaHTTPApp` in the cluster with a single command, customized to their needs
- A **simple control plane**: a REST API that exposes a [PaaS](https://en.wikipedia.org/wiki/Platform_as_a_service)-like interface to developers who can't or don't want to interact directly with the Kubernetes cluster API

## Proposal Details

We have purposely designed a loosely coupled component architecture because we believe the system should ship with "sensible defaults" while allowing most or all of it to be customized. We also believe that part of this architecture is out of scope for KEDA core (`kedacore/keda`) and should be put into a new project under the `kedacore` GitHub organization. Here are the components that we believe should go into KEDA core:

### External Scaler

For those unfamiliar, the `Scaler` API in the KEDA core codebase is as follows:

```go
type Scaler interface {
	// GetMetrics returns the metric values for a metric name and criteria matching the selector
	GetMetrics(ctx context.Context, metricName string, metricSelector labels.Selector) ([]external_metrics.ExternalMetricValue, error)

	// GetMetricSpecForScaling returns the metrics based on which this scaler determines that the
	// ScaleTarget scales. This is used to construct the HPA spec that is created for this scaled
	// object. The labels used should match the selectors used in GetMetrics
	GetMetricSpecForScaling() []v2beta2.MetricSpec

	// IsActive reports whether the scaler is currently active (used for scaling to and from zero)
	IsActive(ctx context.Context) (bool, error)

	// Close any resources that need disposing when the scaler is no longer used or destroyed
	Close() error
}
```

This API is generic enough to support an HTTP-based scaler. The metrics can represent the queue length of HTTP servers. The challenge then becomes getting that metric without bundling an ingress controller or other HTTP server with KEDA _core_.
We believe that the source for these metrics should be external to KEDA, and that this should be a special case of an `External` scaler. This new HTTP scaler can use a similar (or identical) [gRPC interface](https://keda.sh/docs/2.0/concepts/external-scalers/#external-scaler-grpc-interface) to the current external scaler, and we will provide a compatible gRPC server in a separate repository. We will also ship instructions on how to build your own scaler.

### Interceptor

We believe that this component should live in a new project inside the `kedacore` organization, such as `kedacore/kedahttp`, so that it can be officially supported by the project and included in documentation. As described above, KEDA core will not require that the interceptor be used -- anything that provides the external scaler with metrics is suitable -- but it will be a high-quality implementation built and tested for production use.

The current interceptor implementation is written in Go and uses [`NewSingleHostReverseProxy`](https://pkg.go.dev/net/http/httputil#NewSingleHostReverseProxy) from the standard library. It contains only the features necessary to update the request queue size and forward requests to the proper backend. We intend for the interceptor to be used in conjunction with a standard ingress controller.

The interceptor holds requests in a queue while the app is scaling from zero and forwards them when one or more of the app's pods are ready to serve. This feature is useful in cases where a request needs to be held while a scale event is in progress, and is an established pattern in both [Knative](https://knative.dev) and [Osiris](https://github.com/deislabs/osiris).

The interceptor is also the metrics source for the external scaler, and sends metrics to the scaler via asynchronous messaging. This pattern allows us to run the scaler in a single process (possibly with another as a hot standby), while a possibly large number of interceptors handle incoming traffic.
Additionally, interceptor metrics sent to the external scaler can be used to scale the interceptor itself as traffic volume changes.

### Operator

Aside from the proxy, another fundamental feature of this system is creating, managing, and deleting the underlying Kubernetes resources according to a high-level abstraction. We believe that the operator pattern fits this use case well. A new operator -- tentatively called `kedahttp-controller` -- listens for events on a CustomResourceDefinition (CRD) tentatively called `KedaHTTPApp`. When such a resource is created, the Kubernetes resources required for the new app -- `Deployment`, `Service`, `Ingress`, `ScaledObject`, and more, depending on the details of the app -- are created. For the lifetime of that specific resource, those underlying resources are maintained in the system. When the resource is deleted, they are deleted as well.

Like the proxy, the operator is independent of KEDA core and should live in the same `kedacore/kedahttp` repository.

### CLI

While anybody could simply write the YAML for a `KedaHTTPApp` and `kubectl create` it, we provide a CLI to reduce that work to a single command, as below. Similar commands exist to modify or remove the resource for a given app.

```shell
kedahttp create --name myapp --image arschles/myapp --port 8080
```

While the CLI is independent of the operator and proxy, we believe it makes sense for it to live in the same repository as they do.

### Control Plane

The control plane exists for operational simplicity at scale and is orthogonal to the rest of the system. The aforementioned CLI requires access to the standard Kubernetes API. In many Kubernetes deployments, the API is not exposed publicly -- in fact, it's considered a best practice in many cases to lock down your cluster's API. The control plane exists as a bastion server specifically for this system. It implements an authenticated REST API that directly creates, modifies, and deletes `KedaHTTPApp` resources.
While simple, the advantage of running the control plane for some teams is its authentication and authorization functionality. This component can implement web-standard authentication such as [JWT](https://jwt.io/), backed by Kubernetes CRD resources that store basic user information. The result is a secure, purpose-built API that cluster operators can deploy specifically for app development teams while locking down their cluster API.

We have built a simple prototype of this control plane along with a CLI -- one that interacts with the control plane rather than directly with the cluster, as in the previous section. The result for app development teams is a familiar [PaaS](https://en.wikipedia.org/wiki/Platform_as_a_service)-like interface for deploying production-quality web apps to Kubernetes.

# Appendix

## Architecture Diagram

The design above is depicted below:

![arch](https://user-images.githubusercontent.com/70865/100118098-5be26580-2e2a-11eb-9265-bbf7eaa9fefa.png)