# sync runtime
## Motivation
OpenFunction (v0.6.0) already supports both *async* and *knative* functions: *async* functions are driven by KEDA + Dapr, while *knative* functions are driven by Knative Serving.
While working with KEDA, we found that it is developing an add-on for handling HTTP requests in an event-driven way, the [http-add-on](https://github.com/kedacore/http-add-on). To enrich the implementation of synchronous functions in OpenFunction and diversify their runtime engines, while staying in line with the existing [async runtime](https://github.com/OpenFunction/OpenFunction/blob/main/docs/concepts/Components.md#openfuncasync), we need to add a *sync runtime* to OpenFunction.
## Goals
- Add a ***sync runtime*** for OpenFunction. The names of the available runtimes may need to be adjusted accordingly, as suggested below:
  - *async*, indicates an asynchronous function, driven by KEDA + Dapr
  - *sync*, indicates a synchronous function, driven by the KEDA http-add-on + Dapr
  - *knative-sync*, indicates a synchronous function, driven by Knative Serving
- Add a built-in entry point for the ***sync runtime*** (optional)
- Adjust the functions-framework context to support the ***sync runtime***
## Proposal
### Overview
The workflow of the envisioned *sync* runtime is as follows (a sketch of an example Function spec follows these lists):
- Request:
   1. When a function is created, a function Service and an HTTPScaledObject resource are created for it in the same namespace (the HTTPScaledObject automatically results in the creation of the corresponding ScaledObject resource).
   2. Users access the service through the address exposed by the Ingress.
   3. The requests reach the Interceptor Service through the entry point and are then passed to the Interceptor Deployment.
   4. The Interceptor Deployment forwards the requests to the corresponding Function Service, based on the information in the Routing Table, and the function receives and processes them.
- Auto scaling:
   1. In step 3 of the Request workflow, the Interceptor Deployment counts the pending requests, and these counts are periodically collected and aggregated by the External Scaler Deployment.
   2. The External Scaler Deployment reports the request counts to the KEDA operator.
   3. KEDA scales the target (the function workload) whose pending request count reaches the threshold, according to the scaling options.
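
As a rough illustration of step 1 in the Request workflow, a Function using the proposed runtime might look like the sketch below. The apiVersion, image, and field layout are illustrative and would follow the existing Function CRD; only `spec.serving.runtime: "sync"` is new:

```yaml
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-a
  namespace: default
spec:
  version: "v1.0.0"
  image: "example/function-a:latest"  # illustrative image
  port: 8080
  serving:
    runtime: "sync"                   # the proposed new runtime type
    template:
      containers:
        - name: function
          imagePullPolicy: IfNotPresent
```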

### Entry point
In this scenario, we need to set up an Ingress for the Interceptor Service of the KEDA http-add-on.
The details still need to be discussed.
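As a starting point for that discussion, here is a minimal sketch, assuming the http-add-on interceptor proxy Service is named `keda-add-ons-http-interceptor-proxy`, lives in the `keda` namespace, and listens on port 8080 (the actual name, namespace, and port depend on how the add-on is installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openfunction-sync-entry          # illustrative name
  namespace: keda                        # assumed namespace of the interceptor
spec:
  rules:
    - host: function-a.openfunction.dev  # must match the host in the HTTPScaledObject
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keda-add-ons-http-interceptor-proxy  # assumed interceptor proxy Service
                port:
                  number: 8080                             # assumed interceptor proxy port
```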
### Function Service
We need to create a corresponding Service for the Function when it enters the Serving phase, roughly as follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: of-<function name>-internal
  namespace: <function namespace>
spec:
  selector:
    openfunction.io/serving: <function serving name>
  ports:
    - protocol: TCP
      port: 80 # default port
      targetPort: <function port>
```
### HTTPScaledObject
We need to create a corresponding HTTPScaledObject for the Function when it enters the Serving phase, roughly as follows:
```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: of-<function name>
  namespace: <function namespace>
spec:
  host: <function fqdn name> # e.g. "function-a.openfunction.dev"
  targetPendingRequests: 100 # default threshold
  scaleTargetRef:
    deployment: <function serving name>
    service: <function service name>
    port: <function port>
```
### Functions-framework
#### Context
The **.runtime** field needs to support a new runtime type, "sync".
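
For illustration, the function context consumed by functions-framework (e.g. via the FUNC_CONTEXT environment variable) could then carry the new value. The fields below are abbreviated and only indicative; the exact shape would follow the current context specification:

```json
{
  "name": "function-a",
  "version": "v1.0.0",
  "port": "8080",
  "runtime": "sync",
  "inputs": {},
  "outputs": {}
}
```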
#### Implementation
The goal is to implement the HTTP service in a similar way to the knative runtime.
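
For instance, instead of the Knative Service created by the knative runtime, the sync runtime could generate a plain Deployment that the Function Service selector above matches and that the HTTPScaledObject's `scaleTargetRef.deployment` points to. This is only a sketch; the labels, names, and context wiring are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <function serving name>
  namespace: <function namespace>
  labels:
    openfunction.io/serving: <function serving name>
spec:
  replicas: 1
  selector:
    matchLabels:
      openfunction.io/serving: <function serving name>
  template:
    metadata:
      labels:
        openfunction.io/serving: <function serving name>
    spec:
      containers:
        - name: function
          image: <function image>
          ports:
            - containerPort: <function port>
          env:
            - name: FUNC_CONTEXT
              value: '<serialized function context with runtime "sync">'
```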