# Why you need to try out Fission now!

## Prelude

Imagine the following scenario. You are a cloud service provider and you want to expand your services. There is a new technology on the block: [serverless][1]. (Well, I shouldn't say new, since it has been an industry standard for a few years now, but let's not be pedantic.)

<img src=https://i.imgur.com/s7D0ebe.png style="margin-left:auto;margin-right:auto;display:block;width:400px;height:400px;">
<br />

Now you want to implement serverless in your hardware stack and expand your portfolio, so how do you do it? Unlike containers, where there are well-established standards such as [Docker or LXC][2], serverless is more of a concept[^second]. You implement the principles of serverless computing using existing technologies, typically containers, though it's not mandatory to use containers (virtual machines work too). As a cloud service provider, you probably already run a [Kubernetes][3] (K8s) stack, and now you want to offer serverless features to your clients. **_So what is the easiest way to do it, without making drastic changes?_**

<img src=https://i.imgur.com/fj3BaUY.jpg style="margin-left:auto;margin-right:auto;display:block;width:250px;height:250px;">
<br />

Enter [Fission][4]. Simply install Fission on your K8s cluster, and you have serverless capabilities ready for deployment. K8s handles all the autoscaling, container creation and deletion, while Fission interacts with the K8s cluster and provides a serverless frontend (accepting functions, providing function routes). A few great things about Fission: it's [open-source][5], it has an active community, and it's supported by the [Cloud Native Computing Foundation][6].

## Sounds great! How do I install and use Fission?

The Fission team has written wonderful [documentation][9], so I won't discuss the [installation][10] here; check out the links instead.
Let's just say installation and usage are super easy. It took me 10 seconds to enter the commands and 2 minutes for the Fission installation to finish on my Kubernetes cluster over at Linode.

## Great! Now tell me something cool about Fission

While you _can_ use Fission to create a FaaS platform, the Fission team is expanding its use cases. One of the most exciting features (to me) is **[containers as functions][13]**. The motivation for this feature is that if you already have your application packaged as a container, you can run it directly as a serverless function, instead of extracting the functions out of the source code.

### Pondering...

While thinking about containers as functions, the performance-engineer side of me started wondering: **_"Which has a better cold-start latency: plain serverless functions or containers as functions?"_**

This can actually be quite tricky to answer. Fission boasts a low cold-start latency of [100 ms][12] because it uses a pool of **"warm"** containers, which means the containers are already initiated[^first]. Fission also converts functions into containers which are executed via K8s, so my initial observation was that both **_should have similar cold-start latencies_**.

This all seems fine, but we should test this hypothesis. _How do we do that?_ I came up with a simple series of tests. This is the basic outline:

1. Create an application, in two versions:
   * plain source code
   * the same source code packaged as a Docker container
2. The code returns timestamps recording:
   * when it was invoked (from the client) = $T_c$
   * when it was initiated (inside Fission) = $T_i$
3. Calculate $T_l = T_i - T_c$, where $T_l$ is the initiation latency.

The lower the value of $T_l$, the better, since it is an indicator of the responsiveness of Fission.

:::info
**Note**: I'm not calling $T_l$ the cold-start latency, since I'm not sure whether my functions/containers are _already loaded or not_, but I've used the terms interchangeably in this article.
It all depends on the implementation; for a better understanding of cold starts in Fission, I would need to study the source code in depth. Also, I could not invoke the functions from the K8s master node, since Linode does not allow SSH'ing into the master node. Thus, $T_l$ also contains the **one-way transmission delay**.
:::

### The experiment

This is the code with which I created the benchmark:

{%gist thexavier666/f545493ca89c783c01cd440d7f302aff %}

Here's a brief explanation of the benchmarking snippet:

* `doit.py` creates a REST API for the function, so that it can be invoked by Fission.
* `f_file_json.py` contains the actual function, which is called by `doit.py`. It returns the function initiation time as JSON. Together with `doit.py`, it is containerized and uploaded to Docker Hub [here](https://hub.docker.com/repository/docker/thexavier666/doitnow), and the image is then registered with my Fission instance.
* `f_file_main.py` is the same as `f_file_json.py`, except that it returns the initiation time as a plain string[^third]. This is uploaded as a normal function.

:::success
**Quick tip: here is how you upload your function/container to Fission**

Plain functions (Python)
```
fission function create --name <function name> --env python --code <path of source code>
```
Containers as functions
```
fission function run-container --name <function name> --image <container image URL> --port <exposed port number of application>
```
:::

Finally, this is the code with which I automated the data collection:

{%gist thexavier666/2eebec342bb842032792cf47c0b009f2 %}

The code simply calls the container function and the standard function (lines 48 and 51) stored on my Fission instance, calculates the initiation latency (line 28) from the received data (line 18), and dumps the results into a `csv` file (line 60). I ran this code on my local machine, but for better results, it should run on the K8s master node.
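The measurement at the core of the gists above can be sketched in a few lines of standard-library Python. This is a minimal sketch, not my actual benchmark code: the route URL and the `t_init` JSON field are hypothetical placeholders for whatever your function actually exposes.

```python
import json
import time
import urllib.request


def latency_ms(t_c, t_i):
    """Initiation latency T_l = T_i - T_c, converted to milliseconds."""
    return (t_i - t_c) * 1000.0


def measure(route):
    """Invoke a Fission function once and return its initiation latency.

    Assumes the function behind `route` replies with a JSON body such as
    {"t_init": <seconds since epoch>}, i.e. its own T_i timestamp.
    """
    t_c = time.time()  # T_c: invocation time, taken at the client
    with urllib.request.urlopen(route) as resp:
        t_i = json.loads(resp.read())["t_init"]  # T_i: taken inside Fission
    return latency_ms(t_c, t_i)


# Example invocation (hypothetical router address and route name):
# print(measure("http://<fission-router>/benchmark"))
```

Note that because the client's and the cluster's clocks are compared directly, any clock skew between the two machines ends up inside $T_l$, on top of the one-way transmission delay mentioned above.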
## Observations

I ran the experiment 50 times; that is, each function type was called 50 times and the initiation latency $T_l$ was calculated for each call. You can check the [raw data](https://pastebin.com/gcjjMJWQ). Here is the [box plot][11] for each function type.

<img src=https://i.imgur.com/Uv20AHd.png style="margin-left:auto;margin-right:auto;display:block;width:621px;height:368px;">
<br />

The results match my initial observation that _both should perform similarly, since both are executed as containers_. Both the medians and the whisker values are quite close to each other. The tiny circles are outliers, which means that in rare instances the initiation latency can become very high.

Here are some numbers for the above plot:

| | Container Function (ms) | Plain Function (ms) |
| ------------- | -----------------------:| -------------------:|
| Upper whisker | 1018.70 | 1021.34 |
| 3rd quartile | 932.14 | 931.25 |
| Median | 881.32 | 895.75 |
| 1st quartile | 838.64 | 864.53 |
| Lower whisker | 767.74 | 784.58 |

From this preliminary experiment, container functions appear to have a **slightly** better initiation latency, but we cannot say there is a large difference between them. This also matches my gut feeling: even when I was experimenting informally (without recording data), containers as functions felt **slightly more responsive**. If I had to make an educated guess, I would say it's because Fission has less to manage when running a container as a function; the slightly lower overhead gives it the edge.
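If you want to recompute the table above from the raw CSV yourself, the quartiles and Tukey whiskers can be derived with nothing but the standard library. This is a sketch that takes a plain list of latency samples; it does not parse my CSV layout, and plotting libraries may use slightly different quartile conventions, so the numbers can differ marginally from the plot.

```python
import statistics


def boxplot_stats(samples):
    """Quartiles plus Tukey whiskers (1.5 * IQR rule), as drawn in a box plot."""
    q1, median, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {
        # Whiskers extend to the most extreme samples still inside the fences.
        "lower_whisker": min(s for s in samples if s >= lo_fence),
        "q1": q1,
        "median": median,
        "q3": q3,
        "upper_whisker": max(s for s in samples if s <= hi_fence),
        # Anything beyond the fences is drawn as the "tiny circles".
        "outliers": sorted(s for s in samples if s < lo_fence or s > hi_fence),
    }
```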
## Final thoughts

<img src=https://i.imgur.com/9tfQHDF.png style="margin-left:auto;margin-right:auto;display:block;width:200px;height:200px;">
<br />

It would be too hasty to make any bold claims about performance, since there are many other variables to check, such as whether the results hold across different providers, the effect of network latency, alternative Fission settings, etc. But this can be a nice stepping stone towards a proper performance evaluation.

I hope this was a slightly illuminating look at Fission, serverless functions, and measuring their performance. I think the big picture regarding Fission is that it can be a great tool for any server administrator who wants to quickly provide serverless capabilities to their clients.

[^first]: But it's not enabled by default, as I found out after the experiment.
[^second]: There are serverless frameworks, but AFAIK they are not industry standards.
[^third]: There is a reason for creating two different return types (JSON and plain string), but in the interest of time, I've skipped it in this article.

[1]: https://en.wikipedia.org/wiki/Serverless_computing
[2]: https://dockerlabs.collabnix.com/beginners/docker/docker-vs-container.html
[3]: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
[4]: https://fission.io/
[5]: https://github.com/fission
[6]: https://www.cncf.io
[7]: https://en.wikipedia.org/wiki/Serverless_computing
[8]: https://www.cloudflare.com/en-in/learning/serverless/what-is-serverless/
[9]: https://fission.io/docs/
[10]: https://fission.io/docs/installation/
[11]: https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51
[12]: https://fission.io/docs/#performance-100msec-cold-start
[13]: https://fission.io/docs/usage/function/container-functions/