# CC: Assignment 3
## Look at your benchmark results. Are they consistent with your expectations, regarding the different virtualization platforms? Explain your answer. What are the main reasons for the differences between the platforms? Answer these questions for all benchmarks:
### CPU
Based on the conducted benchmarks, we obtained the highest number of events per second with the Docker container, followed by the native host. Both hardware virtualization methods achieved notably lower results, which is consistent with our expectations, since both QEMU and KVM introduce a significant amount of overhead.
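For reference, the "events per second" figure comes from a purely CPU-bound loop. The following is a minimal sketch of how such a number can be obtained; it is an illustration of the idea, not the exact workload our benchmark tool executes:

```python
import time

def cpu_events_per_second(duration=10.0, limit=10000):
    """Count how many CPU-bound passes complete within `duration` seconds,
    similar in spirit to a benchmark's 'events per second' metric."""
    events = 0
    end = time.time() + duration
    while time.time() < end:
        # One "event": a simple prime-counting pass, purely CPU-bound.
        primes = 0
        for n in range(2, limit):
            for d in range(2, int(n ** 0.5) + 1):
                if n % d == 0:
                    break
            else:
                primes += 1
        events += 1
    return events / duration

if __name__ == "__main__":
    print(f"events/s: {cpu_events_per_second():.2f}")
```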
Even though our benchmarks captured the overhead of hardware virtualization in comparison to both native execution and containerization, we noticed a couple of anomalies.
First, the Dockerized process should not outperform native execution on the CPU. Usually, containerized applications suffer from a small computational overhead due to the additional checks the Linux kernel performs to keep containers isolated. The reason our benchmark says otherwise is likely related to the fact that our host is a virtual private server, so there are few guarantees regarding consistent system performance (e.g., the VPS hypervisor might be busy doing calculations for another customer while one of our runs is in progress).
Second, the use of KVM resulted in slightly worse benchmark scores than the plain hardware emulation performed by QEMU. This should not be the case: KVM is meant to let QEMU use the CPU's hardware virtualization extensions instead of dynamic binary translation, and should therefore be faster. We see two possible explanations for this outcome. One is that the work the hypervisor still has to do to map the virtual CPU onto the physical CPU had a bigger impact on performance in our setup than QEMU's dynamic binary translation. The other is that the CPU virtualization extensions were not properly exposed to, or usable by, our guest, given that the host itself is a virtual private server.
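A quick sanity check for the second hypothesis would be to look at which virtualization flags the VPS actually exposes to its guests. The snippet below (a simple sketch, not something included in our original benchmark run) reports whether the Intel VT-x (`vmx`) or AMD-V (`svm`) flag is visible in `/proc/cpuinfo`:

```python
def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return which hardware virtualization flags (Intel VT-x = 'vmx',
    AMD-V = 'svm') are advertised by the CPU, if any."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return flags & {"vmx", "svm"}
    return set()

if __name__ == "__main__":
    found = virtualization_flags()
    if found:
        print(f"hardware virtualization extensions exposed: {sorted(found)}")
    else:
        print("no vmx/svm flag found: KVM cannot use full hardware acceleration")
```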
### Memory
Based on the conducted benchmarks, the Docker container and the native host obtained the best results. That is consistent with our expectations, since in both cases direct access to system memory is granted and no additional overhead has to be taken into account.
Similar to the CPU benchmark, both hardware virtualization methods achieved significantly worse scores, with QEMU again outperforming KVM.
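To illustrate what the memory benchmark stresses, a minimal write-throughput measurement could look like the sketch below; it is not our actual benchmark, but it captures the kind of work being timed:

```python
import time

def memory_write_throughput(block_mib=64, passes=20):
    """Repeatedly overwrite a large in-memory buffer and report MiB/s,
    roughly what a memory benchmark's write test measures."""
    block = bytearray(block_mib * 1024 * 1024)
    chunk = b"\xff" * (1024 * 1024)  # 1 MiB pattern
    start = time.time()
    for _ in range(passes):
        for offset in range(0, len(block), len(chunk)):
            block[offset:offset + len(chunk)] = chunk
    elapsed = time.time() - start
    return block_mib * passes / elapsed

if __name__ == "__main__":
    print(f"memory write throughput: {memory_write_throughput():.0f} MiB/s")
```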
### Random disk access
Once again, the Docker container and the host obtained the best results, and KVM did not manage to outperform QEMU. The gap between the platforms is noticeably smaller in this case.
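A random-access disk benchmark essentially issues small reads at random offsets and counts operations per second. The sketch below illustrates the idea; unlike a real benchmark it does not bypass the page cache, so its absolute numbers would be optimistic:

```python
import os, random, time

def random_read_iops(path="testfile.bin", file_mib=256, block=4096, seconds=10.0):
    """Issue 4 KiB reads at random offsets of a test file and report
    operations per second, similar to a random-access disk benchmark."""
    # Create the test file once if it does not exist yet.
    if not os.path.exists(path):
        with open(path, "wb") as f:
            for _ in range(file_mib):
                f.write(os.urandom(1024 * 1024))
    ops = 0
    end = time.time() + seconds
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        while time.time() < end:
            f.seek(random.randrange(0, size - block))
            f.read(block)
            ops += 1
    return ops / seconds

if __name__ == "__main__":
    print(f"random read IOPS: {random_read_iops():.0f}")
```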
### Sequential disk access
In this benchmark the Docker container managed to slightly outperform the host and KVM. Surprisingly, QEMU obtained the best results.
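One plausible (unverified) contributor to QEMU's result is caching at the host and hypervisor level, which can inflate a guest's apparent sequential throughput. For illustration, a sequential measurement boils down to reading a file front to back and dividing by the elapsed time, as in the sketch below, which reuses the test file from the random-access sketch:

```python
import os, time

def sequential_read_throughput(path="testfile.bin", block=1024 * 1024):
    """Read an existing test file front to back in 1 MiB blocks and
    report MiB/s, as a sequential-access disk benchmark would."""
    size = os.path.getsize(path)
    start = time.time()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    elapsed = time.time() - start
    return size / (1024 * 1024) / elapsed

if __name__ == "__main__":
    print(f"sequential read throughput: {sequential_read_throughput():.0f} MiB/s")
```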
### Fork
The best results were obtained by the host, followed by the Docker container. Both hardware virtualization techniques performed significantly worse.
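The fork benchmark measures how quickly new processes can be created and reaped, which exercises the kernel much more heavily than the pure CPU test. A minimal, Linux-only sketch of such a measurement (not our exact benchmark) looks like this:

```python
import os, time

def forks_per_second(seconds=10.0):
    """Fork and immediately reap child processes for `seconds` seconds
    and report how many forks completed per second."""
    count = 0
    end = time.time() + seconds
    while time.time() < end:
        pid = os.fork()
        if pid == 0:
            # Child: exit immediately without running any Python cleanup.
            os._exit(0)
        os.waitpid(pid, 0)
        count += 1
    return count / seconds

if __name__ == "__main__":
    print(f"forks per second: {forks_per_second():.0f}")
```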
### iperf uplink
We observed the same pattern in this benchmark as well, with the native host and the Docker container performing much better than QEMU, both with and without KVM.
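The uplink measurement boils down to pushing data over a TCP connection for a fixed time and dividing by the duration, which is what iperf does in its default TCP mode. The sketch below illustrates this; the server address is a placeholder, and the remote side is assumed to be any TCP sink that reads and discards the incoming data:

```python
import socket, time

def tcp_uplink_mbps(server_host="192.0.2.10", server_port=5001, seconds=10.0):
    """Push data over a TCP connection for `seconds` seconds and report the
    achieved throughput in Mbit/s. The server address is a placeholder."""
    payload = b"\x00" * (128 * 1024)  # 128 KiB send buffer
    sent = 0
    with socket.create_connection((server_host, server_port)) as sock:
        end = time.time() + seconds
        while time.time() < end:
            sock.sendall(payload)
            sent += len(payload)
    return sent * 8 / seconds / 1_000_000

if __name__ == "__main__":
    print(f"uplink throughput: {tcp_uplink_mbps():.1f} Mbit/s")
```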