# Xen ARM with Virtualization Extensions whitepaper
Available on Xen project [wiki](https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions_whitepaper).
## Xen on ARM
### What is Xen?
Lightweight, high performance, Open Source. Low footprint with less than 90K lines of code. Licensed under the [GPLv2](http://www.gnu.org/licenses/gpl-2.0.html), with a large community of developers. The Xen project is hosted by the [Linux Foundation](http://www.xenproject.org/).
### The Xen Architecture
As a type-1 hypervisor, Xen runs directly on the hardware. It manages and monitors the virtual machines running on top of it. The first VM created by Xen is *Domain 0*, or Dom0, which is privileged and drives the devices on the platform. The responsibilities of Xen include virtualizing CPU, memory, interrupts and timers, and providing the VMs with the needed resources. Dom0 gets assigned devices such as SATA controllers and network cards by Xen, and the hypervisor takes care of remapping MMIO regions and IRQs. Dom0 typically runs Linux, and uses the same device drivers for devices as it would if it were running natively.
In order for unprivileged VMs (DomUs or guests) to access disk, network, etc., Dom0 runs a set of drivers called paravirtualized (PV) backends. The OS instances running as DomUs get access to a set of generic virtual devices by running the corresponding PV frontend drivers. The same PV backend can serve multiple PV frontends. This is realized using a ring protocol over a shared page in memory. Xen provides tools for setting up the initial configuration and communication, a mechanism to share additional pages between frontend and backend, and a way for them to notify each other via software interrupts (event channels).
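A minimal sketch of what such a shared ring can look like; Xen's actual layout is generated by the `DEFINE_RING_TYPES` macros in `xen/include/public/io/ring.h`, and the struct and field contents below are illustrative:

```c
#include <stdint.h>

/* Illustrative request/response formats; real protocols (blkif, netif,
 * ...) define their own. */
struct request  { uint64_t id; /* ...operation-specific fields... */ };
struct response { uint64_t id; int16_t status; };

/* One shared page holds the producer indices plus a circular buffer of
 * requests and responses. The frontend advances req_prod as it queues
 * requests; the backend advances rsp_prod as it queues responses; the
 * *_event fields tell the other side when a notification (software
 * interrupt) is wanted. */
struct shared_ring {
    uint32_t req_prod, req_event;
    uint32_t rsp_prod, rsp_event;
    union {
        struct request  req;
        struct response rsp;
    } ring[];  /* fills the remainder of the shared page */
};
```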
It is also possible to set up driver domains: unprivileged domains whose only purpose is to run a certain set of drivers. This allows for disaggregation and componentization of the system, providing further isolation. This is not available for Xen on ARM.
### Xen on ARM: a cleaner architecture
The Xen on ARM port is not a straight copy of x86 Xen. The maturity and experience of the developers behind the port made it possible to create a cleaner architecture, shedding the cruft accumulated during the many years of x86 development. The biggest difference between the two versions is that Xen on ARM does not need any emulation. Emulators such as QEMU are slow and insecure, with large binaries and a huge number of lines of code. Instead, Xen on ARM exploits the hardware virtualization support as much as possible and uses PV interfaces for IO.
Only one type of guest runs on Xen on ARM (there is no PV/HVM split as on x86). A guest does not need any emulation and relies on PV interfaces for IO as early as possible in the boot sequence. It exploits virtualization support in HW as much as possible and does not require invasive changes to the guest OS kernel in order to run.
### Xen on ARM: virtualization extensions
The ARM virtualization extensions provide three levels of execution: EL0, which is user/application mode; EL1, kernel mode; and EL2, hypervisor mode. A new instruction, HVC, switches between kernel mode and hypervisor mode, enabling a kernel to issue a hypercall to Xen. The MMU supports two stages of translation. The generic timers and the generic interrupt controller (GIC) are virtualization aware.
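As a minimal sketch, a hypercall from an AArch64 kernel could be issued like this; per the Xen ARM ABI the hypercall number goes in x16, arguments in x0-x4, and `0xEA1` is the Xen hypercall tag used as the HVC immediate:

```c
#include <stdint.h>

/* Minimal sketch of issuing a one-argument Xen hypercall from EL1. */
static inline uint64_t xen_hypercall1(uint64_t nr, uint64_t arg0)
{
    register uint64_t x16 asm("x16") = nr;   /* hypercall number */
    register uint64_t x0  asm("x0")  = arg0; /* first argument / return value */

    asm volatile("hvc #0xEA1"                /* trap into Xen at EL2 */
                 : "+r"(x0)
                 : "r"(x16)
                 : "memory");
    return x0;
}
```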
Xen runs in hypervisor mode (EL2), without ever entering another mode. It uses the two-stage translation in the MMU to assign memory to VMs, and the generic timers to receive timer interrupts as well as to inject timer interrupts into VMs and expose the counter to them. The same goes for the GIC: Xen uses it to receive interrupts and to inject them into guests.
The discoverable hardware is presented to Xen via devicetree, and Xen assigns all the devices it does not use to Dom0 by remapping the corresponding MMIO regions and interrupts. A flattened devicetree binary (DTB) is generated for Dom0 that describes exactly the environment exposed to it. This DTB contains the exact number of vcpus that Xen created for it (which may differ from the number of pcpus), the exact amount of memory that Xen gave to it (definitely less than the amount of physical memory available), the devices that Xen reassigned to it (at least **one** UART is not assigned to Dom0), and a hypervisor node to advertise the presence of Xen on the platform.
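The hypervisor node follows the `xen,xen` devicetree binding; a sketch of what Dom0 might see, with illustrative addresses and interrupt numbers rather than values from a real platform:

```dts
hypervisor {
        compatible = "xen,xen-4.4", "xen,xen";
        /* memory region reserved for the grant table mechanism */
        reg = <0xb0000000 0x20000>;
        /* PPI that Xen uses to deliver event-channel notifications */
        interrupts = <1 15 0xf08>;
};
```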
Dom0 boots as it would if it were running natively: the devicetree is used to discover the hardware and the correct drivers are loaded, and Dom0 does not try to access interfaces that are not present. By finding the hypervisor node in the devicetree, Dom0 knows it is running on Xen and can initialize the PV backends.
### Xen on ARM: code size
The whitepaper includes a code-size table here. The takeaway: Xen on ARM is small (which is good).
### Porting Xen to a new SoC
If you already have a Dom0 kernel running on the SoC, porting Xen is easy: typically, all that is needed is one more UART driver, for Xen's own console.
To debug the interrupts, one can look into the function `do_IRQ()` in Xen. All interrupts are taken by Xen through the GIC and routed to `do_IRQ()`, where they are either handled directly by Xen, routed to guests, or blacklisted, as sketched below.
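A simplified sketch of that dispatch; the helper names are hypothetical, and only the overall flow follows the description above:

```c
#include <stdbool.h>

/* Hypothetical helpers -- Xen's real code works on irq_desc structures
 * and talks to the GIC driver; these names are illustrative only. */
bool irq_is_blacklisted(unsigned int irq);
bool irq_is_routed_to_guest(unsigned int irq);
void inject_irq_into_guest(unsigned int irq);
void handle_xen_internal_irq(unsigned int irq);

void do_IRQ(unsigned int irq)
{
    if (irq_is_blacklisted(irq))
        return;                        /* dropped */

    if (irq_is_routed_to_guest(irq))
        inject_irq_into_guest(irq);    /* forwarded to the owning domain */
    else
        handle_xen_internal_irq(irq);  /* Xen's own timer, UART, ... */
}
```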
### Porting an operating system to Xen on ARM
The only things needed are a few PV frontend drivers to get access to network, disk, console, etc. The PV frontends rely on:
* grant tables for page sharing
* XenBus for discovery
* event channels for notifications
Once the OS supports these basic building blocks, the next step is introducing the corresponding PV frontend drivers. The existing frontends in Linux can obviously be reused or serve as a reference.
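As a rough sketch of how the three building blocks fit together, here is what the connection setup of a Linux PV frontend can look like (error handling omitted; `connect_to_backend`, `my_handler` and the XenBus keys are placeholders modeled on existing frontends):

```c
#include <linux/interrupt.h>
#include <xen/grant_table.h>   /* grant tables: page sharing */
#include <xen/events.h>        /* event channels: notifications */
#include <xen/xenbus.h>        /* XenBus: discovery and configuration */

static int connect_to_backend(struct xenbus_device *dev,
                              unsigned long ring_gfn,
                              irq_handler_t my_handler)
{
    evtchn_port_t evtchn;
    int ref, irq;

    /* 1. Grant the backend (dev->otherend_id) access to the ring page. */
    ref = gnttab_grant_foreign_access(dev->otherend_id, ring_gfn, 0);

    /* 2. Allocate an event channel and bind a handler for backend
     *    notifications. */
    xenbus_alloc_evtchn(dev, &evtchn);
    irq = bind_evtchn_to_irqhandler(evtchn, my_handler, 0, "my-frontend", dev);

    /* 3. Publish both on XenBus so the backend can connect. */
    xenbus_printf(XBT_NIL, dev->nodename, "ring-ref", "%d", ref);
    xenbus_printf(XBT_NIL, dev->nodename, "event-channel", "%u", evtchn);

    return irq;
}
```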
### Mobile platforms and new PV protocols
It is possible to assign a device to a single VM (which can be a DomU) by remapping the corresponding MMIO regions and interrupts into it. No PV frontend/backend drivers are needed in that case.
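For illustration, such an assignment could be expressed in the guest's xl configuration file roughly as follows; the node path, addresses, and IRQ number are made up:

```
# devicetree node of the passed-through device
dtdev = [ "/soc/ethernet@fe300000" ]
# MMIO region as "base page frame,number of pages"
iomem = [ "0xfe300,1" ]
# physical IRQ routed to the guest
irqs  = [ 112 ]
```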