---
title: XRT_Physical-Function_I/O
author: bynx
tags: virtualization
---
# XRT: PCIe I/O
## Overview
The PCIe platform consists of two physical partitions: _an
immutable **Shell**_ partition and a _user-compiled **User**_ partition.
This design allows end users to perform _Dynamic Function eXchange_ (Partial Reconfiguration in classic FPGA terminology) in the well-defined **User** partition while the static **Shell** provides key infrastructure services.
Note: **Alveo shells assume the PCIe host (with access to PF0) is part of the Root of Trust.**
The following features reinforce the security of the U30 platform:
- Two (2) physical function (PF) shell designs
- Clearly classified trusted vs untrusted shell peripherals
- Signing of `xclbin`s
- AXI Firewall
- Well-defined compute kernel Execution Model
- No direct access to PCIe TLP from **User** partition
- Treating the **User** partition as an untrusted partition
## User Partition (1/2)
The **User** partition (also known as the PR region) _contains user-compiled binaries_.
- XRT uses Dynamic Function eXchange (DFX) to load user-compiled binaries into the **User** partition
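As a rough illustration of what this looks like from user space, here is a minimal sketch that loads an `xclbin` through XRT's legacy `xrt.h` C API (`xclOpen`, `xclLoadXclBin`). Header names, verbosity enums and link flags may differ between XRT releases, so treat it as a sketch rather than a reference:
```c
/* Sketch: load a user-compiled xclbin into the User partition via XRT.
 * Assumes the legacy xrt.h C API; link against the XRT core library
 * shipped with your release (e.g. -lxrt_core or -lxrt_coreutil). */
#include <stdio.h>
#include <stdlib.h>
#include <xrt.h>     /* xclOpen, xclLoadXclBin, xclClose */
#include <xclbin.h>  /* struct axlf: the xclbin container header */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <kernel.xclbin>\n", argv[0]);
        return 1;
    }

    /* Read the whole xclbin image into memory. */
    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror("fopen"); return 1; }
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    struct axlf *xclbin = malloc(size);
    if (!xclbin || fread(xclbin, 1, size, fp) != (size_t)size) {
        fprintf(stderr, "failed to read %s\n", argv[1]);
        return 1;
    }
    fclose(fp);

    /* Open device 0 through the user PF (xocl). The actual ICAP
     * programming, clock scaling and DFX isolation management happen
     * in xclmgmt on PF0, reached over the hardware mailbox. */
    xclDeviceHandle dev = xclOpen(0, NULL, XCL_INFO);
    if (xclLoadXclBin(dev, xclbin))
        fprintf(stderr, "xclbin download failed\n");

    xclClose(dev);
    free(xclbin);
    return 0;
}
```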
## Shell Partition (2/2)
The **Shell** partition provides basic infrastructure for the Alveo platform and has two physical functions:
1. Privileged `PF0`, also called `MGMT PF`
2. Non-privileged `PF1`, also called `USER PF`
It includes a hardened PCIe block which provides physical connectivity to the host PCIe bus via two physical functions. **Shell** is a trusted partition and for all practical purposes should be treated like an ASIC.
During system boot, **Shell** is loaded from the PROM. Once loaded, the Shell cannot be changed.
<figure>
<center>
<img src="https://xilinx.github.io/XRT/2020.2/html/_images/XSA-shell.svg" />
<figcaption>
<i>Figure 2: Data/Control path for Shell and User partitions</i>
</figcaption>
</center>
</figure>
In _Figure 2_, the **Shell** peripherals shaded blue can only be accessed from **PF0**, while those shaded violet can be accessed from **PF1** (the user physical function). From a PCIe topology point of view, _PF0 owns the device and performs supervisory actions on it._
**tl;dr: Peripherals shaded blue are trusted while those shaded violet are not.**
Alveo **Shell**s use a specialized IP called PCIe Demux, which routes PCIe traffic destined for **PF0** to the **PF0** AXI network and traffic destined for **PF1** to the **PF1** AXI network. _It is responsible for the necessary isolation between **PF0** and **PF1**._
Trusted peripherals include:
- ICAP for bitstream download (DFX)
- CMC for sensors and thermal management
- Clock Wizards for clock scaling
- QSPI Ctrl for PROM access (shell upgrades)
- DFX Isolation
- Firewall controls
- ERT UART
**Shell** provides a control path and a data path to the user-compiled image loaded on the **User** partition. The _Firewalls_ in the control and data paths protect the **Shell** from the untrusted **User** partition; e.g., if a slave in the DFX region has a bug or is malicious, the appropriate firewall steps in and protects the **Shell** from the failing slave as soon as a non-compliant AXI transaction is placed on the AXI bus.
Newer shell revisions have a feature called _PCIe Slave-Bridge (SB), which provides direct access to host memory from kernels in the User partition_. With this feature, kernels can initiate PCIe burst transfers from **PF1** without direct access to the PCIe bus. An AXI Firewall (SI) in the reverse direction protects the PCIe bus from non-compliant transfers.
### Shell's PF0: Mgmt Physical Function
XRT Linux kernel driver, `xclmgmt`, binds to Mgmt **PF0**. Mgmt **PF0** provides access to **Shell** components responsible for **privileged** operations.
---
#### Talk to me
**`xclmgmt` is the PCIe Kernel Driver for PF0 (Management Physical Function)**
---
The `xclmgmt` driver is organized into subdevices and handles the following functionality:
- User-compiled FPGA image (`xclbin`) download, which involves ICAP (bitstream download) programming, clock scaling and isolation logic management.
- Loading the firmware container called `xsabin`, which contains the PLP (for 2-RP platforms) and firmware for the embedded MicroBlazes. The embedded MicroBlazes perform the functionality of ERT and CMC.
- Access to in-band sensors: temperature, voltage, current, power, fan RPM, etc. (see the sysfs sketch after this list).
- AXI Firewall management in data and control paths. AXI firewalls protect shell and PCIe from untrusted user partition.
- Shell upgrade by programming the QSPI flash controller.
- Device reset and recovery upon detecting AXI firewall trips or explicit request from end user.
- Communication with the user PF driver `xocl` via a hardware mailbox. The protocol is defined in the Mailbox Inter-domain Communication Protocol.
- Interrupt handling for AXI Firewall and Mailbox HW IPs.
- Device DNA (unique ID) discovery and validation.
- DDR and HBM memory ECC handling and reporting.
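As an example of the sensor access above, `xclmgmt` typically exports in-band sensor readings as sysfs attributes under the management PF's PCIe device. The attribute path below is an assumption for illustration; real subdevice directories and attribute names vary by shell and XRT release:
```c
/* Sketch: read one in-band sensor exported by xclmgmt through sysfs.
 * The BDF, subdevice directory and attribute name below are assumptions
 * for illustration; they vary by card, shell and XRT release. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical path: FPGA die temperature from the CMC/XMC subdevice
     * of the management PF (PF0). */
    const char *attr =
        "/sys/bus/pci/devices/0000:65:00.0/xmc.u.1/xmc_fpga_temp";

    FILE *fp = fopen(attr, "r");
    if (!fp) { perror("fopen"); return 1; }

    long temp_c = 0;
    if (fscanf(fp, "%ld", &temp_c) == 1)
        printf("FPGA temperature: %ld C\n", temp_c);

    fclose(fp);
    return 0;
}
```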
### Shell's PF1: User Physical Function
XRT Linux kernel driver, `xocl`, binds to User **PF1**. User **PF1** provides access to **Shell** components responsible for **non-privileged** operations. It also provides access to compute units in user partition.
---
#### Talk to me
**The `xocl` driver allows user space to perform mmap on multiple entities, distinguished by page offset** (a sketch follows this callout):
- page offset == 0: whole user BAR is mapped
- page offset > 0 and <= 128: one CU reg space is mapped, offset is used as CU index
- page offset >= (4G >> PAGE_SHIFT): one BO is mapped, offset should be obtained from `drm_xocl_map_bo()`
---
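A hedged sketch of the offset convention above, assuming the `xocl` DRM render node path shown (which is system dependent); BO mappings additionally require the offset returned by `drm_xocl_map_bo()` rather than a hand-computed one:
```c
/* Sketch: mmap the entities exposed by xocl, distinguished by page
 * offset as listed above. The render-node path is an assumption; the
 * actual node index is system dependent. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* xocl is a DRM driver, so the user PF shows up as a render node. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    long page = sysconf(_SC_PAGESIZE);

    /* page offset == 0: the whole user BAR (first page mapped here). */
    void *bar = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* page offset N in 1..128: register space of CU index N. In a real
     * application a context on the CU is opened first via the XRT API. */
    unsigned cu_index = 1;
    void *cu_regs = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED,
                         fd, (off_t)cu_index * page);

    /* page offset >= (4G >> PAGE_SHIFT): a buffer object; that offset
     * must be obtained from drm_xocl_map_bo(), not computed by hand. */

    if (bar != MAP_FAILED) munmap(bar, page);
    if (cu_regs != MAP_FAILED) munmap(cu_regs, page);
    close(fd);
    return 0;
}
```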
The `xocl` driver is organized into subdevices and handles the following functionality, exercised through well-defined APIs in the `xrt.h` header file (a buffer-object sketch follows this list):
- Device memory topology discovery and device memory management. The driver provides well-defined abstraction of
buffer objects to the clients.
- XDMA/QDMA memory-mapped PCIe DMA engine programming, with an easy-to-use buffer migration API.
- Multi-process aware context management with concurrent access to device by multiple processes.
- Compute unit execution pipeline management with the help of hardware scheduler ERT. If ERT is not available
then scheduling is completely handled by `xocl` driver in software.
- Interrupt handling for PCIe DMA, Compute unit completion and Mailbox messages.
- Setting up of Address-remapper tables for direct access to host memory by kernels compiled into user partition.
Direct access to host memory is enabled by Slave Bridge (SB) in the shell.
- Buffer import and export via Linux DMA-BUF infrastructure.
- PCIe peer-to-peer buffer mapping and sharing over PCIe bus.
- Secure communication infrastructure for exchanging messages with the `xclmgmt` driver.
- Memory-to-memory (M2M) programming for moving data between device DDR, PL-RAM and HBM.
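A minimal sketch of the buffer-object and migration APIs referenced above, using the legacy `xrt.h` C calls (`xclAllocBO`, `xclMapBO`, `xclSyncBO`). Flag values and enum names here are from memory and may differ slightly between releases; real code would pick the memory bank from the discovered memory topology:
```c
/* Sketch: allocate a device buffer object, fill it from the host, and
 * migrate it across the PCIe DMA engine with the sync API. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <xrt.h>

int main(void)
{
    xclDeviceHandle dev = xclOpen(0, NULL, XCL_INFO);

    /* Allocate a 4 KiB BO. The flags argument (last) normally encodes
     * the memory bank taken from the device memory topology; 0 is used
     * here purely for illustration. */
    unsigned int bo = xclAllocBO(dev, 4096, 0, 0);

    /* Map the BO into host address space and fill it. */
    char *host_ptr = (char *)xclMapBO(dev, bo, true /* writable */);
    if (!host_ptr) { fprintf(stderr, "map failed\n"); return 1; }
    memset(host_ptr, 0xA5, 4096);

    /* Migrate host -> device, then device -> host (XDMA/QDMA DMA). */
    xclSyncBO(dev, bo, XCL_BO_SYNC_BO_TO_DEVICE, 4096, 0);
    xclSyncBO(dev, bo, XCL_BO_SYNC_BO_FROM_DEVICE, 4096, 0);

    /* Unmapping is omitted for brevity; freeing the BO and closing the
     * device release driver-side resources. */
    xclFreeBO(dev, bo);
    xclClose(dev);
    return 0;
}
```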
## Pass-Through Virtualization
In the pass-through virtualization deployment model, _the management physical function (PF0) is visible only to the host, while the user physical function (PF1) is visible to the guest VM_. The host considers the guest VM a hostile environment. End users in the guest VM may be root and may be running a modified implementation of the XRT `xocl` driver; the XRT `xclmgmt` driver does not trust the XRT `xocl` driver.
As described before, `xclmgmt` exposes the well-defined XCLMGMT (PCIe Management Physical Function) Driver Interfaces to the host. In a clean deployment, end users in the guest VM interact with a standard `xocl` using the well-defined XOCL (PCIe User Physical Function) Driver Interfaces.
As explained in the Shell section above, by design `xocl` has limited access to the violet-shaded Shell peripherals. _This ensures that users in the guest VM cannot perform any privileged operation such as updating the flash image or resetting the device._ A user in the guest VM can only perform the operations listed under the USER PF (PF1) section in the XRT and Vitis™ Platform Overview.
A guest VM user can potentially crash a compute unit in the User partition, deadlock the data path AXI bus, or corrupt device memory. A user with root access may compromise the VM's own memory, but none of this can bring down the host or the PCIe bus. _Host memory is protected by the system IOMMU._ Device reset and recovery is described below.
A user cannot load a malicious `xclbin` on the User partition since `xclbin` downloads are performed by the `xclmgmt` driver. `xclbin`s are passed to the host via a plugin-based MPD/MSD framework defined in the Mailbox Subdevice Driver. The host can add any extra checks necessary to validate `xclbin`s received from the guest VM.
This deployment model is ideal for the public cloud, where the host does not trust the guest VM. It is the prevalent deployment model for FaaS operators.
## Mailbox
The Mailbox core is used (a hypothetical message-layout sketch follows the list):
- for bi-directional IPC
- as a link between two, otherwise separate, processor systems
- to generate interrupts between the processors
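To make the idea of a framed request concrete, here is a purely hypothetical sketch of what a mailbox message could look like. The real layout is defined by the Mailbox Inter-domain Communication Protocol in the XRT source; nothing below is taken from it:
```c
/* Hypothetical illustration only: the real message layout is defined by
 * XRT's Mailbox Inter-domain Communication Protocol, not by this file.
 * It merely sketches the shape of a request travelling from the xocl
 * (PF1) side to the xclmgmt (PF0) side, e.g. "please load this xclbin". */
#include <stdint.h>

enum fake_mailbox_op {              /* hypothetical opcodes */
    FAKE_MB_REQ_LOAD_XCLBIN = 1,
    FAKE_MB_REQ_HOT_RESET   = 2,
    FAKE_MB_REQ_SENSOR_DATA = 3,
};

struct fake_mailbox_req {           /* hypothetical framing */
    uint32_t opcode;                /* one of fake_mailbox_op       */
    uint32_t flags;                 /* e.g. request vs. response    */
    uint64_t payload_len;           /* bytes of payload that follow */
    uint8_t  payload[];             /* opcode-specific body         */
};
```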
<figure>
<center>
<img class=center-cropped />
<i>Figure 3: <code>xclbin</code> download flow via Mailbox for <code>PF0</code> and <code>PF1</code></i>
</center>
<style>
.center-cropped {
scale: 0.75;
width: 1000px;
height: 600px;
background-position: center bottom;
background-repeat: no-repeat;
background-image: url('https://xilinx.github.io/XRT/2020.2/html/_images/sw-mailbox-msd-mpd-download.svg');
}
</style>
</figure>
## References
- https://xilinx.github.io/XRT/2020.2/html/index.html
<figure>
<center>
<img src="https://xilinx.github.io/XRT/master/html/_images/XRT-Architecture-Hybrid.svg" />
<figcaption>
<i>Figure _: Alveo U30 PCIe hybrid stack</i>
</figcaption>
</center>
</figure>
> They have a hardened PS subsystem with ARM APUs in the Shell. The PL fabric is exposed as the user partition. The devices act as PCIe endpoints to PCIe hosts such as x86_64 and PPC64LE. They have a two-physical-function architecture identical to other Alveo platforms. On these platforms the ERT subsystem runs on the APU.
> The Virtual Machine Monitor (VMM), also known as a hypervisor, creates
> and manages virtual machines. The VMM also enables the sharing of the
> physical I/O devices across the virtual platforms. In a software-based
> virtualization system, the VMM is involved in all datapath transactions
> (software-based switching), which consumes significant CPU bandwidth
> and thereby reduces the system throughput.
> On _Alveo PCIe_ platforms _xocl_ driver binds to user physical function and _xclmgmt_ driver binds to management physical function. The ioctls exported by xocl are described in [XOCL (PCIe User Physical Function) Driver Interfaces](https://xilinx.github.io/XRT/2020.2/html/xocl_ioctl.main.html#xocl-ioctl-main-rst) document and ioctls exported by xclmgmt are described in [XCLMGMT (PCIe Management Physical Function) Driver Interfaces](https://xilinx.github.io/XRT/2020.2/html/mgmt-ioctl.main.html#mgmt-ioctl-main-rst) document.
---
```graphviz
digraph hierarchy {
nodesep=0.5
fontsize=30
// PCIe x4 x4 Host
hw [
label="Bare-Metal Host: PCIe x4 x4";
shape="box3d";
width="10";
height="1";
fontsize="24";
fontcolor="white";
style="filled";
fillcolor="#8c008c";
constraint="true";
splines="line";
]
// Physical Function Drivers
"pf0" [
label="PF 0";
shape="box3d";
width="2";
height="4";
fontsize="24";
fontcolor="black";
style="filled";
fillcolor="lightblue";
]
"pf1" [
label="PF 1";
shape="box3d";
width="2";
height="4";
fontsize="24";
fontcolor="black";
style="filled";
fillcolor="lightblue";
]
vmm [
label="KVM (VMM)";
style="filled";
fillcolor="pink";
fontsize=30;
shape="box";
width="10";
height="1";
]
vm1 [
label="VM_1"
style=filled, fillcolor="lightblue", shape="oval", fontsize=28
]
vm2 [
label="VM_2"
style=filled, fillcolor="lightblue", shape="oval", fontsize=28
]
vm3 [
label="..."
style=filled, fillcolor="lightblue", shape="oval", fontsize=28
]
vm4 [
label="VM_n-1"
style=filled, fillcolor="lightblue", shape="oval", fontsize=28
]
vm5 [
label="VM_n"
style=filled, fillcolor="lightblue", shape="oval", fontsize=28
]
subgraph "cluster_pf-driver" {
{rank="source"; hw } //pf1 pf0}
hw -> { "pf0" "pf1" } [
//constraint="false";
//arrowhead=
]
pf0 -> hw [constraint="true" ]
pf1 -> hw [constraint="true" ]
bgcolor="lightyellow"
label="Shell Partition"
}
subgraph "cluster_usr-part" {
{rank="source"; hw vmm}
hw -> "vmm" [
label="The VMM also enables\nsharing of the physical I/O devices\nacross the virtual platforms.";
//constraint="false";
fontsize="24";
]
{rank="same"; vm1 vm2 vm3 vm4 vm5 }
vmm-> { vm1 vm2 vm3 vm4 vm5 }
    label = "\nUser Partition"
//Guest OS w/ Virtualized HW Access\n(e.g., vRAM, vGPU, vCPU, etc.)"
labelloc = "b"
bgcolor = "grey"
}
}
```