# Lab3 - OpenCL for U50 PCIe FPGA
## Table of Contents
:::warning
[toc]
:::
## Resource
[bol-edu github:course-lab_3](https://github.com/bol-edu/course-lab_3)
## Environment
Ubuntu : 20.04
Vitis : 2022.1
Vivado : 2022.1
## Before the Experiment
### Alveo u50 platform/XRT installation
In an `Ubuntu 20.04` and `Vitis 2022.1` environment, the required packages can be downloaded through the link below.
After downloading and extracting the files, install the packages one by one.
The download page and the installation commands are as follows.
### Download URLs
[Package Link](https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/alveo/u50.html)
### Installation Commands (Install in order)
```
cd ~/Desktop
sudo apt install ./xrt_202210.2.13.466_20.04-amd64-xrt.deb
sudo apt install ./xilinx-u50-gen3x16-xdma-5-202210-1-dev_1-3499627_all.deb
sudo apt install ./xilinx-u50-gen3x16-xdma-validate_5-3499627_all.deb
sudo apt install ./xilinx-sc-fw-u50_5.2.18-1.bf9ba46_all.deb
sudo apt install ./xilinx-cmc-u50_1.0.40-3398385_all.deb
sudo apt install ./xilinx-u50-gen3x16-xdma-base_5-3499627_all.deb
```
### Bash Shell Setting
After installing the Vitis tools as mentioned previously, the files will be placed in the `/tools/Xilinx` directory in the root of the Linux system, which is different from the `/opt/Xilinx` directory specified in the documentation. Be sure to pay special attention when verifying the `source` path.
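For reference, a typical shell setup on this system (assuming the default install locations, `/tools/Xilinx` for Vitis and `/opt/xilinx/xrt` for XRT; adjust the paths if yours differ) sources both tools before building or running:
```
source /tools/Xilinx/Vitis/2022.1/settings64.sh
source /opt/xilinx/xrt/setup.sh
```
These two lines can also be appended to `~/.bashrc` so that every new shell is ready for Vitis and XRT.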

### Install Additional Packages
Download `opencl-headers` and `gcc-multilib` for building the project using the following commands:
```
sudo apt install opencl-headers
sudo apt install gcc-multilib
```
## Experiment Block Diagram

Lab3 is a simple lab with the following features:
* Multiple kernels - KVConstAdd, KpB, KA, KB, KCalc
* Multiple buffers
* DataIn_1, DataIn_2, DataIn_3 located in host memory and moved to FPGA Global Memory
* BUF_KpB, BUF_KA, BUF_KB generated in FPGA Global Memory
* RES generated in FPGA Global Memory and transferred to Host Memory
* Command Queue that allows out-of-order operation (see the host-side sketch after this list)
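To illustrate the out-of-order command queue, here is a minimal host-side sketch using the plain OpenCL C API (the function and variable names, and the single migrate-then-run dependency, are assumptions for this example rather than the lab's exact host code). The queue is created with the out-of-order flag, and ordering is expressed explicitly with events, so independent transfers and kernels may overlap.
```clike=
#include <CL/cl.h>

/* Sketch only: context, device, kernel and buffer are assumed to be
 * created already; error checking is omitted for brevity.            */
void run_out_of_order(cl_context ctx, cl_device_id dev,
                      cl_kernel krnl, cl_mem buf)
{
    cl_int err;

    /* Out-of-order queue: commands start as soon as their event
     * dependencies are satisfied, which lets kernels run in parallel. */
    cl_command_queue q = clCreateCommandQueue(
        ctx, dev,
        CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE | CL_QUEUE_PROFILING_ENABLE,
        &err);

    /* Move an input buffer (e.g. DataIn_1) from host to FPGA global memory. */
    cl_event migrate_done;
    clEnqueueMigrateMemObjects(q, 1, &buf, 0, 0, NULL, &migrate_done);

    /* The kernel waits only on the migration it depends on; other kernels
     * enqueued on the same queue could execute concurrently.              */
    cl_event kernel_done;
    clEnqueueTask(q, krnl, 1, &migrate_done, &kernel_done);

    clWaitForEvents(1, &kernel_done);
    clReleaseEvent(migrate_done);
    clReleaseEvent(kernel_done);
    clReleaseCommandQueue(q);
}
```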
## Code Explanation
### Kernel Code
:::info
#### KVConstAdd
```clike=
void KVConstAdd(unsigned int Arg, int *V)
{
#pragma HLS INTERFACE m_axi port=V offset=slave bundle=gmem
#pragma HLS INTERFACE s_axilite port=Arg bundle=control
#pragma HLS INTERFACE s_axilite port=V bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
loop_st_1:
    for (int i=0; i<SIZE_DataIn_1; i++) {
        V[i] = V[i] + Arg;
    }
}
```
The above kernel function adds a constant to each element of the `V` array. The `V` array uses the AXI Master protocol for data transfer, while its base address, the computation argument `Arg`, and the return/control signals all use the AXI Lite protocol and share the same `control` bundle.
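On the host side, the scalar and the buffer are passed differently. The fragment below is a hypothetical illustration using the OpenCL C API (the names `kvconstadd` and `buf_V` are made up): the scalar `Arg` is passed by value and reaches the kernel through its s_axilite control registers, while `V` is a `cl_mem` handle to a buffer in FPGA global memory accessed over m_axi.
```clike=
#include <CL/cl.h>

/* Hypothetical host-side fragment; names are illustrative only. */
void set_KVConstAdd_args(cl_kernel kvconstadd, cl_mem buf_V, cl_uint arg)
{
    clSetKernelArg(kvconstadd, 0, sizeof(cl_uint), &arg);   /* unsigned int Arg */
    clSetKernelArg(kvconstadd, 1, sizeof(cl_mem), &buf_V);  /* int *V           */
}
```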
:::
:::warning
#### KpB
##### Version 1 - Data access directly from AXI Master
```clike=
void KpB(int *A, int *B, int *R) {
    int TMP_RES[SIZE_BUF_KpB];
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=B bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=B offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem
    for(int i=0; i < SIZE_BUF_KpB; i+=1) {
        TMP_RES[i] = A[i] + B[i];
    }
    for(int i=0; i < SIZE_BUF_KpB; i+=1) {
        R[i] = TMP_RES[i] % 3;
    }
}
```
The above kernel function adds each element of array `A` and array `B` and stores the result in the `TMP_RES` array. Then, each element in the `TMP_RES` array is divided by 3, and the remainder is stored in array `R`. The input arrays `A` and `B` and the output array `R` all use the AXI Master Protocol for data transfer. The returned control signal and the base addresses of all arrays use the AXI Lite Protocol for data transfer. Additionally, the ports using the AXI Master Protocol are configured to use the same bundle, and the ports using the AXI Lite Protocol are configured to use the same bundle as well.
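For a quick host-side sanity check, a software golden model of this kernel might look like the following sketch (the function name is hypothetical, and the buffer size is passed in explicitly instead of relying on `SIZE_BUF_KpB`):
```clike=
/* Hypothetical software reference for KpB: R[i] = (A[i] + B[i]) % 3. */
void KpB_golden(const int *A, const int *B, int *R, int size)
{
    for (int i = 0; i < size; i++)
        R[i] = (A[i] + B[i]) % 3;
}
```
Comparing the result buffer migrated back from FPGA global memory against this reference is a simple way to validate the kernel output.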
##### Version 2 - Data access from a buffer created in the kernel function & AXI Master Burst Transfer
```clike=
#include <string.h>   /* for memcpy */

void KpB(int *A, int *B, int *R) {
    int TMP_RES[SIZE_BUF_KpB];
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=B bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
//---------------------------- Different Part--------------------------------
#pragma HLS dataflow
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem max_read_burst_length=256 max_write_burst_length=256
#pragma HLS INTERFACE m_axi port=B offset=slave bundle=gmem max_read_burst_length=256 max_write_burst_length=256
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem max_read_burst_length=256 max_write_burst_length=256
    int A_tmp[SIZE_BUF_KpB], B_tmp[SIZE_BUF_KpB], R_tmp[SIZE_BUF_KpB];
    memcpy(A_tmp, A, SIZE_BUF_KpB * sizeof(int));
    memcpy(B_tmp, B, SIZE_BUF_KpB * sizeof(int));
    for(int i=0; i < SIZE_BUF_KpB; i+=1) {
        TMP_RES[i] = A_tmp[i] + B_tmp[i];
    }
    for(int i=0; i < SIZE_BUF_KpB; i+=1) {
        R_tmp[i] = TMP_RES[i] % 3;
    }
    memcpy(R, R_tmp, SIZE_BUF_KpB * sizeof(int));
//---------------------------------------------------------------------------
}
```
The optimized version of `KpB` uses the `#pragma HLS dataflow` directive in HLS, enabling the HLS tool to execute different data processing stages concurrently, which reduces latency and improves overall performance. Additionally, the AXI Master Burst Transfer feature is enabled, allowing multiple data items to be transferred in a single AXI Master transaction. A buffer is also allocated to facilitate data transfer using the AXI Master Protocol.
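A common way to get more overlap out of `#pragma HLS dataflow` is to split the kernel into explicit load / compute / store stages. The sketch below is an illustrative refactoring, not the lab's shipped code (the names `KpB_dataflow`, `load`, `compute`, `store` are assumptions, and `SIZE_BUF_KpB` is assumed to come from the kernel header): each sub-function becomes a concurrent process connected through the intermediate arrays.
```clike=
#include <string.h>

/* Illustrative load / compute / store decomposition of KpB under dataflow. */
static void load(const int *src, int *dst) {
    memcpy(dst, src, SIZE_BUF_KpB * sizeof(int));      /* burst read  */
}

static void compute(const int *A_tmp, const int *B_tmp, int *R_tmp) {
    for (int i = 0; i < SIZE_BUF_KpB; i++)
        R_tmp[i] = (A_tmp[i] + B_tmp[i]) % 3;
}

static void store(const int *R_tmp, int *dst) {
    memcpy(dst, R_tmp, SIZE_BUF_KpB * sizeof(int));    /* burst write */
}

void KpB_dataflow(int *A, int *B, int *R) {
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem max_read_burst_length=256
#pragma HLS INTERFACE m_axi port=B offset=slave bundle=gmem max_read_burst_length=256
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem max_write_burst_length=256
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=B bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
#pragma HLS dataflow
    int A_tmp[SIZE_BUF_KpB], B_tmp[SIZE_BUF_KpB], R_tmp[SIZE_BUF_KpB];
    load(A, A_tmp);
    load(B, B_tmp);
    compute(A_tmp, B_tmp, R_tmp);
    store(R_tmp, R);
}
```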
:::
:::success
#### KA
##### Version 1 - Kernel without array partition
```clike=
void KA(int *A, int *R)
{
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
loop_st_1:
    for (int i=0; i<SIZE_BUF_KA; i++) {
        R[i] = A[2*i]*3 + A[2*i+1]*5 + A[2*i+2]*7 + A[2*i+3]*9;
    }
}
```
The above kernel function sequentially reads the `A` array over the AXI Master protocol and writes each weighted sum to the `R` array, which the host then reads back. When the input data is buffered in on-chip BRAM, the Vitis synthesis tool can give a BRAM instance at most two ports, so the four reads needed for each result take at least two cycles of data access, which may become the performance bottleneck of the kernel.
##### Version 2 - Kernel with array partition
```clike=
#define USE_BURST_TRANSFER_ARRAY_PARTITION
void KA(int *A, int *R)
{
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
    int TMP_A[SIZE_DataIn_1];
#pragma HLS array_partition variable=TMP_A cyclic factor=4 dim=1
    for (int i=0; i<SIZE_DataIn_1; i++) TMP_A[i] = A[i];
loop_st_1:
    for (int i=0; i<SIZE_BUF_KA; i++) {
        R[i] = TMP_A[2*i]*3 + TMP_A[2*i+1]*5 + TMP_A[2*i+2]*7 + TMP_A[2*i+3]*9;
    }
}
```
The optimized version of `KA` uses the `#pragma HLS array_partition variable=TMP_A cyclic factor=4 dim=1` directive to partition `TMP_A` cyclically along its first dimension: elements whose indices leave the same remainder when divided by the factor of 4 are grouped into the same physical partition. Because the four reads `TMP_A[2*i]` through `TMP_A[2*i+3]` use consecutive indices, they always fall into four different partitions and can be issued in the same cycle, which improves data-access efficiency and overall kernel performance.
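As a concrete, purely illustrative check of that mapping (the small program below is not part of the lab), cyclic partitioning with `factor=4` places element `i` in partition `i % 4`, so any four consecutive indices cover all four banks:
```clike=
#include <stdio.h>

/* Illustrative only: show which bank each index of a cyclically
 * partitioned array (factor = 4) maps to.                        */
int main(void)
{
    const int factor = 4;
    for (int i = 0; i < 8; i++)
        printf("TMP_A[%d] -> partition %d\n", i, i % factor);
    /* Indices 0,4 -> bank 0; 1,5 -> bank 1; 2,6 -> bank 2; 3,7 -> bank 3,
     * so TMP_A[2*i] .. TMP_A[2*i+3] always hit four different banks.     */
    return 0;
}
```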
:::
:::warning
#### KB
##### Version 1 - Kernel without array partition
```clike=
void KB(int *A, int *R)
{
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
    int val;
    int TMP_BUF[SIZE_BUF_KpB];
    for (int i=0; i<SIZE_BUF_KpB; i++) {
        TMP_BUF[i] = A[i]+10;
    }
    for (int i=0; i<SIZE_BUF_KB; i++) {
        val = TMP_BUF[i]*3 + TMP_BUF[i+SIZE_RES]*5 + TMP_BUF[i+2*SIZE_RES]*7;
        R[i] = val;
    }
}
```
##### Version 2 - Kernel with array partition
```clike=
#define USE_BURST_TRANSFER_ARRAY_PARTITION
void KB(int *A, int *R)
{
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
    int val;
    int TMP_BUF[SIZE_BUF_KpB];
#pragma HLS array_partition variable=TMP_BUF block factor=3 dim=1
    for (int i=0; i<SIZE_BUF_KpB; i++) {
        TMP_BUF[i] = A[i]+10;
    }
    for (int i=0; i<SIZE_BUF_KB; i++) {
        val = TMP_BUF[i]*3 + TMP_BUF[i+SIZE_RES]*5 + TMP_BUF[i+2*SIZE_RES]*7;
        R[i] = val;
    }
}
```
Similar to kernel `KA`, kernel `KB` also benefits from array partitioning. In the partitioned version, the directive `#pragma HLS array_partition variable=TMP_BUF block factor=3 dim=1` splits `TMP_BUF` into three equal, contiguous blocks. Each output element needs `TMP_BUF[i]`, `TMP_BUF[i+SIZE_RES]`, and `TMP_BUF[i+2*SIZE_RES]`; since the access pattern implies that `TMP_BUF` spans three `SIZE_RES`-sized regions, these three reads land in different blocks and can be served in the same cycle, optimizing data access for the calculation of each element of the output `R` array.
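A small, purely illustrative sketch of that block mapping (it assumes, as the access pattern suggests, that the array size is `3*SIZE_RES`; the demo size of 4 is arbitrary):
```clike=
#include <stdio.h>

/* Illustrative only: block partitioning with factor = 3 cuts an array of
 * 3*SIZE_RES elements into three contiguous banks of SIZE_RES elements.  */
#define SIZE_RES_DEMO 4

int main(void)
{
    const int bank_size = SIZE_RES_DEMO;   /* (3*SIZE_RES)/3 elements per bank */
    for (int i = 0; i < SIZE_RES_DEMO; i++)
        printf("i=%d reads banks %d, %d, %d\n", i,
               i / bank_size,
               (i + SIZE_RES_DEMO) / bank_size,
               (i + 2 * SIZE_RES_DEMO) / bank_size);
    /* Every iteration reads banks 0, 1 and 2, so the three TMP_BUF loads
     * can be performed in parallel.                                      */
    return 0;
}
```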
:::
:::info
#### KCalc
```clike=
void KCalc(int *A, int *B, int *R)
{
#pragma HLS INTERFACE m_axi port=A offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=B offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=R offset=slave bundle=gmem
#pragma HLS INTERFACE s_axilite port=A bundle=control
#pragma HLS INTERFACE s_axilite port=B bundle=control
#pragma HLS INTERFACE s_axilite port=R bundle=control
#pragma HLS INTERFACE s_axilite port=return bundle=control
    int val1, val2;
    int TMP_R[SIZE_RES];
    int TMP_A[SIZE_RES], TMP_B[SIZE_RES];
    for (int i=0; i<SIZE_RES; i++) {
#pragma HLS PIPELINE
        TMP_A[i] = A[i]; TMP_B[i] = B[i];
    }
    for (int i=0; i<SIZE_RES; i++) {
        val1 = (TMP_A[i] - TMP_B[i]) * (TMP_A[i] + TMP_B[i]);
        if (val1 >= 0)
            val2 = val1 % 3;
        else
            val2 = (val1 % 6) * val1;
        TMP_R[i] = val2;
    }
    for (int i=0; i<SIZE_RES; i++) {
#pragma HLS PIPELINE
        R[i] = TMP_R[i];
    }
}
```
In the above kernel function, the process of copying data from one array to another has been optimized. By using the directive `#pragma HLS PIPELINE` in HLS, the HLS tools enable pipelining, allowing this loop to process multiple data items concurrently in hardware, thereby enhancing performance. The pipeline stages are determined by various factors, including the structure of the loop, data dependencies, and available resources. The HLS compiler automatically divides the corresponding pipeline stages for optimization.
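For reference, the pipeline directive also accepts an explicit initiation interval. The minimal loop below is only an illustration (not part of the lab code): `II=1` asks the tool to start a new iteration every clock cycle, and the synthesis log reports if dependencies or resource limits prevent that target from being met.
```clike=
/* Illustrative copy loop with an explicit initiation interval. */
void copy_pipelined(const int *src, int *dst, int n)
{
    for (int i = 0; i < n; i++) {
#pragma HLS PIPELINE II=1
        /* With II=1 the loop issues one read and one write per cycle
         * once the pipeline is filled.                                */
        dst[i] = src[i];
    }
}
```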
:::
### Host Function
1. > Check command line arguments
2. > Detect target platform and target device in a system, create context and command queue.
3. > Create Program and Kernel
4. > Prepare data to run kernel
5. > Set kernel arguments and run the application
6. > Process output results
7. > Custom profiling (a sketch of event-based timing follows this list)
8. > Release allocated resources
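For step 7, one common approach (shown here as a hedged sketch with the plain OpenCL C API; `print_kernel_time` and its arguments are illustrative, not the lab's exact host code) is to time a kernel with OpenCL profiling events, which requires the command queue to have been created with `CL_QUEUE_PROFILING_ENABLE`.
```clike=
#include <CL/cl.h>
#include <stdio.h>

/* Illustrative custom profiling of one kernel invocation. */
void print_kernel_time(cl_command_queue q, cl_kernel krnl)
{
    cl_event ev;
    clEnqueueTask(q, krnl, 0, NULL, &ev);
    clFinish(q);

    cl_ulong start = 0, end = 0;   /* device timestamps in nanoseconds */
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(start), &start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(end), &end, NULL);
    printf("kernel execution time: %.3f ms\n", (end - start) / 1e6);
    clReleaseEvent(ev);
}
```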
## Hardware Timeline Analysis of Kernels
### Opt1 - Baseline

### Opt2 - Kernel Parallel

### Opt3 - Data Burst

### Opt4 - Array Partition
