computer-arch, jserv
This project is integrated with the RISC-V instruction set simulator rv32emu.
Several computer science concepts come up in this project; I will try to explain each of them and give examples.
Last but not least, the goal is to optimize the performance of 2D line drawing running on RV32IM by replacing floating-point arithmetic with its fixed-point counterpart.
Along the way, I practice generating an ELF (Executable and Linkable Format) file with the GNU toolchain for RISC-V and executing the program on rv32emu, a RISC-V instruction set emulator.
Floating point is one way to represent numbers with a fractional part in a computer; IEEE 754 is the format commonly used today.
In the standard, part of the data space is used to store the position of the radix point. Since the radix point is not fixed, the representation is called floating point.
In the IEEE 754 format, the value of a 32-bit single-precision floating-point number is represented as follows:

$$(-1)^{\text{Sign}} \times 1.F \times 2^{\text{Exp}-127}$$

The leading 1 in this formula represents the first significant bit; it is called the *hidden bit*, since it does not have to be stored in the data space.
Sign | Exp | F |
---|---|---|
1 bit | 8 bits | 23 bits |
The exponent field is stored with a bias of 127, hence a stored value of 127 corresponds to an actual exponent of 0.
The C data type `float` follows this IEEE 754 32-bit single-precision format, and a short program can print the stored fields to verify it.
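The original program and its output are not reproduced here; the following is a minimal sketch of my own that extracts the three fields of a `float` (the expected output is noted in a comment):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = -6.25f;                      /* -6.25 = -1.5625 x 2^2 */
    uint32_t bits;

    memcpy(&bits, &f, sizeof bits);        /* reinterpret the 32 stored bits */

    uint32_t sign = bits >> 31;            /* 1 bit                          */
    uint32_t exp  = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127          */
    uint32_t frac = bits & 0x7FFFFF;       /* 23-bit fraction, no hidden bit */

    printf("sign=%u exp=%u (actual %d) frac=0x%06X\n",
           (unsigned)sign, (unsigned)exp, (int)exp - 127, (unsigned)frac);
    /* prints: sign=1 exp=129 (actual 2) frac=0x480000 */
    return 0;
}
```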
There is an online IEEE 754 converter that provides a convenient way to verify how various values are stored in IEEE 754.
Compared with floating point, the fixed-point representation is more intuitive. If the range of the values to be computed is known in advance and limited to a small range, fixed-point arithmetic can use its bits more effectively.
Take a 32-bit data space for example:
Since the position of the radix point is fixed, the interval between all representable values is also fixed. Taking the previous example to explain: 0b0.001 is the smallest interval between values, that is, 0.125 ($2^{-3}$). Usually a fixed-point format pre-determines how many bits are used for the fractional part, and values are then scaled accordingly.
In essence, fixed-point numbers are just integers, which means fast calculations can be performed with a general ALU, without a floating-point unit.
Since a fixed-point number itself does not record where the radix point is, we use the Q format to fix that convention at the beginning.
For a 32-bit signed fixed-point number, the Q format can be written as Qm.n:

sign bit | integer bits | fraction bits |
---|---|---|
1 | m | n |
For a 32-bit unsigned fixed-point number, the Q format can be written as UQm.n:

integer bits | fraction bits |
---|---|
m | n |
Their precision can be expressed mathematically in terms of n: the resolution is $2^{-n}$.
I have implemented the important fixed-point operations for this project; please check the custom library `qformat.h` for the source code.

To convert a real value $x$ into fixed point, scale it by $2^n$ and round: $X = \operatorname{round}(x \times 2^n)$, where $X$ is the fixed-point value. In other words, we can think of fixed-point numbers as integers scaled by $2^n$.
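As a sketch of that convention (the macro names and the value of Q here are illustrative, not necessarily the ones used in `qformat.h`):

```c
#include <stdint.h>

#define Q 20                 /* number of fraction bits, e.g. Q11.20 */
typedef int32_t qfmt_t;      /* 32-bit signed fixed-point value */

/* real -> fixed: scale by 2^Q and round to the nearest integer */
#define TO_FIXED(x)  ((qfmt_t)((x) * (1 << Q) + ((x) >= 0 ? 0.5 : -0.5)))

/* fixed -> real: divide the integer back down by 2^Q */
#define TO_FLOAT(x)  ((float)(x) / (1 << Q))
```

For example, with Q = 20, `TO_FIXED(0.125)` stores the integer 131072, i.e. 0.125 × 2^20.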
Suppose the two fixed-point operands both have the same precision $2^{-n}$:

For addition, the precision does not change after the computation; we simply add the two numbers directly and check for overflow. Since the value is stored in two's complement (the highest bit carries the sign), subtraction does not need a separate implementation; it is just addition of a negated operand, as sketched below.
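A minimal sketch of a saturating fixed-point addition (the names are mine; `qformat.h` may differ):

```c
#include <stdint.h>

typedef int32_t qfmt_t;

/* Addition: same Q format in, same Q format out; widen to 64 bits so an
 * overflow can be detected and clamped instead of wrapping around. */
static qfmt_t q_add(qfmt_t a, qfmt_t b)
{
    int64_t sum = (int64_t)a + b;

    if (sum > INT32_MAX)
        return INT32_MAX;
    if (sum < INT32_MIN)
        return INT32_MIN;
    return (qfmt_t)sum;
}

/* Subtraction is just addition of the negated operand in two's complement
 * (the INT32_MIN corner case is ignored in this sketch). */
static qfmt_t q_sub(qfmt_t a, qfmt_t b)
{
    return q_add(a, -b);
}
```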
Multiplying two fixed-point numbers doubles the scaling factor (the raw product carries $2^{2n}$), so we need to bring it back to the single $2^n$ scale. Because the scaling factor is a power of two, multiplication by $2^n$ can be expressed as a left shift `<< n` in binary, and division by $2^n$ as a right shift `>> n`.

Before rescaling we can also take care of rounding: since adjacent bits differ from each other by a factor of two, simply adding 1 at bit $n-1$ before shifting right by $n$ rounds the result to the nearest representable value.
Considering efficiency, division is avoided as much as possible; when a division by a power of two is needed, a right shift (e.g. `>> 1` to divide by 2) is very practical. A sketch of multiplication and division with rounding follows.
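The names below are illustrative (not necessarily those in `qformat.h`), and a 64-bit intermediate is used for clarity:

```c
#include <stdint.h>

#define Q 20                 /* number of fraction bits */
typedef int32_t qfmt_t;

/* Multiplication: the raw product carries a 2^(2Q) scale; add 1 at bit Q-1
 * for round-to-nearest, then shift right by Q to restore the 2^Q scale.
 * (Relies on arithmetic right shift for negative values, as GCC provides.) */
static qfmt_t q_mul(qfmt_t a, qfmt_t b)
{
    int64_t tmp = (int64_t)a * b;
    tmp += (int64_t)1 << (Q - 1);
    return (qfmt_t)(tmp >> Q);
}

/* Division: pre-scale the dividend by 2^Q so the quotient keeps the 2^Q
 * scale; adjust by half the divisor (with matching sign) for rounding. */
static qfmt_t q_div(qfmt_t a, qfmt_t b)
{
    int64_t tmp = (int64_t)a << Q;

    if ((tmp >= 0) == (b >= 0))
        tmp += b / 2;
    else
        tmp -= b / 2;
    return (qfmt_t)(tmp / b);
}
```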
To verify the performance difference between floating-point and fixed-point arithmetic, in this section I wrote several simple programs to test each operation (`+`, `*`, `/`).
In order to obtain information about the CPU, three pairs of RISC-V pseudo-instructions are introduced in The RISC-V Instruction Set Manual, section 10.1 "Base Counters and Timers":
- `RDCYCLE` and `RDCYCLEH`: CPU cycle counter. The `RDCYCLE` pseudoinstruction reads the low XLEN bits of the cycle CSR, which holds a count of the number of clock cycles executed by the processor core on which the hart is running, from an arbitrary start time in the past. `RDCYCLEH` is an RV32I-only instruction that reads bits 63–32 of the same cycle counter.
- `RDTIME` and `RDTIMEH`: timer. The `RDTIME` pseudoinstruction reads the low XLEN bits of the time CSR, which counts wall-clock real time that has passed from an arbitrary start time in the past. `RDTIMEH` is an RV32I-only instruction that reads bits 63–32 of the same real-time counter.
- `RDINSTRET` and `RDINSTRETH`: instruction counter. The `RDINSTRET` pseudoinstruction reads the low XLEN bits of the instret CSR, which counts the number of instructions retired by this hart from some arbitrary start point in the past. `RDINSTRETH` is an RV32I-only instruction that reads bits 63–32 of the same instruction counter.
The testing program is a `for` loop that repeats a specific arithmetic operation 1000 times; take addition for instance (a sketch follows):
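The exact benchmark source is not shown here; a minimal sketch of the floating-point addition test (variable names are mine):

```c
/* Repeat one floating-point addition 1000 times; volatile keeps the compiler
 * from folding the loop away even with optimization disabled. */
volatile float fa = 1.5f, fb = 2.25f, fsum;

int main(void)
{
    for (int i = 0; i < 1000; i++)
        fsum = fa + fb;
    return 0;
}
```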
Just for testing, compiler optimization is disabled (no optimization flag is passed).
In `test.s`, I insert these pseudo-instructions to obtain the related counter values.
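For reference, the same counters can also be read from C with inline assembly instead of editing the assembly by hand; a sketch (RV32, low 32 bits only):

```c
static inline unsigned int read_cycle(void)
{
    unsigned int v;
    __asm__ volatile ("rdcycle %0" : "=r"(v));    /* cycle CSR, bits 31-0   */
    return v;
}

static inline unsigned int read_time(void)
{
    unsigned int v;
    __asm__ volatile ("rdtime %0" : "=r"(v));     /* time CSR, bits 31-0    */
    return v;
}

static inline unsigned int read_instret(void)
{
    unsigned int v;
    __asm__ volatile ("rdinstret %0" : "=r"(v));  /* instret CSR, bits 31-0 */
    return v;
}
```

Reading a counter before and after the loop and subtracting the two values gives the numbers reported in the tables below.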
Although the floating-point operation looks like a single, straightforward statement in the C code, in `test.s` I found that it is compiled into a call to a floating-point arithmetic routine (RV32IM has no floating-point instructions, so soft-float library calls are used). Finally, run `compare.elf` in rv32emu.

Addition:

Count | Float | Fixed |
---|---|---|
CYCLE | 67250 | 46477 |
TIME | 6890 | 4492 |
INSTR | 67115 | 46594 |

Multiplication:

Count | Float | Fixed |
---|---|---|
CYCLE | 281550 | 229716 |
TIME | 48534 | 22477 |
INSTR | 281685 | 229806 |

Division:

Count | Float | Fixed |
---|---|---|
CYCLE | 513897 | 464903 |
TIME | 63314 | 46907 |
INSTR | 514032 | 465011 |
I am not sure what the unit of TIME is:
Clock rate (CYCLE/TIME) | Float | Fixed |
---|---|---|
Add | 9.761 | 10.347 |
Mul | 5.801 | 10.220 |
Div | 8.117 | 9.911 |
The reason is uncertain; perhaps this indicates that the critical path of the FPU pipeline has a larger delay?
CPI (CYCLE/INSTR) | Float | Fixed |
---|---|---|
Add | 1.002 | 0.997 |
Mul | 1.000 | 1.000 |
Div | 1.000 | 1.000 |
Convert `line.c` to fixed-point arithmetic and compare. After adjusting the parameter Q in `qformat.h`, I found that `line.c` can use fixed-point numbers up to the Q11.20 format (the most fraction bits) without overflow.
The rendered output was compared for each of the following formats:
- The result of the original floating-point arithmetic
- Q27.4
- Q25.6
- Q23.8
- Q17.10
- Q11.20
- Q9.22 (overflow occurs)
In addition to the basic arithmetic operations, other functions also need to be implemented.

The previous implementation wasted Q/2 bits of precision, because it shifted the result left before returning it. Therefore I fixed the code and tried to avoid using a 64-bit buffer.
Because the sine and cosine of the same angle are needed at the same time, computing them together reduces the calculation cost. The sine and cosine of the starting angle are exact, known values; using the half-angle formulas we obtain the sine and cosine of the successively halved angles, and the angle-difference formulas are then used to approach the target angle.
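As a plain-`double` sketch of this idea (not the fixed-point version, and with names of my own): start from an angle whose sine and cosine are exact, halve it repeatedly with the half-angle formulas, and greedily accumulate the pieces toward the target angle with the angle-sum formulas.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Approximate sin(x) and cos(x) for x in [0, pi) using only square roots:
 * each iteration halves the reference angle and, if it still fits, adds it
 * to the accumulated angle with the angle-sum formulas. */
static void approx_sincos(double x, double *s_out, double *c_out)
{
    double angle = M_PI, s = 0.0, c = -1.0;  /* exact: sin(pi)=0, cos(pi)=-1 */
    double acc = 0.0, sa = 0.0, ca = 1.0;    /* accumulated angle, its sin/cos */

    for (int k = 0; k < 30; k++) {
        /* half-angle formulas; valid because angle stays in (0, pi] */
        double sh = sqrt((1.0 - c) / 2.0);
        double ch = sqrt((1.0 + c) / 2.0);
        angle /= 2.0;
        s = sh;
        c = ch;

        if (acc + angle <= x) {              /* this piece still fits */
            double ns = sa * c + ca * s;     /* sin(acc + angle) */
            double nc = ca * c - sa * s;     /* cos(acc + angle) */
            sa = ns;
            ca = nc;
            acc += angle;
        }
    }
    *s_out = sa;
    *c_out = ca;
}

int main(void)
{
    double s, c;
    approx_sincos(1.0, &s, &c);
    printf("%f %f\n", s, c);                 /* ~0.841471 0.540302 */
    return 0;
}
```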
The result is not precise enough when the angle is close to the x and y axes:

So I thought to myself: "maybe sqrt has some error when dealing with values close to 1", so I modified sqrt so that results approximating 1 are returned as exactly 1. Fortunately, my guess was correct; the output is almost indistinguishable from the original image.
Count | float (origin) | Fixed (mine) |
---|---|---|
CYCLE | 1732706067 | 2020806114 |
TIME | 112423500 | 76953173 |
INSTR | 1732706209 | 2020806256 |
The execution time of the fixed-point version is shorter, but its cycle and instruction counts are larger. I am not sure whether this is because of the extra arithmetic function calls.
Count | float (origin) | Fixed (mine) |
---|---|---|
CYCLE | 1652620882 | 1644796854 |
TIME | 95192937 | 73433731 |
INSTR | 1652621024 | 1644796996 |
After basic optimizations, fixed-point arithmetic outperforms floating point in all three metrics.
Count | float (origin) | Fixed (mine) |
---|---|---|
CYCLE | 1058757473 | 1521690262 |
TIME | 48471768 | 50361062 |
INSTR | 1058757615 | 1521690404 |
Contrary to expectations, the further optimized version did not outperform floating point, probably because at that moment I had not yet implemented `cos` and `sin` in fixed point.
Later I checked the performance of my fixed-point sqrt function; it is far slower than `sqrtf()`:
Count | Float | Fixed |
---|---|---|
CYCLE | 228442 | 462650 |
For now it is not clear to me whether there is a better algorithm for sqrt in fixed point; one alternative is sketched below.
READ: 從 √2 的存在談開平方根的快速運算 (on fast computation of square roots, starting from the existence of √2)
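The bit-by-bit (digit-by-digit) integer square root needs only shifts, additions and comparisons; a sketch in Q format (the names and the value of Q are illustrative, and it uses a 64-bit intermediate for clarity even though the project tries to avoid one):

```c
#include <stdint.h>

#define Q 20
typedef int32_t qfmt_t;

/* Bit-by-bit square root: we want r with (r / 2^Q)^2 ~= x / 2^Q,
 * i.e. r = floor(sqrt(x << Q)), computed without multiplications. */
static qfmt_t q_sqrt(qfmt_t x)
{
    if (x <= 0)
        return 0;

    uint64_t n = (uint64_t)(uint32_t)x << Q;
    uint64_t root = 0;
    uint64_t bit = (uint64_t)1 << 62;   /* highest power of four we may need */

    while (bit > n)
        bit >>= 2;

    while (bit != 0) {
        if (n >= root + bit) {
            n -= root + bit;
            root = (root >> 1) + bit;
        } else {
            root >>= 1;
        }
        bit >>= 2;
    }
    return (qfmt_t)root;
}
```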
Count | float (origin) | Fixed (mine) |
---|---|---|
CYCLE | 1058757473 | 1082171119 |
TIME | 48471768 | 41783101 |
INSTR | 1058757615 | 1082171261 |
First we consider drawing a line on a raster grid, where we restrict the allowable slopes of the line to the range $0 \le m \le 1$.
If we further restrict the line-drawing routine so that it always increments x as it plots, it becomes clear that, having plotted a point at (x,y), the routine has a severely limited range of options as to where it may put the next point on the line:
Under the performance constraint, all calculations must be done in integers. Since we assume that $0 \le m \le 1$, at every step of the algorithm $x$ always advances by one pixel toward the destination. In this situation, the main question of the algorithm is: when should $y$ also increase by one?

Bresenham's algorithm tells us that at every step we can keep an error term $\epsilon$: after plotting $(x_k, y_k)$, the true line passes through $y_k + \epsilon + m$ at $x = x_k + 1$. Since the real position is closer to $y_k + 1$ when $\epsilon + m \ge 0.5$ (and we can only calculate in integers), we multiply the test by $2\,\Delta x$ and substitute $m = \Delta y / \Delta x$, which gives the integer condition $2(\bar{\epsilon} + \Delta y) \ge \Delta x$ with $\bar{\epsilon} = \epsilon \, \Delta x$. Considering all of these situations, the algorithm can be implemented in C as follows.
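The original code block is not reproduced here; a minimal integer-only sketch for slopes in [0, 1], with `plot()` standing in for whatever pixel-writing routine the project uses:

```c
#include <stdio.h>

/* Placeholder pixel routine; the real project writes into a framebuffer. */
static void plot(int x, int y)
{
    printf("(%d, %d)\n", x, y);
}

/* Bresenham line for 0 <= dy <= dx (slope in [0, 1]); eps is the error term
 * already scaled by dx, so the test 2 * eps >= dx stays in integers. */
static void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int eps = 0;
    int y = y0;

    for (int x = x0; x <= x1; x++) {
        plot(x, y);
        eps += dy;
        if (2 * eps >= dx) {   /* the true line is closer to y + 1 */
            y++;
            eps -= dx;
        }
    }
}

int main(void)
{
    bresenham_line(0, 0, 10, 4);
    return 0;
}
```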
A signed distance function (SDF) is a mathematical way to express the distance to the surface of an object. It divides space into two regions: the object's interior and its exterior. Distance values inside the object are negative, while distance values outside are positive; such distance values are called "signed distances". SDFs are commonly used to render objects in 3D environments, and for simulation, collision detection, etc.
In this project, we use the concept of SDF to calculate the distance between each pixel and a given line segment:
Because we want the distance from the line segment rather than from its infinite line equation, we constrain the projection parameter $t$ such that $0 \le t \le 1$.
The variable `r` indicates the "sphere of influence" of the line segment. If a pixel is within this range, it is considered to be in the interior of the line segment, and a negative distance is returned.
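A float sketch of such a segment SDF (the project's fixed-point version replaces the float operations with their Q-format counterparts; the names here are mine):

```c
#include <math.h>

static float clampf(float v, float lo, float hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Signed distance from point (px, py) to the segment (ax, ay)-(bx, by) with
 * "radius" r: negative inside the capsule of radius r, positive outside. */
float segment_sdf(float px, float py,
                  float ax, float ay,
                  float bx, float by, float r)
{
    float pax = px - ax, pay = py - ay;
    float bax = bx - ax, bay = by - ay;

    /* projection of p onto the segment, clamped so 0 <= t <= 1 */
    float t = clampf((pax * bax + pay * bay) / (bax * bax + bay * bay),
                     0.0f, 1.0f);

    float dx = pax - t * bax;
    float dy = pay - t * bay;

    return sqrtf(dx * dx + dy * dy) - r;
}
```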
With the axis-aligned bounding box (AABB) optimization, we only evaluate the SDF for pixels inside the bounding box containing the line segment, instead of visiting every pixel of the image for each segment; a sketch follows.
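A sketch of that idea, with the SDF from the previous sketch inlined and with placeholder image dimensions:

```c
#include <math.h>

#define WIDTH  512          /* assumed image size, for illustration only */
#define HEIGHT 512

static unsigned char image[HEIGHT][WIDTH];

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Evaluate the SDF only inside the segment's bounding box (padded by r),
 * instead of over every pixel of the whole image. */
void draw_segment(float ax, float ay, float bx, float by, float r)
{
    int x0 = clampi((int)floorf(fminf(ax, bx) - r), 0, WIDTH - 1);
    int x1 = clampi((int)ceilf (fmaxf(ax, bx) + r), 0, WIDTH - 1);
    int y0 = clampi((int)floorf(fminf(ay, by) - r), 0, HEIGHT - 1);
    int y1 = clampi((int)ceilf (fmaxf(ay, by) + r), 0, HEIGHT - 1);

    float bax = bx - ax, bay = by - ay;
    float len2 = bax * bax + bay * bay;

    for (int y = y0; y <= y1; y++) {
        for (int x = x0; x <= x1; x++) {
            /* same segment SDF as above, inlined */
            float pax = x - ax, pay = y - ay;
            float t = (pax * bax + pay * bay) / len2;
            t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
            float dx = pax - t * bax, dy = pay - t * bay;
            if (sqrtf(dx * dx + dy * dy) - r < 0.0f)
                image[y][x] = 255;   /* pixel is inside the segment */
        }
    }
}
```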