# Implement Eytzinger Binary Search with Ripes
By SC Lin (P76105059)
###### tags: `Computer Architecture` `Term Project` `RISC-V`
## Binary Search & Caching
Binary search is one of the fastest search algorithms, performing searches in O(log n) time in the worst case. It is, however, limited by its random-access nature. In an ideal world where memory access is as fast as arithmetic instructions this would not be an issue, but our CPU architectures are built upon cache-based memory hierarchies. Caching reads sequential blocks of data from memory along with the actually requested data. Binary search, whose access pattern makes large jumps that halve with each iteration, is therefore not cache friendly. The [Eytzinger Binary Search](https://algorithmica.org/en/eytzinger) has been proposed as a modified version better suited for modern architectures.
## Eytzinger Binary Search
The concept is rather simple: suppose the root of the ordered tree is indexed as 1; the two children of a node at index k are then indexed 2k and 2k+1, as shown below:

Eytzinger indexing. [source](https://algorithmica.org/en/eytzinger)
We can see that the values used in each comparison stage (same colour) are grouped in a contiguous arrangement, thereby improving cache performance.
### Eytzinger transformation
Thus, we have an additional function that performs the Eytzinger transform on an array. The code is adapted to assembly from this [C code](https://algorithmica.org/en/eytzinger):
```
# s0 = arr base, a2 = length in bytes, a3 = eyt base
# a4 = i (byte offset into arr), a5 = k (byte offset into eyt)
EYTZINGER:
li a4 0                # i = 0
li a5 4                # k = 1 (word offset 4)
loop:                  # in-order traversal of the implicit tree
blt a2 a5 end          # stop once k is past the last element
# 1st child call: eytzinger(2k)
addi sp sp -8
sw ra 0(sp)
sw a5 4(sp)
slli a5 a5 1           # k = 2k
jal loop
lw a5 4(sp)
# assign array element: eyt[k] = arr[i++]
add a6 a4 s0           # &arr[i]
lw a7 0(a6)
add a6 a5 a3           # &eyt[k]
sw a7 0(a6)
addi a4 a4 4           # i++
# 2nd child call: eytzinger(2k+1)
sw a5 4(sp)
slli a5 a5 1
addi a5 a5 4           # k = 2k+1
jal loop
lw ra 0(sp)
lw a5 4(sp)
addi sp sp 8
end:
ret
```
The [author](https://algorithmica.org/en/eytzinger) also points out that "despite being recursive, this is actually a really fast implementation as all memory reads are sequential", though one can observe that the writes are still non-sequential.
Testing the algorithm, our remapped array is:

This corresponds to the binary tree shown.
We then implement the binary search part of the algorithm, also based on the [C code of the same author](https://algorithmica.org/en/eytzinger). The original binary iterations are likewise converted to the Eytzinger form, where the left and right subintervals of a regular binary search are replaced with descents to 2k or 2k+1.
```
# a1 = x (search key), a2 = length in bytes, a3 = eyt base
BINARY:
li a0 4                # k = 1 (word offset 4)
loop_bin:
blt a2 a0 end_bin      # while k <= n
slli a5 a0 1           # 2*k
add a6 a0 a3           # &eyt[k]
lw a7 0(a6)
sub a6 a7 a1           # eyt[k] - x
srli a6 a6 31          # sign bit = (eyt[k] < x)
slli a6 a6 2           # scale to word offset
add a0 a5 a6           # k = 2*k + (eyt[k] < x)
j loop_bin
end_bin:
srli a0 a0 2           # remove memory offset
ffs:                   # k >>= ffs(~k): shift out trailing 1-bits plus one more
andi a6 a0 0x1
srli a0 a0 1
bne a6 x0 ffs
ret
```
The author optimised the code to be as CPU friendly as possible by removing the branching present in the original binary search. This makes sense, as the branching pattern is effectively random (it depends on the search key) and thus won't benefit from branch prediction. Since the two Eytzinger descent targets only differ by a +1, the author proposed replacing the `+1` with `+(eyt[k] < x)`, which, while making up most of the loop body, avoids branching and improves pipelining. The outer while-loop does still benefit from branch prediction, as it is always not-taken (in our assembly code) until the final iteration.
```
.data
arr: .word 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
len: .word 60 # 15 * 4 bytes
eyt: .word 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
delim: .string "\n"
.text
main:
la s0 arr
lw a2 len
la a3 eyt
jal EYTZINGER
li a1 1                # first search key
li s1 16
loop_main:             # search for each number in turn
jal BINARY
li a7 34               # print a0 in hex
ecall
la a0 delim
li a7 4                # print string
ecall
addi a1 a1 1
blt a1 s1 loop_main
li a7 10               # exit
ecall
```
Running the search for values 1-15 with the code above, we have the correct indices returned by the code:
```
0x0000
0x0001
0x0002
0x0003
0x0004
0x0005
0x0006
0x0007
0x0008
0x0009
0x000a
0x000b
0x000c
0x000d
0x000e
Program exited with code: 0
```
### Cache Performance
We then consider the caching performance. The first ~500 cycles correspond to the Eytzinger transformation:

The cache hit rate already reaches over 90%.
When we perform a single binary search:

The hit rate goes over 98%.
Further binary searches average a 100% hit rate:

But this is most likely due to the small size of the array, so let us increase it:
|Size| 31 | 63 | 127 |255| 511 |
|----|----|----|-----|---|-----|
|Hit rate (first search, %)| 98 | 96 | 95 | 92 | 92 |
The hit rate seems to level out at around 92% with increasing size, which is still very good. Compare this with the results of a [Diffuse the Bomb](https://hackmd.io/@arthur-chang/S1AM0D8Ht) implementation, which also follows a binary divide-and-conquer approach: that implementation struggles to reach even 90% at small array sizes (<=30), suggesting that our Eytzinger approach is indeed more cache friendly. Below are the results for different caching methods (in order) on the 127-element array: direct-mapped, fully associative, and 2-way set associative.



The associative mappings perform similarly, while direct mapping suffers with larger arrays, but only during the Eytzinger transform (the noisy zigzags); binary-search performance is roughly the same. Furthermore, the transformation takes up the bulk of the cycles with its O(n) time complexity. This suggests that unless the data is already stored in Eytzinger form (or only needs to be transformed once for many binary searches), Eytzinger + binary search may not be practical for large data.

(above) The 16 searches only occur after the red line; everything before it is the Eytzinger transform of 127 elements.
#### Cache Visualisation
An example cache miss:

An example cache hit:

In the corresponding access address we see, in order: the tag (bits 31~8), index (bits 7~4), block (bits 3~2) and byte offset (bits 1~0).