# Assignment4 Cache

###### tags: `Computer Architecture`

The questions below are referenced from [CS 61C lab7](https://cs61c.org/su20/labs/lab07/). The answers were written by Harry Lu(呂紹樺).

## Exercise 1 - A Couple of Memory Access Scenarios

To observe the cache behavior, let's take a look at the pseudocode of the specified program first.

```cpp
int array[];    // Assume sizeof(int) == 4

for (k = 0; k < repcount; k++) {    // repeat repcount times
    /* Step through the selected array segment with the given step size. */
    for (index = 0; index < arraysize; index += stepsize) {
        if (option == 0)
            array[index] = 0;                   // Option 0: one cache access - write
        else
            array[index] = array[index] + 1;    // Option 1: two cache accesses - read AND write
    }
}
```

The program iterates through a 1-dimensional array several times. The parameter `option` decides whether the CPU accesses memory once or twice per inner-loop iteration.

### Scenario 1

:::info
**Program Parameters:**
Array Size (a0): 128 (bytes)
Step Size (a1): 8 (words)
Rep Count (a2): 4
Option (a3): 0

**Cache Parameters:**
Cache Levels: 1
Block Size: 8
Number of Blocks: 4
Enable?: Should be green
Placement Policy: Direct Mapped
Associativity: 1 (Venus won’t let you change this, why?)
Block Replacement Policy: LRU
:::

> **Note 1: The unit of the block size**
> Since the lab does not state the unit of the block size, I assume it is a byte (not a word).
>
> **Note 2: Settings on Ripes**
> In Ripes, the size of a block is always 4 bytes (by default). So if we want an 8-byte block in Ripes, we need to set the number of blocks in each line to 2 and then treat these 2 blocks as 1 block.
> ![](https://i.imgur.com/5CKmz90.png)
> As the figure above shows, the TIO (Tag, Index, Offset) breakdown in the `Cache indexing breakdown` section has 3 offset bits (the green and black rectangles). That is, every line contains 8 bytes of data.

#### Tasks

* **What combination of parameters is producing the hit rate you observe?**
**Ans:**
The hit rate observed in this case is 0.
![](https://i.imgur.com/e6GzanV.png)
Array iteration: 4 memory accesses (128 bytes / 32 bytes per step = 4)
Repeat iteration: 4 times
Total memory accesses: 16
Every access goes to an address that maps to the same cache block.
![](https://i.imgur.com/SYRkTw5.jpg)
As the figure above shows, the index and offset parts of every accessed address are identical, so one data block in the cache is replaced over and over while the other three are never used. Therefore, there are 16 cache misses and 0 hits.

* **What is our hit rate if we increase Rep Count arbitrarily? Why?**
**Ans:**
No matter what we change the Rep Count to, the hit rate will not increase. Because the step size is 8 words (32 bytes), each step changes only the `tag` part of the memory address, so the CPU always tries to access the same block of the cache.

* **How could we modify one program parameter to increase our hit rate?**
**Ans:**
We can decrease the step size, for example to 1 word. The accesses then walk through all 4 cache blocks, and the second access to each 8-byte block is a hit, so the hit rate rises to 0.5.
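To double-check this reasoning outside the simulator, here is a minimal sketch of Scenario 1 as a direct-mapped cache model in C. The parameters mirror the scenario, but the tag/index arithmetic is my own model, not Venus's actual implementation. With `stepsize = 8` it reports 16 misses and 0 hits; changing `stepsize` to 1 reproduces the 0.5 hit rate claimed above.

```cpp=
#include <stdio.h>

#define BLOCK_SIZE 8      // bytes per block
#define NUM_BLOCKS 4      // direct mapped: one line per index
#define ARRAY_SIZE 128    // bytes
#define REP_COUNT  4

int main(void) {
    int tags[NUM_BLOCKS]  = {0};
    int valid[NUM_BLOCKS] = {0};
    int hits = 0, misses = 0;
    int stepsize = 8;     // words; set to 1 to reproduce the 0.5 hit rate

    for (int rep = 0; rep < REP_COUNT; rep++) {
        for (int index = 0; index < ARRAY_SIZE / 4; index += stepsize) {
            int addr  = index * 4;                         // byte address
            int block = (addr / BLOCK_SIZE) % NUM_BLOCKS;  // index bits
            int tag   = addr / (BLOCK_SIZE * NUM_BLOCKS);  // tag bits
            if (valid[block] && tags[block] == tag) {
                hits++;
            } else {                     // miss: (re)fill the line
                misses++;
                valid[block] = 1;
                tags[block]  = tag;
            }
        }
    }
    printf("hits = %d, misses = %d, hit rate = %.2f\n",
           hits, misses, (double)hits / (hits + misses));
    return 0;
}
```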
### Scenario 2

:::info
**Program Parameters:**
Array Size (a0): 256 (bytes)
Step Size (a1): 2
Rep Count (a2): 1
Option (a3): 1

**Cache Parameters:**
Cache Levels: 1
Block Size: 16
Number of Blocks: 16
Enable?: Should be green
Placement Policy: N-Way Set Associative
Associativity: 4
Block Replacement Policy: LRU
:::

> **Note 3: Cache configuration**
> ![](https://i.imgur.com/j63BXWG.jpg)

#### Tasks

* **How many memory accesses are there per iteration of the inner loop? (not the one involving repcount).**
**Ans:**
Every iteration executes the line below:
```cpp=
array[index] = array[index] + 1;
```
Executing it involves 1 memory read and 1 memory write, so there are 2 memory accesses per iteration of the inner loop.

* **What is the repeating hit/miss pattern? WHY? (Hint: it repeats every 4 accesses).**
**Ans:**
Every group of 4 consecutive accesses (the read and write at `index`, then the read and write at `index + 2` words) falls in the same 16-byte block, so only the first access misses: 1 miss followed by 3 hits, as the figure below shows.
![](https://i.imgur.com/OZALsHg.jpg)
After those 4 memory accesses, the CPU moves on to the next block and the hit/miss pattern repeats.

* **This should follow very straightforwardly from the above question: Explain the hit rate in terms of the hit/miss pattern.**
**Ans:**
For every 4 memory accesses there are 1 miss and 3 hits, so the hit rate is 3/4 = 0.75.

* **Keeping everything else the same, what happens to our hit rate as Rep Count goes to infinity? Why? Try it out by changing the appropriate program parameter and letting the code run!**
**Ans:**
The hit rate approaches 1 as Rep Count goes to infinity. After the 16 compulsory misses of the first pass, the whole 256-byte array resides in the cache (16 blocks x 16 bytes), so every later memory access is a hit. As Rep Count grows, the number of hits grows without bound while the number of misses stays at 16.

### Scenario 3

:::info
**Program Parameters:**
Array Size (a0): 128 (bytes)
Step Size (a1): 1
Rep Count (a2): 1
Option (a3): 0

**Cache Parameters:**
Cache Levels: 2

**L1 cache**
Block Size: 8
Number of Blocks: 8
Enable?: Should be green
Placement Policy: Direct Mapped
Associativity: 1
Block Replacement Policy: LRU

**L2 cache**
Block Size: 8
Number of Blocks: 16
Enable?: Should be green
Placement Policy: Direct Mapped
Associativity: 1
Block Replacement Policy: LRU
:::

#### Tasks

Since two-level caches are not supported by Ripes, these tasks are done on Venus. The results are shown below.

* L1 cache result:
![](https://i.imgur.com/ysG4o4Z.png)
* L2 cache result:
![](https://i.imgur.com/BWjOksh.png)

* **What is the hit rate of our L1 cache? Our L2 cache? Overall?**
**Ans:**
L1 cache hit rate: 50%
L2 cache hit rate: 0%
Overall hit rate: 50%

* **How many accesses do we have to the L1 cache total? How many of them are misses?**
**Ans:**
There are 32 accesses to the L1 cache; 16 of them are misses.

* **How many accesses do we have to the L2 cache total? How does this relate to the L1 cache (think about what the L1 cache has to do in order to make us access the L2 cache)?**
**Ans:**
There are 16 accesses to the L2 cache, and all of them are misses. The CPU only accesses the L2 cache when an L1 miss occurs, so these 16 L2 accesses are exactly the 16 misses from L1.
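The Scenario 1 sketch extends naturally to two levels. Below is a minimal model of Scenario 3 (direct-mapped L1 and L2 with 8-byte blocks; again my own arithmetic, not Venus's code). It reproduces the numbers above: L1 sees 32 accesses and misses 16, and each of those misses becomes an L2 access, every one of which is a compulsory miss.

```cpp=
#include <stdio.h>

typedef struct { int valid; int tag; } Line;

// Direct-mapped lookup with 8-byte blocks; fills the line on a miss.
static int lookup(Line *cache, int nblocks, int addr) {
    int block = (addr / 8) % nblocks;
    int tag   = addr / (8 * nblocks);
    if (cache[block].valid && cache[block].tag == tag)
        return 1;                       // hit
    cache[block].valid = 1;             // miss: fill the line
    cache[block].tag   = tag;
    return 0;
}

int main(void) {
    Line l1[8] = {{0, 0}}, l2[16] = {{0, 0}};
    int l1_hits = 0, l1_misses = 0, l2_hits = 0, l2_misses = 0;

    for (int index = 0; index < 32; index++) {  // 128-byte array, step 1 word
        int addr = index * 4;
        if (lookup(l1, 8, addr)) { l1_hits++; continue; }
        l1_misses++;                            // every L1 miss goes to L2
        if (lookup(l2, 16, addr)) l2_hits++; else l2_misses++;
    }
    printf("L1: %d hits, %d misses\n", l1_hits, l1_misses);  // 16, 16
    printf("L2: %d hits, %d misses\n", l2_hits, l2_misses);  //  0, 16
    return 0;
}
```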
* **What program parameter would allow us to increase our L2 hit rate, but keep our L1 hit rate the same? Why?**
**Ans:**
Let's look into why the misses happen. In the first 16 accesses, a compulsory miss followed by a hit repeats 8 times.
As the picture below shows, there are 8 compulsory misses in total.
![](https://i.imgur.com/Q99k6lS.png)
In the last 16 accesses, a conflict miss followed by a hit repeats 8 times.
As the picture below shows, there are 8 conflict misses in total.
![](https://i.imgur.com/cSYtzdy.png)
Since the L2 misses are all compulsory misses, we can increase the Rep Count. The whole 128-byte array fits in L2 (16 blocks x 8 bytes), so from the second repetition on, every L1 miss hits in L2 and the L2 hit rate climbs toward 100%. Meanwhile the L1 hit rate stays at 50%, because its miss/hit pattern is the same on every pass.

* **What happens to our hit rates for L1 and L2 as we slowly increase the number of blocks in L1? What about L1 block size?**
**Ans:**
As we increase the number of blocks in L1, the conflict misses decrease but the compulsory misses do not, so the miss rate falls to a lower level but not to 0.
As we increase the L1 block size, the compulsory misses decrease because each miss brings in more of the array. However, with the cache capacity fixed, larger blocks mean fewer blocks, so conflict misses may increase. The miss rate therefore falls at first and rises again if we keep increasing the block size.

## Exercise 2 - Loop Ordering and Matrix Multiplication

To compare the performance of the different loop orders in matrix multiplication, let's take a look at these 6 variants of the implementation first.

***

```cpp=
void multMat1( int n, float *A, float *B, float *C ) {
    int i,j,k;
    /* This is ijk loop order. */
    for( i = 0; i < n; i++ )
        for( j = 0; j < n; j++ )
            for( k = 0; k < n; k++ )
                C[i+j*n] += A[i+k*n]*B[k+j*n];
}
```

Take n = 2 for example: these functions multiply two 2x2 matrices and produce a 2x2 matrix. The elements of a 2x2 matrix are arranged in the array in column-major order, like this:
![](https://i.imgur.com/Br2r2aj.png)
When function `multMat1()` is called, the order in which the elements of arrays A, B, and C are accessed is shown below:
![](https://i.imgur.com/MVfgIhA.jpg)

***

```cpp=
void multMat2( int n, float *A, float *B, float *C ) {
    int i,j,k;
    /* This is ikj loop order. */
    for( i = 0; i < n; i++ )
        for( k = 0; k < n; k++ )
            for( j = 0; j < n; j++ )
                C[i+j*n] += A[i+k*n]*B[k+j*n];
}
```

When function `multMat2()` is called, the order in which the elements of arrays A, B, and C are accessed is shown below:
![](https://i.imgur.com/TLXNqDF.jpg)

***

```cpp=
void multMat3( int n, float *A, float *B, float *C ) {
    int i,j,k;
    /* This is jik loop order. */
    for( j = 0; j < n; j++ )
        for( i = 0; i < n; i++ )
            for( k = 0; k < n; k++ )
                C[i+j*n] += A[i+k*n]*B[k+j*n];
}
```

When function `multMat3()` is called, the order in which the elements of arrays A, B, and C are accessed is shown below:
![](https://i.imgur.com/KOFhI4n.jpg)

***

```cpp=
void multMat4( int n, float *A, float *B, float *C ) {
    int i,j,k;
    /* This is jki loop order. */
    for( j = 0; j < n; j++ )
        for( k = 0; k < n; k++ )
            for( i = 0; i < n; i++ )
                C[i+j*n] += A[i+k*n]*B[k+j*n];
}
```

When function `multMat4()` is called, the order in which the elements of arrays A, B, and C are accessed is shown below:
![](https://i.imgur.com/c1d1kiI.jpg)

***

```cpp=
void multMat5( int n, float *A, float *B, float *C ) {
    int i,j,k;
    /* This is kij loop order. */
    for( k = 0; k < n; k++ )
        for( i = 0; i < n; i++ )
            for( j = 0; j < n; j++ )
                C[i+j*n] += A[i+k*n]*B[k+j*n];
}
```

When function `multMat5()` is called, the order in which the elements of arrays A, B, and C are accessed is shown below:
![](https://i.imgur.com/OB8Foms.jpg)

***

```cpp=
void multMat6( int n, float *A, float *B, float *C ) {
    int i,j,k;
    /* This is kji loop order. */
    for( k = 0; k < n; k++ )
        for( j = 0; j < n; j++ )
            for( i = 0; i < n; i++ )
                C[i+j*n] += A[i+k*n]*B[k+j*n];
}
```
When function `multMat6()` is called, the order in which the elements of arrays A, B, and C are accessed is shown below:
![](https://i.imgur.com/w0qIt9w.jpg)

***

#### Adapting Compiler Explorer generated RISC-V assembly code to Ripes

Take the `multMat1()` function for example; the Compiler Explorer generated RISC-V assembly code looks like this (click the button to check it out):

:::spoiler
```clike=
multMat1(int, float*, float*, float*): # @multMat1(int, float*, float*, float*)
        addi sp, sp, -48
        sw ra, 44(sp) # 4-byte Folded Spill
        sw s0, 40(sp) # 4-byte Folded Spill
        addi s0, sp, 48
        sw a0, -12(s0)
        sw a1, -16(s0)
        sw a2, -20(s0)
        sw a3, -24(s0)
        mv a0, zero
        sw a0, -28(s0)
        j .LBB0_1
.LBB0_1: # =>This Loop Header: Depth=1
        lw a0, -28(s0)
        lw a1, -12(s0)
        bge a0, a1, .LBB0_12
        j .LBB0_2
.LBB0_2: # in Loop: Header=BB0_1 Depth=1
        mv a0, zero
        sw a0, -32(s0)
        j .LBB0_3
.LBB0_3: # Parent Loop BB0_1 Depth=1
        lw a0, -32(s0)
        lw a1, -12(s0)
        bge a0, a1, .LBB0_10
        j .LBB0_4
.LBB0_4: # in Loop: Header=BB0_3 Depth=2
        mv a0, zero
        sw a0, -36(s0)
        j .LBB0_5
.LBB0_5: # Parent Loop BB0_1 Depth=1
        lw a0, -36(s0)
        lw a1, -12(s0)
        bge a0, a1, .LBB0_8
        j .LBB0_6
.LBB0_6: # in Loop: Header=BB0_5 Depth=3
        lw a0, -16(s0)
        lw a1, -28(s0)
        lw a3, -36(s0)
        lw a4, -12(s0)
        mul a2, a3, a4
        add a2, a2, a1
        slli a2, a2, 2
        add a0, a0, a2
        flw ft0, 0(a0)
        lw a0, -20(s0)
        lw a2, -32(s0)
        mul a2, a2, a4
        add a3, a3, a2
        slli a3, a3, 2
        add a0, a0, a3
        flw ft1, 0(a0)
        fmul.s ft1, ft0, ft1
        lw a0, -24(s0)
        add a1, a1, a2
        slli a1, a1, 2
        add a0, a0, a1
        flw ft0, 0(a0)
        fadd.s ft0, ft0, ft1
        fsw ft0, 0(a0)
        j .LBB0_7
.LBB0_7: # in Loop: Header=BB0_5 Depth=3
        lw a0, -36(s0)
        addi a0, a0, 1
        sw a0, -36(s0)
        j .LBB0_5
.LBB0_8: # in Loop: Header=BB0_3 Depth=2
        j .LBB0_9
.LBB0_9: # in Loop: Header=BB0_3 Depth=2
        lw a0, -32(s0)
        addi a0, a0, 1
        sw a0, -32(s0)
        j .LBB0_3
.LBB0_10: # in Loop: Header=BB0_1 Depth=1
        j .LBB0_11
.LBB0_11: # in Loop: Header=BB0_1 Depth=1
        lw a0, -28(s0)
        addi a0, a0, 1
        sw a0, -28(s0)
        j .LBB0_1
.LBB0_12:
        lw s0, 40(sp) # 4-byte Folded Reload
        lw ra, 44(sp) # 4-byte Folded Reload
        addi sp, sp, 48
        ret
```
:::

However, this assembly program can't run directly in Ripes. To adapt it to Ripes, I made the following modifications:

1. Define the arguments in the data section:
```clike=
.data
n:    .word 2               # Number of rows (and columns) of the square matrix
arrA: .word 2, 3, 7, 4
arrB: .word 6, 3, 4, 10
arrC: .word 0, 0, 0, 0

.text
...
```

2. Create a small main function as the entry code:
```clike=
main:
    lw a0, n
    la a1, arrA         # Load the addresses of the arrays
    la a2, arrB
    la a3, arrC
    jal ra, multMat1    # Call multMat1()
    li a7, 10           # Halt the simulator
    ecall
```

3. Replace the floating-point instructions with integer instructions, like this:
```clike=
.LBB0_6: # in Loop: Header=BB0_5 Depth=3
        lw a0, -16(s0)
        lw a1, -28(s0)
        lw a3, -36(s0)
        lw a4, -12(s0)
        mul a2, a3, a4
        add a2, a2, a1
        slli a2, a2, 2
        add a0, a0, a2
        lw t0, 0(a0)        # <- Changed flw ft0, 0(a0) to this
        lw a0, -20(s0)
        lw a2, -32(s0)
        mul a2, a2, a4
        add a3, a3, a2
        slli a3, a3, 2
        add a0, a0, a3
        lw t1, 0(a0)        # <- Changed
        mul t1, t0, t1      # <- Changed fmul.s ft1, ft0, ft1 to this
        lw a0, -24(s0)
        add a1, a1, a2
        slli a1, a1, 2
        add a0, a0, a1
        lw t0, 0(a0)        # <- Changed
        add t0, t0, t1      # <- Changed
        sw t0, 0(a0)        # <- Changed
        j .LBB0_7
```
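Since the Ripes version now works on integers, we can sanity-check what `arrC` should contain after the run with a small host-side C program (a quick sketch; `multMat1_int` is just a helper name introduced here). The same four values should appear in memory at `arrC` after the Ripes program halts.

```cpp=
#include <stdio.h>

// Integer analogue of multMat1(), matching the modified assembly above.
void multMat1_int( int n, int *A, int *B, int *C ) {
    for( int i = 0; i < n; i++ )
        for( int j = 0; j < n; j++ )
            for( int k = 0; k < n; k++ )
                C[i+j*n] += A[i+k*n]*B[k+j*n];
}

int main(void) {
    int arrA[] = {2, 3, 7, 4};
    int arrB[] = {6, 3, 4, 10};
    int arrC[] = {0, 0, 0, 0};
    multMat1_int(2, arrA, arrB, arrC);
    for (int x = 0; x < 4; x++)
        printf("%d ", arrC[x]);     // prints: 33 30 78 52
    printf("\n");
    return 0;
}
```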
With these modifications, the following code is runnable in Ripes:

:::spoiler
```clike=
.data
n:    .word 2               # Number of rows (and columns) of the square matrix
arrA: .word 2, 3, 7, 4      # Address at 0x1000
arrB: .word 6, 3, 4, 10
arrC: .word 0, 0, 0, 0

.text
main:
    lw a0, n
    la a1, arrA
    la a2, arrB
    la a3, arrC
    jal ra, multMat1
    li a7, 10
    ecall

multMat1: # @multMat1(int, float*, float*, float*)
        addi sp, sp, -48
        sw ra, 44(sp) # 4-byte Folded Spill
        sw s0, 40(sp) # 4-byte Folded Spill
        addi s0, sp, 48
        sw a0, -12(s0)
        sw a1, -16(s0)
        sw a2, -20(s0)
        sw a3, -24(s0)
        mv a0, zero
        sw a0, -28(s0)
        j .LBB0_1
.LBB0_1: # =>This Loop Header: Depth=1
        lw a0, -28(s0)
        lw a1, -12(s0)
        bge a0, a1, .LBB0_12
        j .LBB0_2
.LBB0_2: # in Loop: Header=BB0_1 Depth=1
        mv a0, zero
        sw a0, -32(s0)
        j .LBB0_3
.LBB0_3: # Parent Loop BB0_1 Depth=1
        lw a0, -32(s0)
        lw a1, -12(s0)
        bge a0, a1, .LBB0_10
        j .LBB0_4
.LBB0_4: # in Loop: Header=BB0_3 Depth=2
        mv a0, zero
        sw a0, -36(s0)
        j .LBB0_5
.LBB0_5: # Parent Loop BB0_1 Depth=1
        lw a0, -36(s0)
        lw a1, -12(s0)
        bge a0, a1, .LBB0_8
        j .LBB0_6
.LBB0_6: # in Loop: Header=BB0_5 Depth=3
        lw a0, -16(s0)
        lw a1, -28(s0)
        lw a3, -36(s0)
        lw a4, -12(s0)
        mul a2, a3, a4
        add a2, a2, a1
        slli a2, a2, 2
        add a0, a0, a2
        lw t0, 0(a0)
        lw a0, -20(s0)
        lw a2, -32(s0)
        mul a2, a2, a4
        add a3, a3, a2
        slli a3, a3, 2
        add a0, a0, a3
        lw t1, 0(a0)
        mul t1, t0, t1
        lw a0, -24(s0)
        add a1, a1, a2
        slli a1, a1, 2
        add a0, a0, a1
        lw t0, 0(a0)
        add t0, t0, t1
        sw t0, 0(a0)
        j .LBB0_7
.LBB0_7: # in Loop: Header=BB0_5 Depth=3
        lw a0, -36(s0)
        addi a0, a0, 1
        sw a0, -36(s0)
        j .LBB0_5
.LBB0_8: # in Loop: Header=BB0_3 Depth=2
        j .LBB0_9
.LBB0_9: # in Loop: Header=BB0_3 Depth=2
        lw a0, -32(s0)
        addi a0, a0, 1
        sw a0, -32(s0)
        j .LBB0_3
.LBB0_10: # in Loop: Header=BB0_1 Depth=1
        j .LBB0_11
.LBB0_11: # in Loop: Header=BB0_1 Depth=1
        lw a0, -28(s0)
        addi a0, a0, 1
        sw a0, -28(s0)
        j .LBB0_1
.LBB0_12:
        lw s0, 40(sp) # 4-byte Folded Reload
        lw ra, 44(sp) # 4-byte Folded Reload
        addi sp, sp, 48
        ret
```
:::

#### Task

* **Results**
**Ans:**
The performance of the matrix multiplication with different iteration orders is as follows:
```
ijk: n = 1000, 2.102 Gflop/s
ikj: n = 1000, 0.177 Gflop/s
jik: n = 1000, 2.138 Gflop/s
jki: n = 1000, 13.205 Gflop/s
kij: n = 1000, 0.163 Gflop/s
kji: n = 1000, 7.931 Gflop/s
```
We can see that the `jki` iteration order is much more efficient than the others. To check this result, I experimented with these functions on Ripes.

:::info
* **Function arguments:**
A, B, C: 2x2 matrices
* **Cache configuration:**
Block size: 4 bytes
Number of Blocks: 4
Placement Policy: Direct Mapped
:::

| Function name | Hit rate |
| ------------- | -------- |
| multMat1() | 0.431 |
| multMat2() | 0.4195 |
| multMat3() | 0.3966 |
| multMat4() | 0.3506 |
| multMat5() | 0.3851 |
| multMat6() | 0.3391 |

It seems the performance of a 1000x1000 matrix multiplication cannot be explained by the hit rate of a small 2x2 multiplication alone. However, these hit rates, together with the stride of the innermost loop (see the sketch below), give us some clues.
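As a quick stride check, the little C program below prints the byte distance between two consecutive innermost-loop accesses for each choice of innermost index, given `C[i+j*n] += A[i+k*n]*B[k+j*n]`, column-major layout, and 4-byte floats. (A sketch under those assumptions, not measured data; `offA`/`offB`/`offC` are helper names introduced here.)

```cpp=
#include <stdio.h>

// Byte offset of each operand, assuming column-major layout, 4-byte floats.
static long offA(int i, int k, int n) { return 4L * (i + (long)k * n); }
static long offB(int k, int j, int n) { return 4L * (k + (long)j * n); }
static long offC(int i, int j, int n) { return 4L * (i + (long)j * n); }

int main(void) {
    int n = 1000, i = 0, j = 0, k = 0;
    // jki / kji: innermost i -> A and C move by one element, B is reused
    printf("i+1: dA = %ld, dC = %ld bytes\n",
           offA(i+1, k, n) - offA(i, k, n), offC(i+1, j, n) - offC(i, j, n));
    // ijk / jik: innermost k -> A jumps a whole column, B moves by one element
    printf("k+1: dA = %ld, dB = %ld bytes\n",
           offA(i, k+1, n) - offA(i, k, n), offB(k+1, j, n) - offB(k, j, n));
    // ikj / kij: innermost j -> both B and C jump a whole column
    printf("j+1: dB = %ld, dC = %ld bytes\n",
           offB(k, j+1, n) - offB(k, j, n), offC(i, j+1, n) - offC(i, j, n));
    return 0;
}
```

For n = 1000 this prints strides of 4 bytes when `i` is innermost and 4000 bytes when `k` or `j` is innermost; a 4-byte stride stays within a cache block, while a 4000-byte stride touches a new block on every iteration.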
* **Which ordering(s) perform best for these 1000-by-1000 matrices? Why?**
**Ans:**
The `jki` ordering (function `multMat4()`) is the best. Although its hit rate in the small-matrix experiment is lower, it has more spatial locality on a very large matrix. The memory access pattern looks like:
![](https://i.imgur.com/w5Szt8D.jpg)
All three arrays are walked in the same direction, and the innermost loop steps through A and C one element at a time. This exploits much more spatial locality when the matrix is large.

* **Which ordering(s) perform the worst? Why?**
**Ans:**
The `ikj` (function `multMat2()`) and `kij` (function `multMat5()`) orderings are the worst. This is the memory access pattern of function `multMat2()`:
![](https://i.imgur.com/w6d9jHP.jpg)
And the memory access pattern of function `multMat5()`:
![](https://i.imgur.com/OdDRnmv.jpg)
These orderings jump a long distance in memory on every innermost iteration (a whole column of both B and C). Because of this scattered access pattern, cache blocks are easily evicted before they are reused. In a large matrix multiplication this leads to a low hit rate, and the performance suffers accordingly.

* **How does the way we stride through the matrices with respect to the innermost loop affect performance?**
**Ans:**
![](https://i.imgur.com/ksgjRDa.jpg)
The innermost loop directly determines how far away the next memory access is, and it runs far more often than the middle and outer loops, so it dominates the access pattern. That is why the stride of the innermost loop plays the critical role in performance.

## Exercise 3 - Cache Blocking and Matrix Transposition

The following C code transposes one (blocksize x blocksize) submatrix at a time; repeating this over all submatrices transposes the entire matrix.

```clike=
// Function transpose() transposes a (blocksize x blocksize) block in an (n x n) matrix.
// The start indices of the block must be specified.
void transpose(int IndexX, int IndexY, int blocksize, int n, int* dst, int* src) {
    for (int y = IndexY; y < (IndexY + blocksize); y++) {
        for (int x = IndexX; x < (IndexX + blocksize); x++) {
            if (y < n && x < n) {   // Guard against out-of-bounds indices
                dst[y + x * n] = src[x + y * n];
            }
        }
    }
}

// Function transpose_blocking() iterates over the matrix block by block
// and transposes each block.
void transpose_blocking(int n, int blocksize, int* dst, int* src) {
    for (int blockX = 0; blockX < n; blockX += blocksize) {
        for (int blockY = 0; blockY < n; blockY += blocksize) {
            transpose(blockX, blockY, blocksize, n, dst, src);
        }
    }
}
```

### Part 1 - Changing Array Sizes

#### Task

**Fix the blocksize to be 20, and run your code with n equal to 100, 1000, 2000, 5000, and 10000.**

* **Results**

| Execution time (ms) | n = 100 | n = 1000 | n = 2000 | n = 5000 | n = 10000 |
| ------------------- | ------- | -------- | -------- | -------- | --------- |
| Naive transpose | 0.004 | 0.838 | 23.996 | 157.556 | 1013.84 |
| Transpose with blocking | 0.009 | 0.142 | 7.23 | 50.97 | 212.791 |

![](https://i.imgur.com/8q9RRBm.png)

`Transpose with blocking` becomes increasingly faster than `Naive transpose` as n grows.

* **At what point does cache blocked version of transpose become faster than the non-cache blocked version?**
**Ans:**
At n = 1000, `Transpose with blocking` starts to be faster than `Naive transpose`.
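For reference, the `Naive transpose` baseline measured above presumably looks like the lab's `transpose_naive` — a sketch (assuming the same layout conventions as `transpose()` above) that is useful for the discussion that follows:

```clike=
// Sketch of the naive baseline (after the lab's transpose_naive):
// every (x, y) pair is visited once with no blocking, so one of
// src/dst is always walked with a stride of n elements.
void transpose_naive(int n, int* dst, int* src) {
    for (int x = 0; x < n; x++) {
        for (int y = 0; y < n; y++) {
            dst[y + x * n] = src[x + y * n];
        }
    }
}
```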
* **Why does cache blocking require the matrix to be a certain size before it outperforms the non-cache blocked code?**
**Ans:**
When the matrix size n is small, `Naive transpose` takes advantage of locality just as well as `Transpose with blocking` does. In this case the cache can hold the majority of the matrix elements, so iterating column by column has a hit rate at least as high as iterating block by block. However, as the matrix gets larger, each iteration accesses two elements mirrored across the diagonal, and the distance between those two accesses becomes very large near the corners of the matrix. Iterating column by column then forces the cache to replace blocks frequently before they are reused, and the hit rate drops. Therefore `Transpose with blocking` outperforms `Naive transpose` once the matrix exceeds a certain size.

### Part 2 - Changing Blocksize

#### Task

**Fix n to be 10000, and run your code with blocksize equal to 50, 100, 500, 1000, 5000.**

* **Results**

| Execution time (ms) | blocksize = 50 | blocksize = 100 | blocksize = 500 | blocksize = 1000 | blocksize = 5000 |
| ------------------- | -------------- | --------------- | --------------- | ---------------- | ---------------- |
| Naive transpose | 1027.04 | 898.784 | 904.159 | 967.571 | 1033.54 |
| Transpose with blocking | 250.063 | 219.284 | 136.375 | 221.261 | 783.823 |

![](https://i.imgur.com/w0yT39b.png)

**Ans:**
Since the matrix size is fixed, the execution time of `Naive transpose` should theoretically be the same for every blocksize; the variations are measurement noise. Overall, `Naive transpose` is more time-consuming than `Transpose with blocking`.

* **How does performance change as blocksize increases? Why is this the case?**
**Ans:**
Past a certain point, the execution time of `Transpose with blocking` increases as the blocksize grows. When a block becomes so big that its working set no longer fits in the cache, two accesses within the same block can be very far apart (especially near the corners of the block), cache blocks are evicted before they are reused, and the hit rate drops. As a result, the performance degrades back toward that of the naive version.
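A back-of-the-envelope working-set calculation makes this concrete: one call to `transpose()` touches a blocksize x blocksize tile of `src` and one of `dst`, roughly 2 x blocksize² x 4 bytes with 4-byte ints, and blocking only pays off while that footprint fits in some cache level. The sketch below prints the footprint for the blocksizes measured above (the cache sizes mentioned in the comments are hypothetical, typical values, not measurements of my machine).

```cpp=
#include <stdio.h>

// Tile footprint of one block transpose: a src tile plus a dst tile of
// 4-byte ints. Compare against typical cache sizes, e.g. L1 ~32 KiB,
// L2 a few hundred KiB, L3 a few MiB (hypothetical values).
int main(void) {
    long blocksizes[] = {50, 100, 500, 1000, 5000};
    for (int i = 0; i < 5; i++) {
        long b = blocksizes[i];
        long bytes = 2 * b * b * 4;     // src tile + dst tile
        printf("blocksize %4ld -> ~%ld KiB\n", b, bytes / 1024);
    }
    return 0;
}
```

Under these assumptions a blocksize of 50 (~19 KiB) fits in a typical L1, 100 and 500 still fit in the outer cache levels, and 5000 (~190 MiB) fits nowhere, which lines up with the measured trend: moderate blocksizes perform best, and blocksize 5000 is by far the slowest.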