# Final Project: Annotate and explain Quiz5
###### tags: `riscv`
[quiz5](https://hackmd.io/@sysprog/arch2020-quiz5)
## Question A
1. We would like to design a cache with an AMAT (average memory access time) of 1.5 cycles. Accessing our cache takes 1 cycle, and on a miss, it takes an additional 10 cycles to retrieve the data from main memory and update the cache. What does our hit ratio need to be in order to achieve the target AMAT?
Ans:
Hit ratio = __ 0.95 __
Annotate:
AMAT = HitTime + MissPenalty * MissRatio
1.5 cycles = 1 cycle + (10 cycles) * (1 – HitRatio)
0.05 = 1 - HitRatio
HitRatio = 0.95
2. The timing for a particular cache is as follows: checking the cache takes 1 cycle. If there is a hit the data is returned to the CPU at the end of the first cycle. If there is a miss, it takes 10 additional cycles to retrieve the word from main memory, store it in the cache, and return it to the CPU. If we want an average memory access time of 1.4 cycles, what is the minimum possible value for the cache’s hit ratio?
Ans:
Minimum possible value of hit ratio: __ 0.96 __
Annotate:
AMAT = HitTime + MissRatio * MissPenalty
1.4 = 1 + (1 - HitRatio) * 10
HitRatio = 0.96
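Both parts follow the same formula, so they can be sanity-checked with a tiny Python sketch (the function name here is just for illustration):

```python
# Check both parts of Question A with the AMAT formula:
# AMAT = HitTime + (1 - HitRatio) * MissPenalty, solved for HitRatio.
def required_hit_ratio(amat, hit_time=1.0, miss_penalty=10.0):
    """Return the hit ratio needed to reach a target AMAT."""
    return 1 - (amat - hit_time) / miss_penalty

print(round(required_hit_ratio(1.5), 2))  # Question A1: 0.95
print(round(required_hit_ratio(1.4), 2))  # Question A2: 0.96
```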
---
## Question B
Consider that P1 is a five-stage pipelined RISC-V processor (IF, DEC, EXE, MEM, WB), where:
- All branches are predicted not-taken.
- Branch decisions are made in the EXE stage.
- The pipeline has full bypassing.
Assume that the loop shown below has been running for a while and will be re-executed after this iteration.
1. Given the program below. How many cycles does it take to execute **one iteration** of the loop on this processor?
loop: add x3, x2, x1
slli x2, x2, 1
xor x4, x3, x4
bnez x2, loop
and x11, x2, x1
or x12, x22, x7
sra x13, x23, x8
...
Ans: Cycles per iteration on this processor P1: __ 6 __
Annotate:
Data hazards occur between `add x3, x2, x1` and `xor x4, x3, x4`, and between `slli x2, x2, 1` and `bnez x2, loop`.
There is no load-use hazard, so we do not need to stall the pipeline; full bypassing resolves both hazards.

Forwarding


Flushing

2. Now consider that P2 is a variant of the previous five-stage pipelined RISC-V processor (IF, DEC, EXE, MEM, WB), where:
- All branches are predicted not-taken.
- Branch decisions are made in the EXE stage.
- The pipeline has **no bypass paths**.
Assume that our example loop (same as the previous program) has been running for a while and will be re-executed after this iteration. How many cycles does it take to execute one iteration of the loop on this processor?
Ans: Cycles per iteration on processor P2: __ 8 __
Annotate:
With no bypass paths, we must stall the pipeline until each dependent instruction can read its operand from the register file, i.e. until the producing instruction has reached its WB stage. Assuming the register file is written in the first half of a cycle and read in the second half, one stall cycle per hazard is enough.

> Oddly, the w/o-forwarding 5-stage RISC-V processor in Ripes does not insert stalls between the data-hazard instructions.
> Here we can see that no stall happens, so the instruction `add x3, x2, x1` only writes the correct data into the register file in its WB stage.
> As a result, the instruction `xor x4, x3, x4` reads a stale value from register `x3` for its calculation.
> 
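The two answers can be reproduced with simple cycle accounting (a sketch; it assumes one stall per hazard on P2, i.e. the register file is written in the first half of WB and read in the second half of DEC):

```python
loop_instructions = 4  # add, slli, xor, bnez
flushed = 2            # bnez resolves in EXE, so the 2 wrongly fetched
                       # instructions (and, or) are annulled when the branch is taken

p1_stalls = 0          # full bypassing removes both data hazards
p2_stalls = 2          # no bypassing: 1 stall for add->xor, 1 for slli->bnez

print(loop_instructions + p1_stalls + flushed)  # P1: 6 cycles per iteration
print(loop_instructions + p2_stalls + flushed)  # P2: 8 cycles per iteration
```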
---
## Question C
You are designing a four stage RISC-V processor (IF, DEC, EX, WB) that:
- Resolves branches in the EX stage.
- Uses a scoreboard to track and stall on data hazards.
- Initiates accesses to data memory in the EX stage, but **data memory responses are not available until the WB stage**.
This processor has a tight area constraint, and you only have enough area for one bypass path for read-after-write hazards. You are trying to decide whether to put the bypass path from EX to DEC or from WB to DEC. As part of this evaluation, you construct two processors:
- Processor P1: Bypassing from WB to DEC.
- Processor P2: Bypassing from EX to DEC.
These processors ensure that if the instruction in the DEC stage does not have all of its operands ready to read from the register file or read from a bypass path, the instruction in the decode stage is stalled.
You are using the following loop of a program to evaluate the performance of the processor:
loop: lw t0, 0(a0)
addi t1, t0, 1
xor t3, t1, a0
slli t2, t1, 1
blt t0, a0, loop
For the following questions, assume this loop has been running for a long time, and that the processor assumes that the branch will be taken.
How many cycles does this loop take to execute in each of the processors?
1. P1 cycles per iteration:
Ans: Processor P1 cycles per iteration: __ 7 __
Annotate:
Data hazards exist between `lw t0, 0(a0)` and `addi t1, t0, 1`, and between `addi t1, t0, 1` and `xor t3, t1, a0`.
With the WB-to-DEC bypass, we stall until the producing instruction reaches its WB stage and then forward its result to the DEC stage.

2. P2 cycles per iteration:
Ans: Processor P2 cycles per iteration: __ 8 __
Annotate:
With only a bypass from EX to DEC, the load result cannot be forwarded out of EX, because the data memory response is not available until the WB stage; `addi` must therefore wait for `lw` to write the register file.
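A sketch of the cycle accounting for both processors (assuming a register-file value written in WB is readable in DEC one cycle later, and no flush penalty because the taken-branch prediction is correct):

```python
loop_instructions = 5  # lw, addi, xor, slli, blt

# P1 (WB -> DEC bypass): lw->addi and addi->xor each wait 1 cycle
# for the producer to reach WB, then read the bypass.
p1_stalls = 1 + 1
# P2 (EX -> DEC bypass): the load result only exists in WB, so
# lw->addi waits 2 cycles for the register-file write; addi->xor is
# covered by the EX bypass; slli waits 1 cycle for addi's write-back.
p2_stalls = 2 + 0 + 1

print(loop_instructions + p1_stalls)  # P1: 7 cycles per iteration
print(loop_instructions + p2_stalls)  # P2: 8 cycles per iteration
```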

---
## Question D
Consider the execution of the following code sequence on a 5-stage pipelined RISC-V processor, which is fully bypassed. The code below iterates over the array shown below to find the index of the element whose value is 0x104. Thus, on the second iteration of the loop, it branches to label done. The instruction unimp signals the end of the program. This processor:
- Always speculates that the next instruction should come from address PC+4.
- Computes branch decisions and jump targets in the EXE stage.
- Annuls incorrectly fetched instructions resulting from branches and jump instructions.
- Receives the results of load instructions (from the data memory) in the WB stage.
The program:
```assembly
. = 0x0
/* x10 holds the value we are looking for
* x11 holds the address of A[0]
* x13 holds the address of the current data access
* Assume x10 = 0x104; x11 = 0x500; and x13 = 0x500 before executing the loop
*/
loop:
lw x14, 0(x13)
beq x14, x10, done
addi x13, x13, 4
j loop
done:
sub x13, x13, x11
srli x13, x13, 2
unimp
. = 0x500
/* Array data */
.word 0x0000100 /* A[0] */
.word 0x0000104 /* A[1] */
.word 0x0000108 /* A[2] */
.word 0x000010C /* A[3] */
```
1. Are any instructions stalled in the decode (DEC) stage during execution of this loop? If so, list the opcode(s) here. If not, write N/A.
Ans. Instructions stalled in DEC stage: __ beq __
Annotate:
This program's behavior is similar to the following C code, which finds the index of the element 0x104.
```c=
int i = 0, A[4] = {0x100, 0x104, 0x108, 0x10C};
while (A[i] != 0x104)
    i++;
return i;
```

1. There are data hazards between `lw x14, 0(x13)` and `beq x14, x10, done`, between `addi x13, x13, 4` and `sub x13, x13, x11`, and between `sub x13, x13, x11` and `srli x13, x13, 2`.
2. The load-use hazard between `lw x14, 0(x13)` and `beq x14, x10, done` forces us to stall `2` cycles, because the load data only arrives in the WB stage.
3. Because the processor always speculates that the next instruction comes from address PC+4, the instruction in the IF stage at cycle 7 is `sub`.
4. Therefore, only the `beq` instruction is stalled in the DEC stage.
2. How many NOPs, if any, are inserted into the pipeline to handle the stalls required for execution of one iteration of the loop?
Ans. Number of NOPs added to handle stalls: __ 2 __
Annotate:
We need to stall `2` cycles for the load-use hazard in one iteration of the loop; the `2` NOP instructions are inserted at cycle `5`.
3. How many cycles does one iteration of the loop (first iteration, i.e. lw -> beq -> addi -> j -> sub -> srli) take?
Ans. Number of cycles for one iteration of the loop: __ 8 __
Annotate:
The number of cycles between two consecutive `lw` instructions is 8.
4. How many NOPs, if any, are inserted into the pipeline to handle the final iteration of the loop through the end of the program?
Ans. Number of NOPs added to handle stalls: __ 2 __
Annotate:
| Cycle | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| ----- | --- | --- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| IF | lw | beq | addi | addi | addi | j | sub | srli | unimp|
| DEC | NOP | lw | beq | beq | beq | addi | NOP | sub | srli |
| EXE | NOP | NOP | lw | NOP | NOP | beq | NOP | NOP | sub |
| MEM | j | NOP | NOP | lw | NOP | NOP | beq | NOP | NOP |
| WB | addi| j | NOP | NOP | lw | NOP | NOP | beq | NOP |
1. The NOP instructions at cycle 9 and cycle 15 handle the branch/jump hazard by annulling the incorrectly fetched instructions.
2. As in problem 2, we insert 2 NOPs to stall for the load-use hazard; they are visible in the EXE stage at cycles 12 and 13.
5. Which additional instructions, if any, would need to be stalled if the processor did not have a bypass path from the EXE stage but still had one from the MEM and WB stages?
Ans. Instructions stalled due to removal of EXE bypass path: __ srli __ (and __ unimp __ )
Annotate:
| Cycle | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
| ----- | --- | --- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| IF | lw | beq | addi | addi | addi | j | sub | srli | unimp| unimp|
| DEC | NOP | lw | beq | beq | beq | addi | NOP | sub | srli | **srli** |
| EXE | NOP | NOP | lw | NOP | NOP | beq | NOP | NOP | sub | NOP |
| MEM | j | NOP | NOP | lw | NOP | NOP | beq | NOP | NOP | sub |
| WB | addi| j | NOP | NOP | lw | NOP | NOP | beq | NOP | NOP |
6. How many additional NOPs, if any, are required to handle the removal of the EXE bypass path?
Ans. Number of NOPs required to handle removal of EXE bypass path: __ 1 __
Annotate:
We need to stall `1` additional cycle so that the dependent data (`x13` from `sub`) can be forwarded from the MEM stage to the DEC stage.
---
## Question E
Consider a direct-mapped cache with 64 total data words with 1 word/cache line.
The program shown below repeatedly executes an inner loop that sums the 16 elements of an array that is
stored starting in location 0x310.
```
. = 0 /* tell assembler to start at address 0 */
outer_loop:
addi x4, x0, 16 // initialize loop index J
mv x1, x0 // x1 holds sum, initially 0
external_loop: // add up elements in array
subi x4, x4, 1 // decrement index
slli x2, x4, 2 // convert to byte offset
lw x3, 0x310(x2) // load value from A[J]
add x1, x1, x3 // add to sum
bne x4, x0, external_loop // loop until all words are summed
j outer_loop // perform test again!
```
The program is executed for many iterations, then a measurement of the cache statistics is made during one iteration through all the code, i.e., starting with the execution of the instruction labeled outer_loop: until just before the next time that instruction is executed.
1. In total, how many instruction fetches occur during one complete iteration of the outer loop? How many data reads?
Ans:
Number of instruction fetches: __ 83 __
Number of data reads: __ 16 __
Annotate:
Instruction fetches = 2 + 5 × 16 + 1 = 83.
The 2 instructions before the loop label are executed once, the 5 instructions in the loop are executed 16 times, and `j outer_loop` is executed once.
The inner loop runs for 16 iterations, and `lw` is executed once per iteration, giving 16 data reads.
2. The memory address has 32 bits total, which will be divided up into bits for the tag, index, block offset, and byte offset. Since there are 64 total words with 1 word/cache line, there must be 64 cache lines. The index bits must fully index into these 64 lines, which requires 6 bits. Each cache line holds one block, which is specified to be one word (32 bits or 4 bytes). Since there is only one word in each block, we need no bits (i.e., 0 bits) for the block offset. Since each byte has its own address, we need four possible offsets into each block, so two bits are sufficient for the byte offset. Tag: 24 bits, Index: 6 bits, Block offset: 0 bits, Byte offset: 2 bits.
   For each memory access, we look at bits [7:2] of the address to determine which index in the cache that particular memory address maps to. Remember that both instructions and the array are stored in memory. Thus, each instruction fetch is a memory access and is also cached along with all "normal" memory accesses in the form of lw instructions. Also note that this program has been executing for multiple cycles already, so many slots in the cache are already populated. We need to figure out which ones will result in cache misses. As we walk through the program for the first time, we will not count initializing the cache contents as cache misses.
   How many **instruction fetch misses occur during one complete iteration of the outer loop**? How many **data read misses**? Hint: remember that the array starts at address 0x310.
Ans:
Number of instruction fetch misses: __ 4 __
Number of data read misses: __ 4 __
Annotate:
TIO breakdown
- O bits = 2 bits ( $2^2$ bytes/block, i.e. byte offset )
- I bits = 6 bits ( $2^6$ slot in cache = cache size(64 Word) / block size (1 word))
- T bits = 32 - 2 - 6 = 24 bits



In Ripes we observe 16 misses in the data cache and 8 misses in the instruction cache, but those counts start from a cold cache: all 16 array reads and all 8 instruction fetches are compulsory misses on the first pass, and the question tells us not to count those.
In steady state, look at the cache line indices (address bits [7:2]). The 8 instructions occupy addresses 0x0–0x1C, i.e. indices 0–7, while the array words A[0]–A[15] occupy addresses 0x310–0x34C, i.e. indices 4–19. The four lines with indices 4–7 are shared between the instructions `lw`, `add`, `bne`, `j` and the array words A[0]–A[3], so each outer-loop iteration they evict each other.
This conflict produces 4 instruction fetch misses and 4 data read misses per complete iteration of the outer loop.
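The 4 + 4 steady-state misses can be located by computing the direct-mapped line index (address bits [7:2]) of every instruction and array word; the overlapping indices are exactly the lines that ping-pong each iteration. A sketch with the addresses from the problem:

```python
def line_index(addr, lines=64):
    # Direct-mapped, 1 word/line: index = word address mod number of lines.
    return (addr >> 2) % lines

# 8 instructions at 0x0..0x1C, 16 array words at 0x310..0x34C.
instr_idx = {line_index(0x0 + 4 * i) for i in range(8)}
array_idx = {line_index(0x310 + 4 * j) for j in range(16)}

print(sorted(instr_idx & array_idx))  # conflicting lines: [4, 5, 6, 7]
```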
3. What is the hit ratio measured after one complete iteration of the outer loop?
Ans: Hit ratio: __ 91/99 __
Annotate:
The total number of memory accesses is the sum of the instruction fetches and data reads $83+16=99$.
There were 4 instruction fetch misses and four data read misses, so the total number of cache hits is $83-4+16-4=91$.
The hit ratio is therefore $91/99$.
---
## Question F
Assume, the program shown below is being run on a RISC-V processor with a cache with the following parameters:
- 2-way set-associative
- block size of 2, i.e., 2 data words are stored in each cache line
- total number of data words in the cache is 32
- LRU replacement strategy
```
. = 0x240 // start of program
test:
addi x4, x0, 16 // initialize loop index J to size of array
mv x1, x0 // x1: sum
loop: // add up elements in array
subi x4, x4, 1 // decrement index
slli x2, x4, 2 // convert to byte offset
lw x3, 0x420(x2)// load value from A[J]
add x1, x1, x3 // add to sum
bnez x4, loop // loop N times
j test // perform test again!
// allocate space to hold array
. = 0x420
A: .word A[0]
.word A[1]
```
1. The cache will divide the 32-bit address supplied by the processor into four fields: 2 bits of byte offset, B bits of block offset, L bits of cache line index, and T bits of tag field. Based on the cache parameters given above, what are the appropriate values for B, L, and T?
Ans:
value for B: __ 1 __
value for L: __ 3 __
value for T: __ 26 __
Annotate:
TIO breakdown
- O bits = 2 bits ( $2^2$ bytes/word, i.e. byte offset )
- B bits = 1 bit ( $2^1$ words per block, i.e. block offset )
- L bits = 3 bits ( 32 words / 2 words-per-block / 2 ways = $2^3$ lines per way )
- T bits = 32 - 2 - 1 - 3 = 26 bits
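The field widths can also be derived mechanically from the cache parameters (a quick check of the breakdown above):

```python
from math import log2

total_words, words_per_block, ways = 32, 2, 2

byte_offset = 2                                         # 4 bytes per word
B = int(log2(words_per_block))                          # block offset bits
lines_per_way = total_words // words_per_block // ways  # = 8 lines per way
L = int(log2(lines_per_way))                            # line index bits
T = 32 - byte_offset - B - L                            # tag bits

print(B, L, T)  # 1 3 26
```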
2. If the `slli` instruction is resident in a cache line, what will be its cache line index? the value of the tag field for the cache?
Ans:
Cache **line index** for SLLI when resident in cache: __ 1 __
**Tag field** for SLLI when resident in cache: __ 0x9 __
Annotate:
Instruction address = base + instruction_index × 4 = 0x240 + 3 × 4 = 0x24C
0x24C = 0b**00000000000000000000001001**_**001**_1_00 (tag = 0b1001 = 0x9, line index = 0b001 = 1, block offset = 1, byte offset = 00)
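The same decomposition, done with bit operations:

```python
addr = 0x24C  # address of the slli instruction

byte_offset  = addr & 0b11          # bits [1:0]
block_offset = (addr >> 2) & 0b1    # bit  [2]
line_index   = (addr >> 3) & 0b111  # bits [5:3]
tag          = addr >> 6            # bits [31:6]

print(line_index, hex(tag))  # 1 0x9
```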
3. Given that the code begins at address 0x240 and the array begins at address 0x420, and that there are 16 elements in the array as shown in the code above, list all the values j ($0≤j≤16$) where the location holding the value A[j] will map to the same cache line index as the SLLI instruction in the program.
Ans:
List all j where A[j] has the same cache line index as SLLI: __ 10, 11 __ (elements `2j` and `2j+1` always share a line, for j = 0, ..., 7)
Annotate:
A[0] = 0x420, A[1] = 0x424, ..., A[14] = 0x458, A[15] = 0x45C.
Adjacent even/odd elements map to the same line because a line holds `2` words, so elements `2j` and `2j+1` always share a cache line. For A[10] = 0x448 and A[11] = 0x44C, the line index bits [5:3] are `0b001` = 1, the same as for the SLLI instruction at 0x24C.
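Checking all candidate indices j directly:

```python
# A[j] lives at 0x420 + 4*j; the line index is address bits [5:3].
slli_index = (0x24C >> 3) & 0b111  # = 1
matches = [j for j in range(17)    # the question allows 0 <= j <= 16
           if ((0x420 + 4 * j) >> 3) & 0b111 == slli_index]
print(matches)  # [10, 11]
```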

---
## Question G
Consider a 2-way set-associative cache where each way has 4 cache lines with a block size of 2 words. Each cache line includes a valid bit (V) and a dirty bit (D), which is used to implement a write-back strategy. The replacement policy is least-recently-used (LRU). The cache is used for both instruction fetch and data (LD,ST) accesses. Please use this cache when answering questions (A) through (D).
1. Using this cache, a particular benchmark program experiences an average memory access time (AMAT) of 1.3 cycles. The access time on a cache hit is 1 cycle; the miss penalty (i.e., additional access time) is 10 cycles. What is the hit ratio when running the benchmark program?
Ans:
Hit ratio for benchmark program: __ 0.97 __
Annotate:
AMAT = hit_time + (1 - hit_ratio) * miss_penalty
1.3 = 1 + (1 - HR) * 10
2. This cache is used to run the following benchmark program. The code starts at memory address 0; the array referenced by the code has its first element at memory address 0x200. First determine the number of memory accesses (both instruction and data) made during each iteration through the loop. Then estimate the steady-state average hit ratio for the program, i.e., the average hit ratio after many iterations through the loop.
```
. = 0
mv x3, x0 // byte index into array
mv x1, x0 // initialize checksum accumulator
loop:
lw x2, 0x200(x3) // load next element of array
slli x1, x1, 1 // shift checksum
addi x1, x1, 1 // increment checksum
add x1, x1, x2 // include data value in checksum
addi x3, x3, 4 // byte index of next array element
slti x2, x3, 1000 // process 250 entries
bnez x2, loop
unimp // halt
. = 0x200
array:
... /* array content */
```
Ans:
Number of memory accesses made during each iteration of the loop: __ 8 __
Estimated steady-state average hit ratio: __ 15/16 __
Annotate:
A cache line contains `2` words; therefore, every time we encounter a new element of the array that is not in the cache, the load actually fetches two words (i.e., two contiguous elements of the array). Thus, in the next iteration of the loop we do not need to fetch again, as the next element is already in the cache.
Only the data access (`lw`) misses, and only in every other iteration of the loop. Since there are 8 total memory accesses per iteration, one out of every 2 × 8 = 16 accesses is a miss.
Steady-state hit ratio = $(8-0.5)/8=15/16$
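The steady-state ratio as exact arithmetic:

```python
from fractions import Fraction

accesses_per_iter = 8             # 7 instruction fetches + 1 data read
misses_per_iter = Fraction(1, 2)  # the lw misses every other iteration

hit_ratio = (accesses_per_iter - misses_per_iter) / accesses_per_iter
print(hit_ratio)  # 15/16
```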
---
## Question H
Consider the diagram below for a 2-way set associative cache to be used with our RISC-V processor. Each cache line holds a single 32-bit word of data along with its associated tag and valid bit (0 when the cache line is invalid, 1 when the cache line is valid).

1. The RISC-V produces 32-bit byte addresses, A[31:0]. To ensure the best cache performance, which address bits should be used for the cache index? For the tag field?
Ans:
address bits used for cache index: A[ __ 4:2 __ ]
address bits used for tag field: A[ __ 31:5 __ ]
Annotate:
TIO breakdown
- O bits = 2 bits ( $2^2$ bytes/block, i.e. byte offset )
- B bits = 0 bits ( $2^0$ = 1 word block size )
- I bits = 3 bits ( $2^3$ lines per way implies index )
- T bits = 32 - 2 - 0 - 3 = 27 bits
2. Suppose the processor does a read of location 0x5678. Identify which cache location(s) would be checked to see if that location is in the cache. For each location specify the cache section (P or Q) and row number (0 through 7). E.g., 3P for row 3, section P. If there is a cache hit on this access what would be the contents of the tag data for the cache line that holds the data for this location?
Ans:
cache location(s) checked on access to 0x5678: __ 6P and 6Q __
cache tag data on hit for location 0x5678 (hex): __ 0x2B3 __
Annotate:
`0x5678 = 0b0...0_0101_0110_0111_1000 = 0b0...001010110011_110_00`
Thus, row 6 of both cache sections P and Q is checked, since the index bits [4:2] of the address are 0b110 = 6.
Tag = `0b0...001010110011 = 0x2B3`
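The decomposition of 0x5678 in code:

```python
addr = 0x5678

row = (addr >> 2) & 0b111  # index bits A[4:2]
tag = addr >> 5            # tag bits A[31:5]

print(row, hex(tag))  # 6 0x2b3
```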
3. Estimate the approximate cache hit ratio for the following program. Assume the cache is empty before execution begins (all the valid bits are 0) and that an LRU replacement strategy is used. Remember the cache is used for both instruction and data (LD) accesses.
```
. = 0
addi x4, x0, 0x100
mv x1, x0
lui x2, 1 // x2 = 0x1000
loop: lw x3, 0(x4)
addi x4, x4, 4
add x1, x1, x3
addi x2, x2, -1
bnez x2, loop
sw x1, 0x100(x0)
unimp // halt
. = 0x100
source:
. = . + 0x4000 // Set source to 0x100, reserve 0x1000 words
```
Ans:
approximate hit ratio: __ 5/6 __
Annotate:
Assume that the program has been executing for multiple cycles already.
Therefore, in the loop, the five instructions from `lw` to `bnez` stay in the instruction cache, giving 5 instruction-fetch hits per iteration.
But the data read by `lw x3, 0(x4)` always misses, since each cache line holds only a single word (no spatial locality) and each iteration loads a new address.
Thus, each iteration has 5 instruction fetches (hits) and 1 data fetch (always a miss), giving a hit ratio of 5/6.
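And the resulting ratio as exact arithmetic:

```python
from fractions import Fraction

# Per iteration: 5 instruction fetches (hits) + 1 data read (miss).
hits, accesses = 5, 6
print(Fraction(hits, accesses))  # 5/6
```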
---