# Computer Architecture 107 Past Exam
###### tags: `計算機結構 Computer Architecture note`, `110-1`
# Contents
[TOC]
###### [Past exam archive (雄爾頓)](https://drive.google.com/drive/folders/1ZBme_nT0ZntMnTNXa5t0WzreIr4zvBp4?fbclid=IwAR0RDhWzzBUDaxpjb8YBhsbKezM1KCKVDiCDGrLM_OXl6A_Af7jdC24pygU)
---
# 1. Explain the following terms briefly and clearly.
(a) page mode DRAM
- A technique that improves DRAM operation for better performance.
- An SRAM row buffer of size (column size × bit planes) is added to hold one row. Once a DRAM row has been read into this SRAM buffer, any bits to be output that share the same row address can be read by supplying only a column address to the SRAM.
(In short, as long as accesses stay within the same row, the row access does not have to be repeated.)
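The saving can be sketched with a tiny, hypothetical model that counts how many times a row must be activated (read into the row buffer) for a sequence of addresses; the `row_bits` split and the addresses are illustrative assumptions, not part of the original question.

```python
def row_activations(accesses, row_bits=10, page_mode=True):
    """Count how many times a DRAM row must be read into the row buffer
    for a sequence of byte addresses (high bits select the row)."""
    open_row = None
    activations = 0
    for addr in accesses:
        row = addr >> row_bits
        if not page_mode or row != open_row:
            activations += 1          # must (re-)activate the row
            open_row = row
    return activations

# Four accesses inside the same 1 KiB row:
seq = [0x400, 0x404, 0x408, 0x40C]
print(row_activations(seq, page_mode=False))  # 4 row activations
print(row_activations(seq, page_mode=True))   # 1 row activation
```

With page mode, only the first access pays for the row read; the other three are served from the SRAM row buffer using just a column address.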
(b) temporal locality
Once an address has been accessed, it is likely to be accessed again in the near future.
(c ) compulsory
A: A miss that cannot be avoided: the very first access to a block (a cold-start miss) must miss, because the block has never been in the cache before.
(d) TLB
A: Translation Lookaside Buffer
A small cache whose entries consist of a valid bit, a virtual page number (page#), and a physical page number (frame#). It keeps track of recently used logical-to-physical translations so that a memory reference does not require two accesses to main memory.

- During paging without a TLB, the logical address (page#) is translated into a physical address (frame#) through the page table (one access to main memory), and then the data at that physical address must be read (a second access to main memory).


- With a TLB, the hardware first checks whether the logical address is in the TLB; on a TLB hit, the desired physical address is obtained directly.
- Only on a TLB miss does the hardware go to main memory to consult the page table, which reduces the extra CPU stall cycles that page-table walks in main memory would otherwise cause.
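The lookup order above can be sketched as follows; the 4 KiB page size, the dictionary-based TLB/page table, and the mappings are all illustrative assumptions (page-fault handling is omitted).

```python
PAGE_BITS = 12                       # assume 4 KiB pages

tlb = {}                             # page# -> frame#  (small, fast)
page_table = {0: 5, 1: 9, 2: 3}      # page# -> frame#  (lives in main memory)

def translate(vaddr):
    """Translate a virtual address, checking the TLB before the page table."""
    page = vaddr >> PAGE_BITS
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    if page in tlb:                  # TLB hit: no memory access for the mapping
        frame = tlb[page]
    else:                            # TLB miss: one extra main-memory access
        frame = page_table[page]     # (page fault handling omitted)
        tlb[page] = frame            # refill the TLB for next time
    return (frame << PAGE_BITS) | offset

print(hex(translate(0x1234)))        # miss: walks the page table -> 0x9234
print(hex(translate(0x1fff)))        # hit: served from the TLB   -> 0x9fff
```

The second call to the same page avoids the page-table walk entirely, which is exactly the stall-cycle saving described above.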


(e) bus arbiter
A: When several devices on the bus try to place values on it at the same time, the bus arbiter decides which device may use the bus. It grants the bus to one requester at a time according to an arbitration scheme (e.g. fixed priority, round-robin, or daisy chaining), so that there is never more than one bus master.
# 2. Cache memory
(a) Explain what are write-through cache and write-back cache. Compare their respective advantages and disadvantages.
- write-through cache: every store writes both the cache and main memory, so memory always stays consistent with the cache.
    - Advantages: simple to implement; on a miss the victim block can be replaced without being written back.
    - Disadvantages: every store generates memory traffic, so writes run at memory speed unless a write buffer absorbs them.
- write-back cache: a store updates only the cache block and marks it dirty; the block is written to memory only when it is replaced.
    - Advantages: repeated stores to the same block cost only one memory write, saving memory bandwidth.
    - Disadvantages: more complex (dirty bit, write-back on eviction), and main memory can be temporarily inconsistent with the cache.
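The contrast can be sketched with a hypothetical single-block "cache" that only counts its traffic to main memory; the class and its policy names are illustrative, not a real cache model.

```python
class OneBlockCache:
    def __init__(self, policy):
        self.policy = policy                 # "through" or "back"
        self.tag, self.dirty = None, False
        self.data = None
        self.mem_writes = 0                  # traffic to main memory

    def store(self, tag, value):
        if self.tag != tag:                  # block being replaced
            if self.policy == "back" and self.dirty:
                self.mem_writes += 1         # write-back: flush dirty victim
            self.tag, self.dirty = tag, False
        self.data = value
        if self.policy == "through":
            self.mem_writes += 1             # write-through: store also goes to memory
        else:
            self.dirty = True                # write-back: just mark the block dirty

wt, wb = OneBlockCache("through"), OneBlockCache("back")
for v in range(4):                           # 4 stores to the same block
    wt.store(0, v)
    wb.store(0, v)
print(wt.mem_writes, wb.mem_writes)          # 4 vs 0 (block is still dirty in cache)
```

Four stores to one block cost four memory writes under write-through but none (yet) under write-back, which is the bandwidth trade-off listed above.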
(b) What is the purpose of write buffers for write-through cache?
A: The write buffer holds stores that are waiting to be written to main memory. The CPU only has to place the data in the buffer and can continue immediately instead of stalling for the full memory write; the buffer drains to memory in the background, and the CPU stalls only when the buffer is full.
(c ) A write buffer is also useful with write-back cache. Why?
A: When a dirty block must be replaced, it can first be copied into the write buffer; the new block is then read from memory right away, and the dirty block is written back afterwards. This overlaps the write-back with the read and shortens the miss penalty.
---
# 3. Compare the difference with memory-mapped I/O and special I/O instruction, and write their advantages and disadvantages respectively.
---
**The I/O material is not tested in Computer Architecture, but it is tested in OS.**
In short, one uses the ordinary memory bus and address space, the other uses a separate port address space with special instructions.
- Memory-mapped I/O: device control and data registers are assigned addresses within the normal memory address space and are accessed with ordinary load/store instructions. Advantages: no extra instructions are needed and any addressing mode can be used. Disadvantages: it consumes part of the memory address space, and the address decoder must distinguish I/O addresses from memory.
- Special I/O instructions: the CPU provides dedicated instructions (e.g. x86 `in`/`out`) that address a separate I/O port space, with its own access protocol. Advantages: the memory address space is untouched and I/O accesses are explicit. Disadvantages: extra instructions and a special protocol are required, and usually only simple addressing is available.
---
# 4. What is DMA? Draw a figure to explain your answer.
---
**The I/O material is not tested in Computer Architecture, but it is tested in OS.**
DMA (Direct Memory Access): a DMA controller transfers a block of data between an I/O device and main memory directly over the bus, without involving the CPU in each word transfer. The CPU only programs the controller (source, destination, length) and is interrupted once when the whole transfer finishes, so it can do other work in the meantime.
(Figure: CPU, main memory, and DMA controller share the system bus; the DMA controller moves data between the device and memory while the CPU keeps executing.)
---
# 5. Virtual memory
(a) How is a virtual address mapped to a physical address using page table? Draw diagram to help explain it.
A: The virtual address is split into a virtual page number and a page offset. The virtual page number indexes the page table, whose entry (if valid) gives the physical frame number; the frame number concatenated with the unchanged page offset forms the physical address.
(Diagram: virtual address = [page# | offset] → page table lookup → [frame# | offset] = physical address.)
(b) How the system will do if there is a page fault?
Put the currently running process on the waiting queue and trap to the OS, which carries out the required steps (find a free frame, schedule a disk read). Once the OS has brought the wanted page in from disk, it updates the page table and restarts the faulting instruction.
---
(c ) When the TLB is hit, is it guaranteed that the processor can find the data in main memory? Why?
Yes. The TLB holds translations for recently referenced addresses: it is a subset of the page table, and the page table maps only to frames that are present in main memory. A translation is written into the TLB only after the page table has successfully produced a physical address for the logical address, so any physical address obtained from a TLB hit is guaranteed to be found in main memory.
---
(d) There is a cache. Is it possible that TLB hits while cache miss? Why?
Yes, it is possible. The TLB caches address translations while the cache holds data, and the two are independent: a TLB hit yields the physical address immediately, but the data block at that address may not be in the cache (for example, it was evicted). The sequence is then: TLB hit ➜ cache miss ➜ read the block from main memory into the cache ➜ return the data to the CPU to complete the instruction.
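The possible hit/miss combinations can be sketched as a small consistency check (the classic table from architecture textbooks); the function below is an illustrative sketch, encoding only the two constraints stated above.

```python
def possible(tlb_hit, pt_hit, cache_hit):
    """Is this (TLB, page table, cache) hit/miss combination possible?"""
    if tlb_hit and not pt_hit:
        return False        # TLB entries are a subset of the page table
    if not pt_hit and cache_hit:
        return False        # data of a page not in memory cannot be cached
    return True

print(possible(True,  True,  False))   # True: TLB hit + cache miss (this question)
print(possible(True,  False, True))    # False: TLB hit implies page table hit
print(possible(False, False, True))    # False: absent page cannot have cached data
```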
---
# 6. Assume that the memory address of a computer system is 32 bit. The memory location is byte addressed. Suppose we use a direct mapped cache for this computer system. The widths in a 32-bit virtual address for byte offset (in block), cache index, and cache tag are 5 bits, 11 bits, and 16 bits, respectively.
(a) How large is the address space? What size is a cache block?
1. The byte offset, cache index, and tag together cover the whole address: 5 + 11 + 16 = 32 bits, so the address space is $2^{32}$ bytes (4 GB).
2. The byte offset within a block is 5 bits, so one cache block is $2^5 = 32$ bytes $= 2^3 = 8$ words.
(b) What is the total size of the direct mapped cache?
A: Each cache entry holds 1 valid bit, a 16-bit tag, and a 32-byte data block ($2^3$ words $\times$ 32 bits $= 256$ bits), so the total size is
$2^{11}\times(1+16+32\times 2^3) = 2^{11}\times 273 = 559{,}104$ bits $\approx 68.25$ KiB.
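The arithmetic in (a) and (b) can be checked quickly; the variable names below are just labels for the field widths given in the question.

```python
offset_bits, index_bits, tag_bits = 5, 11, 16
assert offset_bits + index_bits + tag_bits == 32   # fields cover the full 32-bit address

block_bytes    = 2 ** offset_bits                  # 32 bytes = 8 words per block
data_bits      = block_bytes * 8                   # 256 data bits per block
bits_per_entry = 1 + tag_bits + data_bits          # valid + tag + data = 273 bits
total_bits     = (2 ** index_bits) * bits_per_entry

print(block_bytes)        # 32
print(total_bits)         # 559104 bits (about 68.25 KiB)
```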
(c ) Suppose the cache size is fixed and is **fully associative** now. What range of bits of the cache index, and the offset?
A: In a fully associative cache there is no index, because any block can go anywhere. The byte offset is still bits [4:0] (5 bits), and the remaining bits [31:5] (27 bits) are all tag.
(d) Suppose the cache size is fixed and is **4-way associative** now. What are range of bits of the cache index, and the offset?
A: The same $2^{11}$ blocks grouped 4 per set give $2^{11}/4 = 2^9$ sets, so the index shrinks to 9 bits [13:5]; the byte offset stays bits [4:0] (5 bits), and the tag grows to 18 bits [31:14].
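How the field widths shift with associativity, for the fixed cache of $2^{11}$ blocks of 32 bytes, can be sketched as follows (the helper function is illustrative):

```python
import math

ADDR_BITS, OFFSET_BITS, BLOCKS = 32, 5, 2 ** 11

def field_widths(ways):
    """Return (tag, index, offset) bit widths for a given associativity."""
    sets = BLOCKS // ways
    index = int(math.log2(sets)) if sets > 1 else 0   # fully associative: no index
    tag = ADDR_BITS - index - OFFSET_BITS             # tag takes whatever remains
    return tag, index, OFFSET_BITS

print(field_widths(1))        # direct mapped:     (16, 11, 5)
print(field_widths(4))        # 4-way:             (18,  9, 5)
print(field_widths(BLOCKS))   # fully associative: (27,  0, 5)
```

Doubling the associativity removes one index bit and adds one tag bit; the offset never changes because the block size is fixed.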