# Operating Systems
### Operating System Fundamentals
#### OS
* Acts as an intermediary between users and the computer hardware
* Manages the computer hardware
#### Goals of an OS (note: Q1_ANS)
* Execute user programs
* Make solving user problems easier
* Make the computer system convenient to use, and use the hardware efficiently
### A computer system can be divided into four components
Hardware – provides the basic computing resources.
Operating system – controls and coordinates the use of the hardware among applications and users.
Application programs – define the ways in which system resources are used to solve users' computing problems.
Users – people, machines, other computers.
### Different operating systems, different viewpoints:
#### User view
Personal computers
* Ease of use
* Performance
Mainframes (minicomputers)
* Resource utilization
Workstations
* Connected to networks of other workstations and servers
* A compromise between individual usability and resource utilization
Mobile devices (smartphones and tablets)
* Touch-screen and voice-recognition interfaces
* Usability and battery life
Embedded computers (home appliances, automobiles, etc.)
* Little or no user view
* Run without user intervention.
#### System view
The OS is a resource allocator
* Manages all resources
* Decides between conflicting requests for efficient and fair use of the limited resources
The OS is a control program
* Controls program execution to prevent errors and improper use of the computer
* Especially concerned with I/O devices
### Definition of an Operating System
#### Kernel:
The one program that runs at all times on the computer.
#### System programs:
Associated with the operating system but not necessarily part of the kernel.
#### Middleware
A set of software frameworks that provide additional services to application developers.
For example, as provided by iOS and Android.
#### Application programs
Programs not directly related to the operation of the system.
### Why study operating systems?
* Almost all code runs on top of an operating system.
* Essential for writing correct, efficient, and secure programs.
* Useful to those who write operating systems, those who write programs on top of them, and those who use them.
:::success
### Q1:What are the three main purposes of an operating system?
(note: Q1_ANS)
### Q2:We have stressed the need for an operating system to make efficient use of the computing hardware. When is it appropriate for the operating system to forsake this principle and to "waste" resources? Why is such a system not really wasteful?
When doing so ensures the system's reliability, security, or maintainability, improves the user experience, or provides a higher quality of service. Such a system is not really wasteful, because the "wasted" capacity is being spent on what the system is actually for.
:::
## Computer System Organization
### Structure of a modern general-purpose computer:
* One or more central processing units (CPUs) and device controllers
* Connected through a common bus that provides access to shared memory
* Each device controller has a device driver that gives the operating system a uniform interface to the device.
### I/O devices and the CPU can execute concurrently.
* The CPU and the device controllers compete for memory cycles.
* Each device controller is in charge of a particular device type.
* Each device controller has a local buffer.
* The CPU moves data between main memory and the local buffers.
* I/O is from the device to the controller's local buffer.
### Consider a program performing I/O
* The device driver loads the appropriate registers in the controller
* The device controller examines the registers to determine the operation
* The controller starts transferring the data
* The controller informs the device driver when the operation has finished
* The device driver then returns control to the operating system
:::success
### How does the controller inform the device driver that it has finished its operation?
**By raising an interrupt**
:::
### Interrupts
* A key part of how the OS and the hardware interact
* Software side: an interrupt is triggered by executing a system call (monitor call).
* Hardware side: an interrupt is a signal sent to the CPU over the system bus.
### When the CPU is interrupted
1. It stops what it is doing.
2. Control is transferred to a fixed location.
3. That location contains the starting address of the interrupt service routine.
4. The interrupt service routine executes.
5. When it finishes, the CPU resumes the interrupted computation.
### How is control transferred to the appropriate interrupt service routine?
#### Invoke a generic routine to examine the interrupt information
* The generic routine calls the interrupt-specific handler
* A table of pointers can be used to speed up interrupt dispatch
#### Interrupt vector
* Stored in low memory
* Holds the addresses of the interrupt service routines for the various devices
#### The address of the interrupt service routine is found by indexing the vector with the unique number given with the interrupt request
### Implementation of Interrupts:
A hardwired interrupt-request line.
The CPU senses and detects the signal.
It reads the interrupt number and uses it as an index.
It jumps to the interrupt-handler routine, which will:
* Save any state it will change.
* Determine the cause.
* Handle the interrupt.
* Restore the saved state.
* Execute a return_from_interrupt instruction to return the CPU to the pre-interrupt execution state.
### Handling an Interrupt
* The device controller raises an interrupt
    * by asserting a signal on the interrupt-request line
* The CPU catches the interrupt
    * and dispatches it to the interrupt handler
* The handler clears the interrupt
    * by servicing the device
==This allows the CPU to respond to asynchronous events==
### Advanced Interrupt Handling
* The ability to defer interrupt handling
* Efficient dispatching of interrupts
* Multilevel interrupts
#### Advanced interrupt handling is provided by the CPU and the interrupt-controller hardware
Two interrupt-request lines:
* Nonmaskable interrupts
* Maskable interrupts
#### Interrupt chaining
* Each element of the interrupt vector points to the head of a list of interrupt handlers
* The handlers on the list are invoked in turn
* A compromise between the size of the interrupt table and dispatch efficiency
#### Interrupt priority
* The index into the interrupt vector indicates the priority
* E.g., Intel processors have 256 entries: 0–31 are nonmaskable and 32–255 are maskable. This design is itself a compromise between table size and efficiency.
### Storage Structure
#### Main memory
* Rewritable
* Where programs are loaded for execution
* Random-access memory
* Typically implemented with dynamic random-access memory (DRAM)
#### Electrically erasable programmable read-only memory (EEPROM)
* Non-volatile
* Slower
* Examples: bootstrap program, firmware, serial numbers, hardware information
### Storage Definitions and Notation
* bit: 0 and 1
* byte (8 bits): the smallest convenient unit of storage
* Word:
    * The native unit of data for a given computer architecture
    * Made up of one or more bytes
### Von Neumann architecture – instruction-execution cycle
* fetch
* decode
* execute
#### Programs and data cannot reside in main memory permanently
* Main memory is small
* Main memory is volatile
#### Secondary storage
* An extension of main memory
* Non-volatile
* Examples: hard-disk drives (HDD), non-volatile memory devices (NVM)
* Can serve as both the source and the destination of processing
### I/O Structure
* Direct Memory Access (DMA)
* Used for high-speed I/O devices able to transmit information at close to memory speeds
* The device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention
* Only one interrupt is generated per block, rather than one per byte
:::success
### Direct memory access is used for high-speed I/O devices in order to avoid increasing the CPU’s execution load.
* #### How does the CPU interface with the device to coordinate the transfer?
The CPU issues commands telling the DMA controller which I/O device to read from, where in memory to place the data, and how much data to transfer; the DMA controller then takes over the transfer.
* #### How does the CPU know when the memory operations are complete?
The DMA controller raises an interrupt. The CPU then suspends its current task and services the interrupt, which involves checking the status of the DMA transfer.
* #### The CPU is allowed to execute other programs while the DMA controller is transferring data. Does this process interfere with the execution of the user programs? If so, describe what forms of interference are caused.
Yes – cycle stealing.
==Cycle stealing: the DMA controller steals machine cycles from the CPU in order to access memory directly, so the CPU can run concurrently with the peripheral. If the CPU and the DMA controller conflict over a memory access, the DMA controller is given the higher priority (the less costly operation goes first).==
:::
## Computer-System Architecture
#### Symmetric Multiprocessing (SMP)
* Each processor performs all tasks:
    * operating-system functions
    * user processes
* Each processor has its own registers and cache
* All processors share physical memory
* Advantage:
    * processes can run simultaneously
* Disadvantages:
    * load balancing must be managed
    * if the processors share data structures, concurrent access to them must be controlled
:::success
### Many SMP systems have different levels of caches; one level is local to each processing core, and another level is shared among all processing cores. Why are caching systems designed this way?
1. Performance – a per-core cache serves most accesses without touching the slower shared levels or main memory.
2. Reduced memory contention – in a multiprocessor system, several processors may try to access the same memory at once, causing contention and delay; local caches absorb much of that traffic, while the shared level lets the cores exchange data without going all the way to main memory.
### Consider an SMP system similar to the one we have shown. Illustrate with an example how data residing in memory could in fact have a different value in each of the local caches.
Suppose two cores, A and B, both work on the same memory location. Core A reads the value into its local cache and then updates it there. If core B now reads the same location, it loads (or still holds in its own cache) the old value, because A's update has not yet reached main memory or B's cache. A and B then see different values for the same datum. This is the cache-coherence problem, and a mechanism is needed to keep the data consistent across the cores' caches.
:::
### Trap
Trap = internal interrupt + software interrupt
* External interrupt: raised by peripheral components outside the CPU (I/O-complete interrupt, I/O device error)
* Internal interrupt: raised by the CPU itself due to an illegal operation (debug, divide-by-zero, overflow)
* Software interrupt: when a running user program needs a service from the OS, it issues a system call; the OS executes the corresponding ISR and, when the request is complete, returns the result to the user program.
:::success
### How does an interrupt differ from a trap?
An interrupt is an event generated by a hardware device (or by a software signal); a trap (also called an exception) is a software-generated event caused by an error or by a user-requested condition. Traps are used to handle internal events.
### Can traps be generated intentionally by a user program? If so, for what purpose?
Yes. When a user program needs an OS service, it issues a system call, which traps into the OS; the OS executes the corresponding ISR (Interrupt Service Routine) and returns the result to the user program.
* E.g., requesting the current time from the OS
:::
:::success
### Describe a mechanism for enforcing memory protection in order to prevent a program from modifying the memory associated with other programs.
Use base and limit registers: the base register holds the smallest legal physical address for the process, and the limit register holds the size of its range. The hardware checks every user-mode memory access against these two registers, and any access outside the range traps to the OS. Loading the base and limit registers is a privileged instruction, so only the OS can change them.
:::
### Dual Mode
* User mode and kernel mode
* The hardware provides a mode bit
    * It lets the system distinguish whether kernel code or user code is running
* Some instructions are designated privileged; only kernel code may execute them
* More and more CPUs support multi-mode operation
    * E.g., a virtual machine manager (VMM) mode for guest VMs
### Timer
Prevents infinite loops and processes from hogging resources
:::success
#### How does the distinction between kernel mode and user mode function as a rudimentary form of protection (security)?
By restricting what programs can do in user mode — privileged operations on system resources must go through the OS in kernel mode — the system prevents user programs from compromising its security and stability.
#### Which of the following instructions should be privileged?
* a. Set value of timer.
* b. Read the clock.
* c. Clear memory.
* d. Issue a trap instruction.
* e. Turn off interrupts.
* f. Modify entries in device-status table.
* g. Switch from user to kernel mode.
* h. Access I/O device.
a, c, e, f, h (setting the timer, clearing memory, turning off interrupts, modifying the device-status table, and accessing I/O devices must be privileged; reading the clock and issuing a trap are safe in user mode, and the switch to kernel mode happens via a trap rather than a privileged instruction)
:::
## Resource Management
### Process Management
:::success
#### Discuss, with examples, how the problem of maintaining coherence of cached data manifests itself in the following processing environments:
* a. Single-processor systems
* b. Multiprocessor systems
* c. Distributed systems
* a. Single-processor systems: a datum may exist simultaneously in a register, the cache, and main memory; all copies must be kept consistent (for example, a DMA transfer that writes main memory can leave a stale copy in the cache).
* b. Multiprocessor systems: each core's local cache may hold its own copy of the same datum, so a write by one core must be propagated to, or invalidated in, the other caches (as in the SMP example earlier).
* c. Distributed systems: copies of the same data may be cached on several machines; an update at one site must be propagated to the replicas at the other sites.
:::
:::success
#### Some early computers protected the operating system by placing it in a memory partition that could not be modified by either the user job or the operating system itself. Describe two difficulties that you think could arise with such a scheme.
* Hard to upgrade or modify: because the OS lives in a partition that even it cannot modify, any upgrade or patch requires unlocking or resetting the whole protected partition, which interrupts the system; this makes OS updates considerably harder.
* Wasted resources: placing the OS in its own fixed protected partition can waste memory. If the OS needs less memory than the partition provides, the surplus sits in a large region that no other program can use, wasting memory and reducing overall efficiency; since user programs cannot touch the protected partition, its spare capacity is lost to them.
:::
:::success
#### Describe some of the challenges of designing operating systems for mobile devices compared with designing operating systems for traditional PCs.
Limited hardware resources: mobile devices have limited hardware — smaller storage, less memory, low-power CPUs — so the OS must work within these limits while meeting user needs, without draining the battery or slowing the device.
Diverse devices and platforms: a mobile OS must accommodate many different devices and platforms, such as iOS, Android, and Windows Phone, each with different hardware, software, and user interfaces, which makes designing a general-purpose OS more challenging.
Varied usage scenarios: mobile devices are used indoors, outdoors, and on the move, with different input methods — touch, voice, handwriting — so the OS must support multiple input modes and remain responsive and usable in all of these situations.
Security and privacy: mobile devices store personal data, financial information, and other sensitive material, so the OS must enforce strict security and privacy protections to keep user data safe and confidential.
:::
:::success
#### What are some advantages of peer-to-peer systems over client–server systems?
Decentralization: a P2P system does not depend on a central server; resources are spread across many nodes, making the system more robust against single points of failure and better suited to large-scale distributed applications such as file sharing, information retrieval, and collaborative work.
Scalability: a P2P system scales more easily, since adding nodes increases its capacity and resilience; it also adapts well to changing network conditions, because nodes can join or leave without seriously disrupting the system.
Efficiency: a P2P system can use the bandwidth and storage of its nodes more effectively, because resources are shared and exchanged directly between nodes rather than through a central server, and each node can adjust its participation to its own resources and network state.
Overall, the decentralization, scalability, and efficiency of P2P systems give them an advantage over client–server systems in some application scenarios.
#### Describe some distributed applications that would be appropriate for a peer-to-peer system.
File sharing: nodes exchange files directly with one another, e.g., BitTorrent.
Information retrieval: distributed indexes support efficient search, e.g., Gnutella.
Collaboration: nodes interact directly for efficient collaborative work, e.g., Skype.
Distributed computing: computation tasks are split across many nodes, e.g., SETI@home.
:::
:::success
#### List five services provided by an operating system, and explain how each creates convenience for users. In which cases would it be impossible for user-level programs to provide these services? Explain your answer.
* User interface – lets users interact with the system without knowing hardware details
* Program execution – loads and runs programs for the user
* File-system manipulation – read, write, create, and delete files without driving the disk directly
* Communications – exchange information between processes or machines
* Error detection – detects and handles hardware and software errors
* I/O operations – performs device I/O on the program's behalf
Services such as I/O operations and error detection cannot be provided by user-level programs, because they require privileged instructions and direct access to the hardware.
:::
### Operating System Services
#### For the user:
1. User interface
    * Graphical User Interface (GUI): window system with mouse and keyboard
    * Touch-screen interface: used in mobile systems
    * Command-Line Interface (CLI): text commands and keyboard
2. Program execution
3. I/O operations – a running program may require I/O, which may involve a file or an I/O device
    * The OS provides the means to do I/O
4. File-system manipulation
5. Communications
6. Error detection
#### For the system:
1. Resource allocation
2. Logging
3. Protection and security
:::success
#### What is the purpose of system calls? Give six major categories of system calls.
System calls are the programming interface through which a running program requests services from the operating system. Six major categories:
1. Process control
2. Communication
3. Device management
4. File management
5. Information maintenance
6. Protection
#### What system calls have to be executed by a command interpreter or shell in order to start a new process on a UNIX system?
fork() -> exec() -> exit()
* fork() – create the new process
* exec() – load the new program into the process
* exit() – terminate the process
:::
:::success
#### Why are applications operating-system specific? Give three reasons.
APIs – each OS provides different APIs, so a program built against one OS's API cannot run on another
CPU instruction set – different instruction sets
System calls – different system-call conventions
:::
:::success
#### Briefly describe the relations of operating system goals, policies, and mechanisms.
goals = user goals + system goals
* user goals – the system should be easy to use
* system goals – the system should be easy to maintain, secure, reliable, and error-free
policies – what will be done
mechanisms – how to do it
Policies are chosen to meet the goals, and mechanisms implement the policies; separating the two lets a policy change without rewriting the mechanism.
:::
### OS Structure
#### Monolithic
All system functionality — process management, the file system, device drivers, memory management, and so on — is built directly into the kernel. The kernel is one large, complex program, and every system call executes inside it.
Pros:
* High performance
Cons:
* Hard to extend and implement
:::success
#### Briefly describe monolithic operating system. Give one pro and con for such system.
pro – high performance
con – hard to extend
#### Briefly describe layered operating system. Give one pro and con for such system.
pro – simple to construct and debug
con – performance overhead
#### Give three characteristics for microkernel operating system.
High extensibility
High reliability
High performance overhead
:::
:::success
#### How are iOS and Android similar? How are they different?
similar:
* similar layered structure
* both designed for mobile devices
different:
* Android is Linux-based; iOS is UNIX-based
#### Explain why Java programs running on Android systems do not use the standard Java API and virtual machine.
Android compiles Java bytecode into its own .dex format, which is executed by Android's runtime (the Dalvik VM, later ART) rather than a standard JVM, and its class library differs from the standard Java API.
:::
:::success
#### Give the three tasks that a boot loader performs.
* Load the OS kernel into memory
* Initialize the hardware
* Start the OS
#### Why do some systems store the operating system in firmware, while others store it on disk?
* Security – firmware can be made read-only so the OS cannot be modified arbitrarily; an OS stored on disk can be.
* Convenience – an OS stored on disk is easier to update and modify.
* Performance – an OS stored in firmware can be accessed faster.
#### How could a system be designed to allow a choice of operating systems from which to boot? What would the bootstrap program need to do?
A small piece of code known as the bootstrap program or boot loader can present a menu of the installed operating systems and locate the kernel the user selects. The bootstrap program then needs to:
* load that kernel into memory
* initialize the hardware
* mount the root file system and start the OS
:::
:::success
#### Give two approaches for monitoring an operating system, and briefly describe them.
* Counters – the kernel keeps counts of system events (system calls, page faults, network packets, etc.) that monitoring tools can read, e.g., via /proc on Linux.
* Tracing – the system records a log of specific events as they occur, e.g., strace logging the system calls made by a process.
:::
:::success
#### When a process creates a new process using the fork() operation, which of the following states is shared between the parent process and the child process?
shared memory segment
:::
:::success
#### Some computer systems provide multiple register sets. Describe what happens when a context switch occurs if the new context is alreadyloaded into one of the register sets. What happens if the new context is in memory rather than in a register set and all the register sets are in use?
The CPU simply switches to the register set that already holds the new context. If the new context is in memory and all register sets are in use, the contents of one register set must first be saved to main memory, and the new context must then be loaded from memory into the freed register set.
:::
:::success
#### Describe the actions taken by a kernel to context-switch between processes.
1. The kernel receives an interrupt (or system call)
2. It saves the state of the current process into its PCB
3. It loads the saved state of the new process and resumes its execution
:::
:::success
#### Including the initial parent process, how many processes are created by the program shown below?
```C++
#include <stdio.h>
#include <unistd.h>

int main() {
    for (int i = 0; i < 4; i++)
        fork();
    return 0;
}
```
ANS: $2^4 = 16$ (each fork() doubles the number of processes)
:::
:::success
#### Explain the circumstances under which the line of code marked printf("LINE J"); will be reached
```C++
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    pid_t pid = fork();                  // fork a child process
    if (pid < 0) {                       // error occurred
        fprintf(stderr, "Fork Failed");
        return 1;
    }
    else if (pid == 0) {                 // child process
        execlp("/bin/ls", "ls", NULL);
        printf("LINE J");
    }
    else {                               // parent process
        wait(NULL);                      // parent waits for child to complete
        printf("Child Complete");
    }
}
```
LINE J is reached only if execlp() fails (for example, if /bin/ls does not exist). On success, execlp() replaces the child's process image with /bin/ls, so the statements after it never execute.
:::
## CH5 CPU Scheduling
### Basic Concepts
goal – maximum CPU utilization
CPU–I/O burst cycle – process execution consists of a cycle of CPU execution and I/O wait
Each CPU burst is followed by an I/O burst
The distribution of CPU burst lengths is the main concern
### CPU Scheduler (Short-Term Scheduler)
Allocates the CPU to a process as processes move between states.
Runs very frequently – hence "short-term".
Works with the process control blocks (PCBs) of the processes.
CPU-scheduling decisions take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
In cases 1 and 4, a new process must be selected for execution.
Nonpreemptive (cooperative) – scheduling occurs only in cases 1 and 4; ==no other process can seize the CPU from the running one==
Preemptive – ==a running process can be forced to give up the CPU==; all modern operating systems use preemptive scheduling.
Preemption causes problems when a process is interrupted:
* while accessing shared data
    * a half-finished update can corrupt the shared data
* while in kernel mode
    * important kernel data structures can be corrupted
* during interrupt handling
    * which drives crucial OS activities.
#### Dispatcher vs. Scheduler
Scheduler – plans the schedule: decides which process runs next
Dispatcher – carries out the schedule: gives control of the CPU to the chosen process
The dispatcher involves:
* switching context
* switching to user mode
* jumping to the proper location in the user program to restart it
Dispatch latency – the time it takes the dispatcher to stop one process and start another running.
### Scheduling Criteria
* CPU utilization – keep the CPU as busy as possible
    * goal: maximize
* Throughput – number of processes that complete their execution per time unit
    * goal: maximize
* Turnaround time – amount of time to execute a particular process
    * goal: minimize
* Waiting time – amount of time a process has been waiting in the ready queue
    * goal: minimize
* Response time – amount of time from when a request was submitted until ==the first response== is produced ==feedback is enough; the final result is not required==
    * goal: minimize
### Scheduling Algorithms
#### FCFS (First-Come, First-Served)
* The process that requests the CPU first is allocated the CPU first
* Implemented with a FIFO queue

Example (burst times implied by the waiting times below: P1 = 24, P2 = 3, P3 = 3, arriving in the order P1, P2, P3):
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
* Convoy effect – short processes stuck waiting behind a long process
#### SJF (Shortest-Job-First) Scheduling
* Use the lengths of the next CPU bursts to schedule the process with the shortest one

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
* SJF is optimal – it gives the minimum average waiting time
* The difficulty is knowing the length of the next CPU request
#### Determining the Length of the Next CPU Burst
* We can only estimate the length – assume it will be similar to the previous ones
* Done by exponential averaging of the measured lengths of previous CPU bursts
* Define:
$t_n$ = actual length of the $n^{th}$ CPU burst
$\tau_{n+1}$ = predicted value for the next CPU burst
$\alpha$, $0 \leq \alpha \leq 1$$$\tau_{n+1} = \alpha t_n + (1-\alpha)\tau_n$$
* Commonly, $\alpha$ is set to ½ and $\tau_0$ to a constant
#### Shortest-Remaining-Time-First Scheduling
* ==Preemptive SJF scheduling==

Example (arrival time / burst time implied by the calculation below: P1 = 0/8, P2 = 1/4, P3 = 2/9, P4 = 3/5):
Average waiting time = [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5
#### Round Robin (RR) Scheduling
* Each process gets a small unit of CPU time (a time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
* A timer interrupts every quantum to schedule the next process
* Implemented with a FIFO queue
#### Time Quantum and Context-Switch Time
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n−1)q time units.
Performance:
* q large -> behaves like FCFS
* q small -> context-switch overhead becomes excessive
Turnaround time depends on q: it improves if most CPU bursts are shorter than q.
* Turnaround time is usually higher than with SJF, but response time is better
* q is typically 10 ms to 100 ms; a context switch takes less than 10 microseconds.
#### Priority Scheduling
* A priority number (integer) is associated with each process
* Can be preemptive or nonpreemptive
* SJF is priority scheduling where the priority is the inverse of the predicted next CPU burst time (the shorter the predicted burst, the higher the priority)
* Priorities can be defined internally or externally
    * Internally, using measurable quantities
    * Externally, using criteria outside the OS
* Problem – starvation (indefinite blocking): low-priority processes may never execute
* Solution – aging: increase the priority of a process as time progresses
#### Multilevel Queue
* The ready queue is partitioned into separate queues, e.g.:
    * foreground (interactive)
    * background (batch)

**Real-time -> system -> interactive -> batch**
* Each queue has its own scheduling algorithm, e.g.:
    * foreground – RR
    * background – FCFS
* Scheduling must also be done between the queues:
    * Fixed-priority scheduling
    * Time slicing – each queue gets a certain amount of CPU time, which it schedules among its own processes (the more important the queue, the larger its share)
#### Multilevel Feedback Queue
* A process can move between the various queues; aging can be implemented this way
* Multilevel-feedback-queue scheduler defined by the following parameters:
* number of queues
* scheduling algorithms for each queue
* method used to determine when to upgrade a process
* method used to determine when to demote a process
* method used to determine which queue a process will enter when that process needs service
### Thread Scheduling
* Distinction between user-level and kernel-level threads
* When threads are supported, it is threads that are scheduled, not processes
* In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on LWPs – known as process-contention scope (PCS)
* Scheduling kernel threads onto available CPUs uses system-contention scope (SCS)
### Algorithm Evaluation
#### Queueing models
Analytic evaluation based on the distribution of CPU and I/O bursts (average arrival and service rates)
#### Multicore Processors
#### Load Balancing
* On SMP systems, keep the workload evenly distributed across all processors
* Two approaches:
    * Push migration
    * Pull migration
#### Little's formula
n = λ × W (n: average queue length, λ: average arrival rate, W: average waiting time in the queue)
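A quick worked instance of the formula, with illustrative numbers (not from the text): if λ = 7 processes arrive per second on average and each waits W = 2 seconds in the queue, then

$$n = \lambda \times W = 7\ \text{processes/s} \times 2\ \text{s} = 14\ \text{processes in the queue}$$

The formula holds in steady state for any arrival distribution, which is what makes it useful for rough capacity estimates.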
#### Simulation
#### Implementation
The most accurate approach, but also the most costly, and differences between environments remain a problem.
## CH6 Synchronization Tools
### Background
Cooperating processes can affect or be affected by other processes in the system, for example through shared data.
### Race Condition
A race condition occurs when multiple processes access and manipulate the same data concurrently and the outcome depends on the particular order of access.
### Critical Section
A segment of code in which a race condition can arise is called a critical section.
#### Requirements for a solution to the critical-section problem
1. Mutual exclusion: only one process at a time may execute in its critical section and modify the common variables
2. Progress: if no process is executing in its critical section, the selection of the next process to enter cannot be postponed indefinitely
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested to enter its own
#### Peterson's Solution
#### Atomic Variables
Built with compare_and_swap()
* Provide atomic operations
* Ensure mutual exclusion
#### Mutex Locks
Ensure mutual exclusion
* Accessed through calls to acquire() and release()
* Implemented with atomic hardware instructions
* Require busy waiting (spin until the lock becomes available)
* Contended – another thread holds the lock when a thread tries to acquire it; contention decreases concurrency
* Uncontended – the lock is available when a thread attempts to acquire it
* Best for locks held for a short duration – less than two context switches
#### Semaphore
* A synchronization tool
* Accessed only via the atomic operations wait() and signal()
```C++
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}
```
* Counting semaphore – value can range over an unrestricted domain
* Binary semaphore – value ranges between 0 and 1; behaves like a mutex lock
##### Semaphore Implementation
* Two operations:
    * sleep – in wait(), when S becomes negative, place the process in a waiting queue
    * wakeup – remove a process from the waiting queue when some process executes signal()
==No busy waiting is needed in this implementation==
* wait() and signal() themselves must still execute with mutual exclusion
#### Monitors
* Provide a convenient and effective mechanism for process synchronization
* Only one process may be active within the monitor at a time
* Condition variables
* Choices when resuming:
    * which suspended process to resume
    * conditional-wait: x.wait(c), where c is a priority number; the process with the lowest number is resumed first
#### Liveness
* Ensure that processes make progress during their execution life cycle
* Liveness failures:
    * infinite loops
    * busy waiting
    * deadlock
    * priority inversion
* Deadlock:
    * two or more processes waiting indefinitely for an event that only another waiting process can cause
## CH7 Synchronization Examples
### Classic Problems
### Bounded-Buffer Problem
* n buffers
* semaphore mutex initialized to 1
* semaphore full initialized to 0
* semaphore empty initialized to n


### Readers–Writers Problem
* The first readers–writers problem
    * No reader is kept waiting unless a writer has already been granted access
    * Writers may starve
* The second readers–writers problem
    * No reader may start reading if a writer is waiting for access.
    * Readers may starve.
* Reader–writer locks
    * The lock must be acquired in a specified mode (read or write)
    * Most useful when it is easy to identify which threads only read and which write
### Dining-Philosophers Problem
* Several possible remedies to the deadlock problem:
    * Allow at most four philosophers to sit at the table at one time
    * Allow a philosopher to pick up her chopsticks only if both chopsticks are available
    * Asymmetric solution:
        * An odd-numbered philosopher picks up her left chopstick first, while an even-numbered philosopher picks up her right chopstick first
### Alternative Approaches
## CH8 Deadlocks
### System Model
* A system consists of resources
* There are multiple resource types
    * E.g., CPU cycles, memory
#### Utilizing a resource
* Request
* Use
* Release
#### Related system calls
* wait and signal
* acquire and release
### Deadlock – every thread in a set is waiting for an event that only another thread in the set can cause
### Deadlock Characterization
#### Four necessary conditions
* **Mutual exclusion**: only one process at a time can use a resource
* **Hold and wait**: a process holding at least one resource is waiting to acquire additional resources held by other processes
* **No preemption**: a resource cannot be forcibly taken from the process holding it
* **Circular wait**: a set of processes each waiting for a resource held by the next one in the chain
#### Resource-Allocation Graph (RAG)
A set of vertices V and a set of edges E
* V is partitioned into two types:
    * T = {$T_1$, ..., $T_n$}, the set of all threads in the system
    * R = {$R_1$, ..., $R_m$}, the set of all resource types in the system
* Request edge – directed edge $T_i \to R_j$
* Assignment edge – directed edge $R_j \to T_i$
* When does deadlock occur, according to the RAG?
    * If the graph contains no cycle => no deadlock
    * If the graph contains a cycle =>
        * one instance per resource type => deadlock
        * several instances per resource type => possibility of deadlock
### Handling Deadlocks
#### Ignore the problem
Used by most operating systems, including Linux and Windows
==Why?==
* Deadlocks occur infrequently
* Handling them is expensive
#### Ensure the system never enters a deadlocked state
* Deadlock prevention
* Deadlock avoidance
#### Allow the system to enter a deadlocked state, then recover
* Deadlock detection
* Deadlock recovery
### Deadlock Prevention
Invalidate one of the four necessary conditions:
* ~~Mutual exclusion~~ – cannot be denied for intrinsically non-sharable resources
* Hold and Wait
    * Allow a thread to request resources only when it has none allocated
    * Low resource utilization; starvation is possible
* No Preemption
    * If a process holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released
    * The preempted resources are added to the list of resources for which the process is waiting
* Circular Wait
    * Impose a total ordering of all resource types and require each thread to request resources in increasing order
### Deadlock Avoidance
* The simplest and most useful model requires each thread to declare, in advance, the maximum number of resources of each type it may need
#### Safe State
* A state is safe if there exists a sequence in which every thread can obtain its declared maximum resources, run to completion, and release them.
* When a thread requests an available resource, the system grants it only if the resulting state is still safe.
## CH9 Main Memory
### Background
#### Basic Hardware
#### Address Binding
* Binding can happen at three stages:
    * Compile time
    * Load time
    * Execution time
### Memory-Management Unit (MMU)
* Hardware device that at run time ==maps virtual to physical address== (將virtual address map to physical address)
### Dynamic Loading
* Routine is not loaded until it is called
* Better memory-space utilization; unused routine is never loaded
#### Dynamic Linking
* Static linking – system libraries and program code are combined by the loader into the binary program image (completed at build time)
* Dynamic linking – linking is postponed until execution time (libraries are loaded at run time)
### Contiguous Memory Allocation
* Main memory must accommodate both the OS and user processes
* Usually divided into two partitions:
    * User processes held in high memory
    * OS usually held in low memory
#### Memory Protection
### Memory Allocation
* Variable-partition scheme: partition sizes vary for efficiency (sized to a given process's needs)
* Hole – a block of available memory; holes of various sizes are scattered throughout memory
### Fit Strategies
* First-fit: allocate the first hole that is big enough
* Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size
* Worst-fit: allocate the largest hole; must also search the entire list
#### Fragmentation
* External fragmentation – enough total memory exists to satisfy a request, but it is not contiguous
    * Reduce external fragmentation by compaction:
        * Shuffle memory contents to place all free memory together in one large block
        * Compaction is possible only if relocation is dynamic and done at execution time
* Internal fragmentation – allocated memory may be slightly larger than the requested memory; the size difference is memory internal to a partition that goes unused
### Paging
* The physical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available
    * Avoids external fragmentation
    * Avoids the problem of varying-sized memory chunks
* Divide physical memory into fixed-sized blocks called ==frames==
* Divide logical memory into blocks of the same size called ==pages==
* To run a program of size N pages, find N free frames and load the program (N pages need N frames)
* Set up a page table to translate logical to physical addresses
* Addresses generated by the CPU are divided into:
    * Page number (p) – index into the page table, which contains the base address of each frame
    * Page offset (d) – combined with the base address to locate the byte within physical memory
* Calculating internal fragmentation
* Page size = 2048 bytes
* Process size = 72,766 bytes
* 35 pages + 1,086 bytes
* Internal fragmentation of 2,048 -1,086 = 962 bytes
* Paging fragmentation
* Worst case = 1 frame - 1 byte
* average case = 1/2 frame size
### Structure of the Page Table
## CH10 Virtual Memory
### Background
#### Virtual memory
#### Virtual address space
The logical view of how a process is stored in memory
* Starts at address 0
### Demand Paging
* Less I/O needed
* Less memory needed
* Faster response
* More users
#### Basic Concept
#### Page Fault
1. The OS looks at another table (in the PCB) to decide whether the reference was invalid or the page is simply not in memory
2. Find a free frame
3. Swap the page into the frame
4. Reset the tables
5. Restart the interrupted instruction
#### Aspects of Demand Paging
* Extreme case – pure demand paging:
    * The process's first instruction is non-memory-resident -> an immediate page fault
#### Performance of Demand Paging
* Three major activities:
    * Service the interrupt
    * Read in the page – this dominates the cost
    * Restart the process
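The standard textbook treatment quantifies this with an effective access time (EAT); here $p$ denotes the page-fault rate, a symbol not introduced above:

$$\text{EAT} = (1 - p)\times \text{memory access time} + p \times \text{page-fault service time}$$

Because page-fault service time (milliseconds, dominated by reading the page) dwarfs memory access time (nanoseconds), even a tiny $p$ degrades the EAT badly.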
### Copy-on-Write
* Allows parent and child processes to initially share the same pages; a shared page is copied only when one of them writes to it
### Page Replacement
* Use a modify (dirty) bit to reduce the overhead of page transfers – only modified pages are written back to disk
### Page-Replacement Algorithms
#### FIFO
#### Optimal
#### Least Recently Used (LRU)
* Generally good algorithm and frequently used
##### Counter implementation
* Every page entry has a counter; every time page is referenced through this entry, copy the clock into the counter
* When a page needs to be changed, look at the counters to find smallest value
##### Approximation
* Reference bit
* initially = 0
* When page is referenced bit set to 1
* Replace any with reference bit = 0
* Second-chance algorithm
#### Counting-Based Algorithms
* Least Frequently Used (LFU): replace the page with the smallest count
* Most Frequently Used
#### Page-Buffering
* Keep a pool of free frames (buffer)
* Read page into free frame and select victim to evict and add to free pool
* Possibly, keep list of modified pages
### Minimum Number of Frames
* We must allocate at least a minimum number of frames to each process, determined by the instruction-set architecture (an instruction may reference several pages that must all be resident for it to complete).
### Thrashing
* If a process does not have "enough" pages, the page-fault rate is very high:
    * page fault to get the page
    * replace an existing frame
* This leads to:
    * low CPU utilization
    * the OS thinking it needs to increase the degree of multiprogramming
    * another process being added to the system, making things worse
* ==Thrashing – a process is busy swapping pages in and out==
### Working-Set Model
### Other Issues
#### Program Structure
#### I/O Interlock
* I/O interlock – pages must sometimes be locked into memory
* Pinning locks pages into memory (e.g., pages being used for I/O)