---
tags: Junior Year
---
# Operating Systems Midterm Notes II - National Chung Hsing University, Dept. of Computer Science and Engineering, Autumn 2020, CH4 & 5
[Operating Systems Midterm Notes - National Chung Hsing University, Dept. of Computer Science and Engineering, Autumn 2020 (Part I)](https://hackmd.io/19Osz5jKSQqIHk6Tl3OP2g)
## Overview
| English | Taiwan | China |
| ------- | ------ | ---- |
| process | 程序, 行程 | 進程 |
| thread | 執行緒 | 線程 |
## CH4 Thread & Concurrency
### Benefits of multi-thread
1. Responsiveness
+ If one thread blocks, the other threads can still run
2. Resource sharing
+ Unlike multiple processes, the threads of a process share the code and data sections
3. Economy
+ Context switching between threads is about 5x faster than between processes, and creating a thread is about 30x faster than creating a process
4. Scalability
+ Multithreading on a multi-CPU machine increases concurrency
+ Each thread may be running in parallel
*Q: What are benefits of threads?*
*A: Responsiveness, Resource Sharing, Economy, Scalability*
*Q: What sections and resources are shared between threads belonging to the
same process?*
*A: code section, data section, other OS resources (ex: open file & signals)*
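As a quick illustration of the shared data section, here is a minimal Pthreads sketch (not from the course notes; compile with `gcc -pthread`): a child thread writes a global variable and the main thread reads the new value after joining.
```c=
#include <pthread.h>
#include <stdio.h>

int counter = 0;                        // lives in the shared data section

static void *child(void *arg) {
    (void)arg;
    counter = 42;                       // modifies the same variable main() sees
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, child, NULL);
    pthread_join(tid, NULL);
    printf("counter = %d\n", counter);  // prints 42: both threads share one address space
    return 0;
}
```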
### Concurrency (並行) vs. Parallelism (平行)
On a single-processor system, processes run concurrently, but not in parallel

### Types of Parallelism
> Data Parallelism and Task Parallelism

### Types of Threads
1. User thread: provided at the user level by a **user-level thread library**
> Drawback:
> 1. Blocking problem
Entire process will block if a thread makes a blocking system call
> 2. No support for running in parallel on MP
Since the kernel only sees a single thread
(Analogy: two people share one apartment without the landlord knowing. Since everything is coordinated privately among themselves, creating a new user thread is cheaper than creating a kernel thread.)
>
2. Kernel thread: provided and managed by the kernel (OS)
>
> (The landlord knows about every tenant, so resources can be allocated effectively, but since every new tenant must be arranged with the landlord, creating a new kernel thread takes more time.)
>
> Creating a user thread requires creating the corresponding kernel thread
> 1. Overhead
> 2. To bound the overhead, OS may restrict the number of threads supported
### Multithreading Models
+ Many-to-One Model
> Many user-level threads mapped to single kernel thread
+ One-to-One Model
> Each user-level thread maps to a kernel thread
+ Many-to-Many Model
> Combines the many-to-one and one-to-one models
> Allows M user-level threads to be mapped to N kernel threads, where N $\le$ M
>
> Problems of previous two models
> + Many-to-one: true concurrency is not gained
> + One-to-one: cannot create too many threads
*Q: What is the blocking problem in many-to-one thread model?*
*A: Entire process (All threads) will block if a thread makes a blocking system call.*
*Q: Why the many-to-one thread model suffers from the blocking problem?*
*A: Kernel only sees a single thread.*
### Strategies for Creating Threads
1. Asynchronous threading
> After creating children, parent resumes its execution
> Parent and child run concurrently
> e.g., a multithreaded server
2. Synchronous threading
> After creating children, the parent waits for all children to terminate
> called the fork-join strategy
> e.g., array summation (each thread sums part of the array; when all are done, the main thread combines the partial results; see the sketch below)
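A minimal fork-join sketch of the array-summation example (assumptions: Pthreads, an 8-element array, 2 worker threads; compile with `gcc -pthread`). The parent creates the workers, waits for all of them, then combines the partial sums.
```c=
#include <pthread.h>
#include <stdio.h>

#define N 8
#define NUM_THREADS 2

static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[NUM_THREADS];              // one partial sum per worker

// Each worker sums its own slice of the array (data parallelism).
static void *sum_slice(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NUM_THREADS), hi = lo + N / NUM_THREADS;
    for (long i = lo; i < hi; i++)
        partial[id] += data[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_THREADS];

    // fork: the parent creates the children ...
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, sum_slice, (void *)i);

    // join: ... then waits for all children to terminate
    long total = 0;
    for (long i = 0; i < NUM_THREADS; i++) {
        pthread_join(tid[i], NULL);
        total += partial[i];                   // main thread combines the results
    }
    printf("sum = %ld\n", total);              // 36
    return 0;
}
```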
### Thread pools (執行緒池)
Thread pools are commonly used in web servers; they work well when the tasks submitted to the pool can be executed asynchronously.
1. create a number of threads in a pool, where they sit waiting for work
2. on receiving a request → wake a thread from the pool → pass it the request to service
3. once the thread completes its service → it returns to the pool
4. if no thread is available → the server waits until one becomes free
Pros:
1. Servicing a request with an existing thread is faster than creating a new thread
2. Bounds the number of threads in the application to the size of the pool
3. <u>Separating the task to be performed from the mechanics of creating the task</u> allows different strategies for running the task (a minimal pool sketch follows)
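Below is a minimal thread-pool sketch, not taken from any particular library (names like `submit`, `worker`, and `POOL_SIZE` are made up for illustration). Workers sleep on a condition variable until a request is queued, service it, then return to waiting; compile with `gcc -pthread`.
```c=
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define POOL_SIZE 4
#define QUEUE_CAP 16

typedef void (*task_fn)(int arg);

static struct { task_fn fn; int arg; } queue[QUEUE_CAP];   // fixed-size request queue
static int head = 0, tail = 0, count = 0;
static bool shutting_down = false;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

// Worker: sleep in the pool until a request arrives, service it, return to the pool.
static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !shutting_down)
            pthread_cond_wait(&not_empty, &lock);
        if (count == 0 && shutting_down) {     // no work left and pool is closing
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        task_fn fn = queue[head].fn;
        int arg = queue[head].arg;
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_mutex_unlock(&lock);
        fn(arg);                               // run the task outside the lock
    }
}

// Submit a request; returns false if the queue is full (caller must retry later).
static bool submit(task_fn fn, int arg) {
    bool ok = false;
    pthread_mutex_lock(&lock);
    if (count < QUEUE_CAP) {
        queue[tail].fn = fn;
        queue[tail].arg = arg;
        tail = (tail + 1) % QUEUE_CAP;
        count++;
        ok = true;
        pthread_cond_signal(&not_empty);       // wake one sleeping worker
    }
    pthread_mutex_unlock(&lock);
    return ok;
}

static void handle_request(int id) { printf("servicing request %d\n", id); }

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)        // create the pool up front
        pthread_create(&pool[i], NULL, worker, NULL);

    for (int i = 0; i < 8; i++)                // incoming requests
        submit(handle_request, i);

    pthread_mutex_lock(&lock);                 // let workers drain the queue, then shut down
    shutting_down = true;
    pthread_cond_broadcast(&not_empty);
    pthread_mutex_unlock(&lock);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```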
### Threading issue
semantics of `fork()` and `exec()`
1. `fork()` in multi-thread app?
+ duplicate all threads
+ duplicate only the thread that invoked fork()
> Which behavior do we get? Some UNIX systems support two versions of fork(), one for each semantics
2. `exec()` in multi-thread app?
> This one is unambiguous: if a thread invokes the `exec()` system call, the new program replaces the entire process, including all of its threads
> Because all threads share the code section
### Signal handling
Signals are used in UNIX systems to notify a process that a particular event has occurred.
2 types of signals:
1. synchronous signals
> Delivered to the same process that causes the signal.
> e.g., illegal memory access, division by zero, ... etc
2. asynchronous signals
> Generated by external events
> e.g., `Ctrl-C` to terminate a process, timer expires, ... etc
A signal is handled in three steps (a minimal handler example follows this list):
1. a signal is generated by a particular event
2. the signal is delivered to a process
3. the signal is handled by a signal handler: either the default handler in the kernel or a user-defined handler
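A minimal sketch of installing a user-defined handler for the asynchronous signal `SIGINT` (Ctrl-C), using the standard POSIX `sigaction()` interface; without it, the default handler in the kernel would terminate the process.
```c=
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

// User-defined handler: runs instead of the default "terminate process" action.
static void handle_sigint(int signo) {
    (void)signo;
    const char msg[] = "caught SIGINT in a user-defined handler\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   // write() is async-signal-safe
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handle_sigint;               // step 3: choose who handles the signal
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    printf("press Ctrl-C within 10 seconds...\n");
    sleep(10);                                   // Ctrl-C generates & delivers the signal
    return 0;
}
```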
### Thread cancellation
terminate a thread before it has completed
2 approaches for thread cancellation
1. asynchronous cancellation
+ terminates the target immediately
+ this is risky because resources may not be reclaimed correctly, e.g., when
+ the target thread is holding resources
+ the target thread is in the midst of updating a shared data
2. deferred cancellation
+ the target thread periodically checks whether it should terminate
+ allows the target thread to terminate itself at a safe point, called a **cancellation point** (covered in the OS lab; see the sketch below)
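A minimal sketch of deferred cancellation with Pthreads (compile with `gcc -pthread`): the worker only honors a pending cancel request at the explicit cancellation point `pthread_testcancel()`, so it is never killed in the middle of updating shared data.
```c=
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    // Deferred cancellation is the default type; set it explicitly for clarity.
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    for (;;) {
        /* ... safely update shared data here: no cancellation can occur mid-update ... */
        pthread_testcancel();          // cancellation point: honor a pending request
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);               // request cancellation (deferred)
    pthread_join(tid, NULL);           // wait until the target actually terminates
    puts("worker cancelled at a cancellation point");
    return 0;
}
```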
## CH5 CPU Scheduling
### Basic Concepts
#### I/O Burst Cycle
+ CPU-I/O Burst Cycle
+ A program alternates between CPU bursts and I/O bursts; the two repeat in a cycle
+ Typically there are only a few long CPU bursts but many short ones
+ I/O bound
+ Mostly I/O, little computation
+ e.g., ftp, text editors
+ CPU bound
+ Mostly computation
```c=
a = a + 1;//cpu burst start
b = b - 1;
c = c * 5;//cpu burst end
fread();//io burst
x = x + 1;//cpu burst start
y = b * 3;//cpu burst end
fwrite();//io burst
```
#### Process Scheduling
+ Select a process from the ready queue to run on a CPU core (ready queue -> running)
+ Objectives
+ multiprogramming: keep the CPU as busy as possible
+ time sharing: switch the CPU among processes often enough that users can interact with each program
+ and balance the two objectives
+ Preemptive and Nonpreemptive
<img src='https://i.imgur.com/CcDDxnp.png' style='width:80%'>
+ CPU scheduling when:
1. running -> waiting:
+ I/O or `wait()`
2. running -> ready:
+ when the time quantum expires or an interrupt occurs
3. waiting -> ready
+ I/O completion
4. Terminates
+ Nonpreemptive
+ scheduling happens only under 1. and 4.
+ the CPU is released only when the process finishes or blocks
+ bad for time-sharing and real-time systems
+ Preemptive
+ scheduling can happen under all of cases 1. to 4.
+ the running process can be forced to give up the CPU
+ Dispatcher
+ Gives control of a CPU core to the process selected by the scheduler
+ Steps:
+ switching context from A to B
+ switching to user mode
+ jumping to the proper location in the user program to resume execution
+ Dispatch latency
+ the time from stopping the old process to starting the new one
+ the shorter the better
### Scheduling Criteria
+ CPU utilization
+ Keep the CPU as busy as possible
+ Throughput
+ Number of processes completed per unit time; the more the better
+ Turnaround time
+ Time from submission to completion of a process; the less the better
+ Interactive processes care less about turnaround time
+ Waiting time
+ Total time a process spends in the ready queue; the less the better
+ Response time
+ Time from submitting a request until the first response is produced; the less the better
+ Applies specifically to interactive processes
### Scheduling Algorithms
#### FCFS (First-Come, First-Served)
+ The ready queue is a FIFO queue
+ Whichever process arrives first runs first
+ Nonpreemptive
+ Ex:
| Process | CPU burst |
| -------- | -------- |
| P1 | 24 |
| P2 | 3 |
| P3 | 3 |
Suppose the processes are served in the order P1, P2, P3:
```
+------------------------------+-----+-----+
|              P1              | P2  | P3  |
+------------------------------+-----+-----+
0                              24    27    30

Avg waiting time = (0 + 24 + 27) / 3 = 17
```
If we change the order to P2, P3, P1:
```
+-----+-----+------------------------------+
| P2  | P3  |              P1              |
+-----+-----+------------------------------+
0     3     6                              30

Avg waiting time = (6 + 0 + 3) / 3 = 3
```
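The averages above can be checked with a short helper (a sketch, not part of the notes): with all processes arriving at time 0 and served in the given order, each process's FCFS waiting time is simply the sum of the bursts before it.
```c=
#include <stdio.h>

// FCFS average waiting time, assuming every process arrives at time 0
// and the ready queue order is the order of the burst[] array.
int main(void) {
    int burst[] = {24, 3, 3};          // P1, P2, P3; use {3, 3, 24} for the second chart
    int n = 3;
    int start = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += start;           // waiting time = when this process starts running
        start += burst[i];             // the next process starts when this one finishes
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);   // 17.00
    return 0;
}
```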
> Convoy effect: short processes wait a long time behind one long process
> What if the short processes could run first?
#### SJF (Shortest-Job-First)
+ There are both preemptive and nonpreemptive versions
**Given**
| Process |Arrival Time | Burst time |
| -------- | -------- | -------- |
| P1 | 0.0 | 7 |
| P2 | 2.0 | 4 |
| P3 | 4.0 | 1 |
| P4 | 5.0 | 4 |
**Nonpreemptive:**
```
+-------------------+-------+-----------+------------+
|        P1         |  P3   |    P2     |     P4     |
+-------------------+-------+-----------+------------+
0                   7       8           12           16

Avg waiting = ((0-0) + (8-2) + (7-4) + (12-5)) / 4 = 4
(Remember to subtract each process's own arrival time)
```
**Preemptive:**
```
+-------+-------+---+--------+----------+-----------+
|  P1   |  P2   |P3 |   P2   |    P4    |    P1     |
+-------+-------+---+--------+----------+-----------+
0       2       4   5        7          11          16

Avg waiting = ((11-2) + (5-4) + (0) + (7-5)) / 4 = 3
```
+ SJF is provably optimal (it gives the minimum average waiting time), but...
+ The length of the next CPU burst cannot be known exactly; it can only be estimated
+ Idea: predict from past bursts, giving more weight to the most recent ones
$t_n$: length of the $n$th CPU burst
$\tau_{n+1}$: predicted length of the next CPU burst
$\alpha$: weight, $0 \le \alpha \le 1$
Define:
$\tau_{n+1} = \alpha t_n + (1-\alpha)\tau_n$
Exponential Avg:
$\tau_{n+1} = \alpha t_n + (1-\alpha)\tau_n$
$= \alpha t_n + (1-\alpha)(\alpha t_{n-1}+(1-\alpha)\tau_{n-1})$
...
$= \alpha t_n + (1-\alpha)\alpha t_{n-1} + ... + (1-\alpha)^j\alpha t_{n-j} + ... + (1-\alpha)^{n+1}\tau_0$
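A small sketch of the exponential average in code (values are illustrative: $\alpha = 0.5$, $\tau_0 = 10$, and a made-up burst sequence):
```c=
#include <stdio.h>

// tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
static double predict_next(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

int main(void) {
    const double alpha = 0.5;                       // weight on the most recent burst
    double tau = 10.0;                              // initial guess tau_0
    double t[] = {6, 4, 6, 4, 13, 13, 13};          // observed CPU bursts t_0, t_1, ...

    for (int n = 0; n < 7; n++) {
        printf("tau_%d = %.2f, actual t_%d = %.0f\n", n, tau, n, t[n]);
        tau = predict_next(alpha, t[n], tau);       // becomes tau_{n+1}
    }
    return 0;
}
```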
#### RR (Round-Robin)
+ A process runs for one time quantum, then the CPU is switched to another process
+ Typically the time quantum q = 10~100 ms
+ A preempted process goes back to the tail of the ready queue
+ A process leaves the running state only when:
1. its time quantum expires, or
2. it waits for I/O or an interrupt
+ Preemptive
+ Performance: with n processes and time quantum q (see the simulation sketch below)
+ each process gets $\frac{1}{n}$ of the CPU time, in chunks of at most q
+ no process waits more than $(n-1)q$ time units
+ Rule of thumb for q: 80% of CPU bursts should be shorter than q
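A small RR simulation sketch (assumptions: all processes arrive at time 0 and q = 4, reusing the FCFS bursts 24, 3, 3). Because everyone arrives at time 0, cycling over the process array in order is equivalent to a real FIFO ready queue.
```c=
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              // CPU bursts of P1, P2, P3
    int n = 3, q = 4;                      // time quantum
    int remaining[3], done = 0, t = 0, total_wait = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            t += slice;                    // run process i for at most one quantum
            remaining[i] -= slice;
            if (remaining[i] == 0) {       // finished: waiting = completion - burst
                total_wait += t - burst[i];
                done++;
            }
        }
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);   // 5.67
    return 0;
}
```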
#### Priority Scheduling
+ Assign a priority to each process; the CPU goes to the process with the highest priority
+ Preemptive or Nonpreemptive
+ Problem: starvation (indefinite blocking)
+ low-priority processes may never get a chance to run
+ Solution: aging
+ the longer a process waits, the higher its priority becomes
#### Multilevel Queue
+ An extension of priority scheduling
+ Create one queue per priority level; processes with the same priority are scheduled with RR
+ Picking the next process is O(1): take the head of the highest-priority non-empty queue
+ Queues can also be split into foreground and background, each with its own scheduling algorithm
+ A process stays in the same queue permanently
+ Starvation is still possible
+ Time slicing among queues: e.g., 80% to the foreground queue, 20% to the background queue

#### Multilevel Feedback-Queue
+ A process can move between queues
+ a process that uses too much CPU time is moved to a lower-priority queue
+ a process that waits too long is moved to a higher-priority queue
+ this is the aging mechanism
+ ex: 3 queues
+ $Q_0$: RR, q = 8 ms
+ $Q_1$: RR, q = 16 ms
+ $Q_2$: FCFS
+ Summary:
+ the most general scheme
+ but also the most complex
### Thread Scheduling
+ Process-contention scope (PCS)
+ the thread library schedules user threads onto available kernel threads
+ threads within the same process compete with each other for the CPU
+ used by the many-to-one and many-to-many models
+ System-contention scope (SCS)
+ the kernel decides which kernel thread gets a CPU core
+ competition is among all kernel threads in the system
+ the one-to-one model uses only SCS (see the Pthreads scope example below)
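A minimal sketch of querying and setting the contention scope with the Pthreads API (`pthread_attr_getscope` / `pthread_attr_setscope`). Note that Linux, being a 1:1 system, supports only `PTHREAD_SCOPE_SYSTEM`; asking for `PTHREAD_SCOPE_PROCESS` there fails with `ENOTSUP`. Compile with `gcc -pthread`.
```c=
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg) { (void)arg; return NULL; }

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);
    pthread_attr_getscope(&attr, &scope);            // default contention scope
    if (scope == PTHREAD_SCOPE_SYSTEM)
        puts("default scope: SCS (system contention scope)");
    else if (scope == PTHREAD_SCOPE_PROCESS)
        puts("default scope: PCS (process contention scope)");

    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);   // request SCS explicitly

    pthread_create(&tid, &attr, work, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```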
### Multiple-Processor Scheduling
+ Asymmetric multiprocessing
+ Only the master processor runs the scheduler (and other kernel activities)
+ simple, but the master processor becomes a bottleneck
+ Symmetric multiprocessing (SMP)
+ the standard approach today, adopted by Windows, Linux, and macOS
+ each processor schedules itself (2 implementations)
+ A common ready queue
+ Each processor has its own private ready queue (mainstream)

A common ready queue has one fatal flaw: every access to the shared queue must be protected by a lock, which becomes a performance bottleneck. Per-core private queues avoid that problem but create a new one: just like checkout lines at a hypermarket, some queues end up long and others short, so load balancing is needed to even things out.
### Multicore Processors (homogeneous)
multiple processor cores on the same chip
Problem: memory stall (while the CPU fetches data from memory, it sits idle; covered in Computer Organization)

The solution is hardware multithreading: while one hardware thread is stalled waiting on memory, the core executes another hardware thread. For example, some machines have 8 cores / 16 hardware threads; the machine in the 耀中 lab has 16 cores / 32 threads.
#### Load Balancing
2 approaches
1. push migration: a periodic task checks each core's load and pushes tasks from overloaded cores onto idle or less-busy cores
2. pull migration: an idle core pulls a waiting task from a busy core
#### Processor Affinity
A process tends to stay on the processor it last ran on, so its data stays warm in that processor's cache.
##### Soft affinity
The OS tries to keep the process on the same processor but does not guarantee it.
##### Hard affinity
The process can specify a set of processors on which it is allowed to run (e.g., Linux provides `sched_setaffinity()`; see the sketch below).
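A minimal hard-affinity sketch using the Linux-specific `sched_setaffinity()` call (not portable; pinning to core 0 is an arbitrary choice for illustration):
```c=
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                                   // allow only CPU core 0

    if (sched_setaffinity(0, sizeof(set), &set) != 0) { // pid 0 = the calling process
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d is now pinned to CPU 0\n", getpid());
    return 0;
}
```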
### Heterogeneous Multiprocessing (HMP)

+ Some systems are designed using cores that run the same instruction set but vary in clock speed and power management
+ Especially for mobile devices
<style>
#author{
border: 1px solid gray;
border-radius: 10px;
padding: 10px;
width: 80%;
margin: 10px auto;
}
</style>
<div id="author">
<p>NCHU CSE, Class of 111</p>
<p>Authors: <a href="https://github.com/liao2000">廖柏丞</a>, <a title="瑋哥" href="https://github.com/wei-coding">游庭瑋</a></p>
</div>