# Operating Systems
## Note
### Ch4 Threads
- Concurrency vs. Parallelism
    - Concurrency: one processor/core, but the scheduler switches back and forth between jobs so everything makes progress "at the same time"
    - Parallelism: multiple tasks literally executing at once
### Ch5 CPU Scheduling
#### Thread Scheduling
- **LWP**: lightweight process; user-level threads in the many-to-one and many-to-many models run on these (honestly, what even is this thing)
- **PCS**: ++process-contention scope++, scheduling competition happens within a single process, e.g. user-level threads
- **SCS**: ++system-contention scope++, competition among all threads in the system, e.g. kernel-level threads
#### Multiple-Processor Scheduling
- **Symmetric multiprocessing**: SMP, each processor is self-scheduling
- **NUMA**: Non-Uniform Memory Access, a multiprocessor memory architecture where a CPU's access speed differs depending on which memory it touches
:::spoiler Diagram

:::
- **Processor affinity**: a process tends to stay on the same processor so it can keep reusing that processor's cache
    - ++*Soft affinity*++: the OS tries to keep a thread on the same processor, but doesn't guarantee it
    - ++*Hard affinity*++: a process can specify exactly which processors it may run on
- **Load balancing**: under SMP, distributing work evenly across the CPUs is more efficient
    - ++*Push migration*++: a heavily loaded CPU pushes work to a lightly loaded one
    - ++*Pull migration*++: a lightly loaded CPU pulls work over from a heavily loaded one
    - If most processors are heavily loaded, use pull; otherwise use push. In short, make the minority do the work
- Multi-core processor: one processor with multiple cores (logical processors)
- **Multi-threaded multi-core systems**: give each core more than one hardware thread, and run another thread while one is stalled on memory (Intel calls this hyper-threading)
    - Level 1: decide which hardware thread each software thread runs on
    - Level 2: each core decides which of its hardware threads to run next
#### Real-Time Scheduling
- Real-time doesn't mean fast, it just means there are deadlines
- $T_i(t, d, p)$
    - time $t$: execution time, the burst time from before
    - deadline $d$: just the deadline
    - period $p$: a new instance of this task arrives every $p$
- Below we only discuss soft real-time algorithms (try to meet deadlines, no guarantee); hard real-time (guaranteed) is too hard, bye
- **Rate Monotonic (RM)**:
  only looks at the period: the shorter $p$ is, the higher the priority. Once assigned, priorities are fixed.
  After that it's just preemptive priority scheduling.
  e.g. do the weekly algorithms homework before the biweekly programming-languages homework.
- **Earliest Deadline First (EDF)**:
  only looks at deadlines: the earlier the deadline, the higher the priority. Since it depends on the current situation, priorities are dynamic.
  e.g. do the homework whose deadline is closest first.
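EDF's "run whatever has the earliest deadline" rule can be sketched in a few lines (a minimal sketch; the `Task` struct and `edf_pick` helper are my own illustration, not from the slides):

```cpp
#include <vector>
#include <algorithm>

struct Task {
    int id;
    int deadline; // absolute deadline; smaller = more urgent
};

// EDF: among the ready tasks, pick the one with the earliest deadline.
// Returns -1 when nothing is ready.
int edf_pick(const std::vector<Task>& ready) {
    if (ready.empty()) return -1;
    auto it = std::min_element(ready.begin(), ready.end(),
        [](const Task& a, const Task& b) { return a.deadline < b.deadline; });
    return it->id;
}
```

Because deadlines change as new task instances arrive, this pick has to be re-run at every scheduling point — that's what makes EDF's priorities dynamic.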
#### OS Examples
##### Linux
- Priority 0~140, the **lower** the number, the higher the priority
    - weirdly, here the higher-priority tasks seem to get the larger time quantum? opposite of the other two OSes
- Split into
    - Real-time (0~99)
        - static priorities
    - Other (100~140)
        - dynamic priorities based on task interactivity
- Unlike the other two OSes, **Linux 2.5** keeps tasks in two arrays, active and expired. If a task hasn't used up its given quantum $q$, it can stay in the active array until $q$ runs out; conversely, if it exhausts $q$ in one go and still isn't finished, it gets thrown into the expired array. Once every task in the active array has used up its $q$, the two array pointers are swapped and expired becomes active. But this scheme has poor interactivity. (This is what NTHU teaches.)
- In **Linux 2.6.23** this became the Completely Fair Scheduler (CFS).
  There's no more active/expired split; all tasks are treated equally, and a task's time quantum is no longer a fixed value determined by priority (it still depends on priority, just not fixed). Take the target latency (the time for every task to run at least once; not sure how it's derived, default seems to be 20 ms?), divide it by the total task count $N$, then weight by the nice value (-20~19; a larger value means the task is "nicer" and lets others go first, i.e. lower priority; nice + 20 is the global priority) to compute each task's quantum (nice↑: priority↓, quantum↑). In other words, ==a task's time quantum = a weighted $1/N$ of the target latency==.
  Now we have the time quantum for RR, but how do we get preemptive priority?
  Every task keeps a virtual runtime in `vruntime`: the time the task has already spent on the CPU, except (!) it's weighted by the nice value. Nice (low-priority) tasks over-report their actual runtime a bit, so the less-nice, higher-priority tasks run first. A normal-priority task (nice = 0) is the dividing line: only it reports its true runtime; nice > 0 over-reports, nice < 0 under-reports. The scheduler always runs the task with the smallest vruntime. In the implementation, all tasks' vruntimes live in a red-black tree (similar to a binary search tree), and the scheduler always takes the leftmost leaf.
    - I think, anyway; the slides are a mess. [Reference](https://opensource.com/article/19/2/fair-scheduling-linux)
    - And then **Linux 6.6** switched again to something called EEVDF
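The "weighted vruntime, always run the smallest" idea can be sketched like this. Heavily hedged: the weight formula below is my own illustration, not the kernel's real `sched_prio_to_weight` table, and a plain map scan stands in for the red-black tree:

```cpp
#include <map>
#include <string>

// Toy CFS: vruntime advances faster for nicer (lower-priority) tasks,
// and the scheduler always picks the task with the smallest vruntime.
struct ToyCFS {
    std::map<std::string, double> vruntime; // task name -> virtual runtime
    std::map<std::string, int> nice;        // task name -> nice value

    void add(const std::string& name, int n) { nice[name] = n; vruntime[name] = 0; }

    // Charge `ms` of real CPU time to `name`, scaled up for nicer tasks
    // so they over-report and yield to less-nice tasks.
    void run(const std::string& name, double ms) {
        double weight = 1.0 + nice[name] / 20.0; // nice 0 -> 1.0 (reports truthfully)
        vruntime[name] += ms * weight;
    }

    // The real kernel keeps vruntimes in a red-black tree and takes the
    // leftmost node; a linear scan gives the same answer for a sketch.
    std::string pick_next() const {
        std::string best;
        double best_v = 1e18;
        for (const auto& [name, v] : vruntime)
            if (v < best_v) { best_v = v; best = name; }
        return best;
    }
};
```

After both a nice-0 task and a nice-10 task run for the same real time, the nice-10 task's vruntime is larger, so the nice-0 task gets picked again first.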
##### Windows XP
- Priority 0~31, the **higher** the number, the higher the priority
- Priority 0 is reserved for the memory-management thread
- One queue per priority
- :::spoiler There are 6 priority classes, each split into 7 relative levels. I/O-heavy threads get boosted priority (when returning from a wait); CPU-heavy threads get lowered priority (used up a whole quantum without finishing). Priority moves around within the class but won't drop below the class's normal level (though NTHU says it moves among all 7). Real-time priorities are the exception: they stay fixed (static).

:::
##### Solaris
- Priority 0~169, the **higher** the number, the higher the priority; RR among equal priorities
- :::spoiler There are 6 priority classes; the scheduler converts them to a global priority for scheduling
    - Real time (RT)
    - System (SYS)
    - Time sharing (TS) (default)
    - Interactive (IA)
    - Fair share (FSS)
    - Fixed priority (FP)

:::
#### Algorithm Evaluation
- First decide on criteria defining what counts as a good algorithm, e.g.:
    - maximize CPU utilization subject to response time not exceeding 300 ms
    - maximize throughput such that average turnaround time is linearly proportional to total execution time (huh?)
- There are four ways to evaluate a CPU-scheduling algorithm
1. **Deterministic modeling**
    - Analytical: just like drawing Gantt charts for homework problems, plug one concrete workload into each algorithm and compare.
      (Not sure how this differs from simulation; one runs in your head, one on a computer?)
2. **Queueing model**
    - Estimate the distribution of tasks in a given system (usually exponential, described by its mean), then plug it into queueing-theory models to compute average throughput, waiting time, and so on
    - Little's formula: $n = \lambda \times W$
        - Premise: steady state, i.e. the queue length stays in dynamic equilibrium
        - $n$: queue length
        - $\lambda$: arrival rate; its reciprocal is how often a task arrives
        - $W$: waiting time, how long a task waits in the queue before being served
        - Can be used to estimate waiting time
        - You can think of it this way: in steady state,
          $\Rightarrow$ $\lambda$ = service rate
          $\Rightarrow$ $\frac{1}{\lambda}$ = time to handle one task
          $\Rightarrow$ $W = n \times \frac{1}{\lambda}$
3. **Simulation**
    - Run the data (a bunch of tasks, presumably) through a computer simulation. Since it's simulated, nothing actually executes; a clock variable models the flow of time, which saves time.
    - The data can be
        1. randomly generated
        2. generated from a mathematical or empirical model
        3. produced by a real system (a trace file)
4. **Implementation**
    - Actually build the scheduler and run it
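Little's formula from the queueing-model section above is easy to sanity-check with made-up numbers (the helper names are mine):

```cpp
// Little's formula: n = lambda * W. In steady state, if tasks arrive at
// lambda = 2 tasks/sec and each waits W = 3 sec in the queue, the average
// queue length is n = 6.
double littles_n(double lambda, double w) { return lambda * w; }

// Rearranged to estimate waiting time from an observed queue length:
// W = n / lambda.
double littles_w(double n, double lambda) { return n / lambda; }
```

The rearranged form is the practical one: count how long the queue is, know the arrival rate, and you get the expected wait without timing anything.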
### Ch6 Synchronization Tools
- Because processes run concurrently and can be interrupted at any time, we need to keep shared data consistent and ++avoid race conditions++
- So give each process a critical section: while one process is executing its critical section, nobody else may run theirs
- A critical-section solution must satisfy three properties at once:
    1. **Mutual Exclusion**:
       ++mutual exclusion++, at most one process may be in its critical section at a time
    2. **Progress**:
        1. Slides: when nobody is in the critical section and someone wants in, the next entrant must be decided within a finite time
        2. Textbook: when nobody is in the critical section and someone wants in, only processes not executing their remainder section are eligible, (meaning you can't want back in right after leaving?), and the next entrant must be decided within a finite time
           (++don't leave it empty when someone wants in++)
    3. **Bounded Waiting**:
       after a process requests entry to its critical section, there is a bound on how many times others may enter first, so no poor soul is shut out forever
       (++don't let anyone hog it++)
- Overview
    - Critical-Section Solutions:
        - Software Solution
        - Synchronization Hardware
    - Synchronization Tools:
        - Semaphore
        - Monitor
#### Software Solution
For now, consider only the two-process case:
1. Broken Solution
- turn: whose turn it is to enter the critical section
```cpp=
while (true) {
    // turn = i; // grab the turn for yourself. None of the online versions have this line, NTHU doesn't either, only our professor's slides do; let's pretend it isn't there.
    // Feels like: with it, bounded waiting fails; without it, progress fails. Wrong either way.
    while (turn == j); // spin while it's the other process's turn
    /* critical section */
    turn = j;
    /* remainder section */
}
```
Fails progress: one process may want to enter while the other never does, so the turn never comes around to the one who wants in.
2. Peterson's Solution
- turn: whose turn it is to enter the critical section
- flag[i]: process i wants to enter the critical section
```cpp=
while (true) {
    flag[i] = true; // the order of these two lines matters
    turn = j; // yield: after entering once, let the other go first, so bounded waiting holds. (turn = i would break it: one process could keep grabbing the turn.)
    while (flag[j] && turn == j); // spin while the other wants in and it's their turn; if you yielded but they don't want in, you enter; if they yielded to you, you also enter
    /* critical section */
    flag[i] = false;
    /* remainder section */
}
```
It's correct, but for performance, modern processors and compilers may reorder independent reads and writes (why doesn't this bite us in everyday code??). For example:
```cpp
flag = false;
x = 0;
/* Thread 1 */
while (!flag);
print(x);
/* Thread 2 */
x = 100;
flag = true;
```
The expectation is that `x = 100` runs before `flag = true`, so `100` is printed,
but since `flag` and `x` have no data dependency, Thread 2's two lines may be swapped, printing `0`!
Now consider Peterson's solution with the first two lines reordered:
```cpp=
turn = j;
flag[i] = true;
```

++Original++: honestly announce you want in, yield to each other, and whoever gets yielded to enters and blocks the other out
e.g. you say you want to take the train, I give you my spot; seeing you'll take it, I don't go
++Reordered++: you yield before announcing you want in; the other, seeing you apparently don't want it, enters first; then you announce you want it after all and enter too
e.g. I give you my spot, see you're not taking it, so I go; then you decide you want it after all, and you can still get on
#### Hardware Support for Synchronization
Some hardware-supported instructions that avoid the reordering problem above
##### Memory Barriers
- Memory model: two common guarantees offered by different computer architectures (not sure why this is introduced)
    - Strongly ordered: once memory is modified by one processor, the other processors see it ++immediately++
    - Weakly ordered: the other processors do not necessarily see it ++immediately++
- Memory barrier: an instruction provided by the architecture (??) forcing every load and store before it to complete before any load or store after it. The problem above can be fixed like this:
```cpp
x = 100;
memory_barrier();
flag = true;
```
##### Hardware Instructions
Many modern architectures provide hardware instructions that atomically (without being interrupted) test-and-modify or swap things
- **test_and_set()**
    - Definition
```cpp
bool test_and_set(bool *target)
{
    bool rv = *target;
    *target = true;
    return rv;
}
```
- Mutual exclusion with test_and_set():
```cpp
do {
    while (test_and_set(&lock)); // spin until we read false; whoever reads false sets it to true, giving mutual exclusion
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);
```
(Mutual exclusion holds, but bounded waiting doesn't: a waiting process can lose the race indefinitely)
- **compare_and_swap()**
    - Definition
```cpp
int compare_and_swap(int *val, int expected, int new_val)
{
    int tmp = *val;
    if (*val == expected)
        *val = new_val;
    return tmp;
}
```
- Mutual exclusion with compare_and_swap():
```cpp
do {
    while (compare_and_swap(&lock, 0, 1) != 0); // spin until the old value is 0; whoever reads 0 sets it to 1, giving mutual exclusion
    /* critical section */
    lock = 0;
    /* remainder section */
} while (true);
```
#### Priority Inversion
- **Priority inversion**
    - Suppose three processes with priorities $L < M < H$
    - $H$ needs a resource held by $L$ and waits for it
    - But $M$ preempts $L$ => $M$ indirectly blocks $H$, which is backwards
    - **Priority-inheritance protocol**: the process ($L$) holding a resource needed by a high-priority process ($H$) temporarily inherits that high priority
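The inheritance rule can be sketched as "effective priority = max of your own priority and your highest waiter's" (the `Proc` struct and helper are my own illustration; higher number = higher priority here):

```cpp
#include <algorithm>

// Toy priority inheritance: a process's effective priority is the max of
// its base priority and the priority of the highest-priority process
// blocked on a resource it holds.
struct Proc {
    int base_priority;
    int waiter_priority; // highest priority among processes blocked on us; -1 if none
};

int effective_priority(const Proc& p) {
    return std::max(p.base_priority, p.waiter_priority);
}
```

With $L=1$, $M=2$, $H=3$: once $H$ blocks on $L$'s resource, $L$'s effective priority becomes 3, so $M$ can no longer preempt it and the inversion is gone.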
### Ch7 Synchronization Examples
### Ch8 Deadlock
#### Four Necessary Conditions for Deadlock
- Mutual exclusion: a resource can only be held by one process at a time
- Hold and wait: holding resources while also waiting for others
- No preemption: you can only wait for a process to finish and voluntarily release its resources
- Circular wait
#### Avoiding Deadlock
- **Deadlock Prevention**: prevent it by breaking one of the four conditions
    - Mutual exclusion: can't break this one
    - Hold and wait:
        - ~~hold~~: release all held resources before making a request
        - ~~wait~~: request all needed resources before starting to run
    - No preemption:
        - if you have to wait, release what you hold first
        - if your request hits a resource held by a process that is itself waiting, preempt it
    - Circular wait: number the resource types and only acquire in strictly increasing order (if you need several instances of one type, grab them all at once); to take a lower number you must first release the higher ones
- **Deadlock Avoidance**: avoid it as it happens; don't allocate resources to a request that would endanger the safe state
    - single instance: check whether allocating "would" create a cycle
    - multiple instances
        - safe sequence
            - $Max$: the maximum number of requests a process may make
            - $Allocation$: the resources a process currently holds
            - $Need$: $Max - Allocation =$ the requests it may still make (worst case)
            - A sequence of all processes is safe if each process's $Need$ can be satisfied by available + $\sum$ resources held by the processes earlier in the sequence ($Allocation$)
        - **Banker's Algorithm**: test whether, "if" we allocated the requested resources, a safe sequence would still exist in the worst case (every process requesting up to $Max$); only then actually allocate
            - Safety algorithm: checks whether the state is safe (unsafe here ++does not necessarily mean deadlock++, since it's the worst case)
                1. assume every process needs up to $Max_i$ resources
                2. find a process whose need can be satisfied by the free resources
                3. release that process's resources
                4. repeat step 2 until every process is satisfied
            - $O(mn^2)$: $m$ resource types, $n$ processes
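The four-step safety algorithm above can be sketched like this (a sketch with a single resource type to keep it short; the real algorithm uses vectors over $m$ resource types):

```cpp
#include <vector>

// Safety algorithm: repeatedly find a process whose Need fits in the
// currently available resources, pretend it finishes and releases its
// Allocation, and repeat. Safe iff every process can finish.
bool is_safe(std::vector<int> need, std::vector<int> alloc, int available) {
    int n = (int)need.size();
    std::vector<bool> done(n, false);
    int finished = 0;
    while (finished < n) {
        bool progress = false;
        for (int i = 0; i < n; i++) {
            if (!done[i] && need[i] <= available) {
                available += alloc[i]; // process i finishes, releases its resources
                done[i] = true;
                finished++;
                progress = true;
            }
        }
        if (!progress) return false; // nobody can proceed in the worst case: unsafe
    }
    return true;
}
```

Banker's algorithm is then: tentatively apply the request to `alloc`/`need`/`available`, run `is_safe`, and roll back if it returns false.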
#### Deadlock Detection
- single instance: drop the resource nodes from the resource-allocation graph (leaving a wait-for graph) and look for a cycle
- multiple instances: run the safety algorithm, but without the worst-case $Max$ business; only look at the system's current state ($Allocation$ and $Request$). If no safe sequence exists, it ++really is deadlocked++
#### Deadlock Recovery
- **Process termination**: kill (abort) processes
    - kill all deadlocked processes, or
    - kill them one at a time until the deadlock is gone
- **Resource Preemption**: take resources away
    1. pick a poor soul (the victim) and hand its resources to someone else
    2. ++rollback++ the victim to some safe state (or, simplest, a total rollback: start over) and restart it
    3. and make sure the victim doesn't ++starve++
### Ch9 Memory Management
- The CPU can only directly access registers and memory
- **Address Binding**
    - **compile time**: the physical addresses the program will use in memory are fixed at compile time
    - **load time**: relative addresses, bound when loaded into memory => relocatable, can be run at any address, but can't be moved mid-run; otherwise the whole program has to be reloaded and rerun
    - **runtime**: virtual addresses; move it wherever you like, the page table says where to look (requires an MMU, Memory Management Unit)
- Dynamic loading: done by the programmer; a function is only loaded when it's actually called
- Dynamic linking: done by the OS; different processes link to the same copy of a library
- The swapper and the pager are different
    - Swapper (midterm scheduler): moves an "entire" process out of/into memory
    - Pager: moves some of a process's pages out of/into memory; the process can keep running (virtual memory)
### Ch10 Virtual Memory
- We don't want to load a whole process into memory, because it contains
    - lots of error-handling code that's normally never reached
    - rarely used functions
    - shared libraries
    - unused arrays, tables, ...
- Hardware support: the page table needs a valid bit
- Some of it still has to be loaded up front. When a routine is finally called, a stub locates it, loads it into memory, and then replaces itself with the routine's memory address
- **Demand Paging**: don't move the whole process into memory; load pages from disk only when they're needed
    - less I/O needed (shorter load time) => faster response
    - less memory needed => more users
- **Page fault**
    1. The valid bit in the page table is 0, triggering a page-fault trap
    2. The OS receives the page-fault trap
    3. The OS checks the process's PCB
        1. invalid reference => abort
        2. just not in memory => continue
    4. Get an empty frame
    5. Swap the page from disk into the frame
    6. Set that page's valid bit in the page table to 1
    7. Restart the instruction (PC -= 4)
- **Free-Frame List**: a pool of free frames, ready to hand out but **zero-fill-on-demand** (zeroed) first
- **Copy-on-Write**: COW; when creating a process with fork, the parent (cow) and child (calf) are allowed to share the same pages, until someone writes to a page and only that page gets copied
- **Page Replacement**: swapping pages out; add a dirty bit, and only write a page back to disk when it's 1
#### ==Page-Replacement Algorithms==
- **FIFO**: suffers from Belady's anomaly
- **Optimal**: requires seeing the future; can't be done
- **LRU**: least recently used, implementable two ways
    - ++Counter++: record a timestamp of each page's last use; needs a traversal of the whole table
    - ++Stack++: a stack built from a doubly linked list; on use, move the page to the top
- **LRU Approximation**: add a reference bit, same as the used bit from computer organization; second chance
    - Enhanced: use the pair (reference, modify)
        - (0,0): not used, not modified
        - (0,1): not used, modified
        - (1,0): used, not modified
        - (1,1): used, modified
- **Counting**: record how many times each page has been used
    - ++LFU++: Least Frequently Used — the idea being it's rarely used anyway
    - ++MFU++: Most Frequently Used — the idea being it's already had its share of use
- **Belady's Anomaly**
- **Stack algorithm**
- **Page-Buffering Algorithm**
    - keep a free-frame pool; whenever a free frame is handed out, move one victim in
    - don't evict the victim right away, so if it gets referenced again soon there's no need to swap it back in
    - evict the victim whenever convenient
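The LRU stack idea above (doubly linked list, move to the top on use, evict from the bottom) can be sketched like this — `std::list` stands in for the hand-rolled doubly linked list:

```cpp
#include <list>
#include <algorithm>

// LRU via a doubly linked list: the front is the most recently used page,
// the back is the least recently used and thus the replacement victim.
struct LRUStack {
    std::list<int> pages; // page numbers, most recent at the front

    // Reference a page: move it to the front (or insert it if new).
    void touch(int page) {
        auto it = std::find(pages.begin(), pages.end(), page);
        if (it != pages.end()) pages.erase(it);
        pages.push_front(page);
    }

    // Victim for replacement: the least recently used page, at the back.
    int victim() const { return pages.back(); }
};
```

The `std::find` here is why the counter scheme exists as an alternative: the move-to-front itself is O(1) once you hold the node, but locating the node takes a search unless you also keep a page-to-node map.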
#### ==Frame-Allocation Algorithms==
- How many frames to give each process
- Which frame to replace (huh??)
- *minimum*: each process has its own minimum page requirement (e.g. enough pages for everything one instruction can reference)
- *maximum*: the total frames in the system
- **Fixed Allocation**
    - **Equal Allocation**: split the frames evenly across processes
    - **Proportional Allocation**: split in proportion to process size, $S = \sum s_i$
    - **Priority Allocation**: split by priority, which changes at runtime; when replacing a page, a process can pick one of its own frames or grab a frame from a lower-priority process
- **Global replacement**
    - can grab frames from other processes
    - must make sure every process keeps its minimum frames, or it will thrash
    - process execution times vary a lot
    - but it improves throughput, so it's commonly used
- **Local replacement**
    - can only replace frames it was allocated
    - more stable per-process performance
- **Reclaiming Pages**: keep a free-frame list, and start replacing once it drops below a threshold, so a free frame is always available
  (not sure how this differs from page buffering??)
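Proportional allocation in formula form: process $i$ of size $s_i$ gets $a_i = \frac{s_i}{S} \times m$ frames. A quick sketch (function name is mine; the leftover frames from integer truncation are a real-world detail this skips):

```cpp
#include <vector>

// Proportional allocation: process i of size s_i gets a_i = s_i / S * m
// frames, where S is the total size and m the number of available frames.
std::vector<int> proportional_alloc(const std::vector<int>& sizes, int m) {
    int S = 0;
    for (int s : sizes) S += s;
    std::vector<int> frames;
    for (int s : sizes) frames.push_back(s * m / S); // integer division truncates
    return frames;
}
```

With the textbook numbers (two processes of size 10 and 127 sharing 62 frames), this gives 4 and 57 frames respectively.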
#### Working-Set Model
- Working-set window: a parameter $\Delta$
- **Working set**: the set of pages referenced in the last $\Delta$ references (costly to track)
    - approximates the locality model
    - the working set changes over time

- $D = \sum WSS_i$
    - $WSS_i$: the size of process $i$'s working set
    - $m$: available frames
    - ++$D > m$++: not enough frames => thrashing
    - ++$D \ll m$++: lots of idle frames => increase the degree of multiprogramming
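Computing one process's $WSS_i$ from its reference string is just "count the distinct pages in the last $\Delta$ references" (a sketch; doing this continuously in hardware is the part the note calls costly):

```cpp
#include <vector>
#include <set>
#include <algorithm>

// Working-set size: the number of distinct pages referenced in the last
// `delta` references of the reference string.
int working_set_size(const std::vector<int>& refs, int delta) {
    std::set<int> ws;
    int start = std::max(0, (int)refs.size() - delta);
    for (int i = start; i < (int)refs.size(); i++)
        ws.insert(refs[i]);
    return (int)ws.size();
}
```

Summing this over all processes gives $D$, which is then compared against $m$ as above.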

#### Page-Fault Frequency
- Track each process's page-fault frequency (PFF), and use a local replacement policy (otherwise the added frames could just get stolen)
- If the PFF is too high, add frames; if too low, take frames away
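The PFF rule is a simple two-threshold controller (a sketch; the threshold values are made up for illustration):

```cpp
// Page-fault-frequency control: keep a process's fault rate between two
// thresholds by adjusting its frame count. Thresholds are illustrative.
int adjust_frames(int frames, double pff, double upper = 0.10, double lower = 0.01) {
    if (pff > upper) return frames + 1;               // faulting too often: give it a frame
    if (pff < lower && frames > 1) return frames - 1; // barely faulting: reclaim one
    return frames;
}
```

Run per process at each sampling interval; the local replacement policy is what guarantees a frame added here actually stays with the process.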

## Past Exams
### Midterm
#### 2005
- Define the essential properties of the following types of operating systems:
    - Batch: jobs run one after another, no intervention possible, non-interactive
    - Interactive: the user can type commands and immediately see the program's results
    - Time sharing: (multitasking), the CPU switches rapidly between jobs, enabling interactivity
    - Real time: used where real-time responses are required
    - Network: an operating system oriented to computer networking
    - Distributed: for computers that are connected but independent and separate
- How does an OS achieve memory protection? (wait, do we need to study this?)
- Threads
    - user-level thread: managed through the API provided by a user-level thread library
    - kernel-level thread: managed entirely by the kernel
#### 2006
- Four main purposes of an operating system?
    1. Program Management
    2. Memory Management
    3. Handling I/O
    4. User Interface
- Multi-
    - **multiprogramming**: keep multiple jobs in main memory. If one job gets occupied with input/output, the CPU can be assigned to other jobs, so the CPU always has something to execute
    - **multiprocessing**: multiple CPUs or cores running different processes simultaneously
    - **multitasking**: a single CPU switching between jobs to reduce response time
    - **multithreading**: a process split into many threads; the threads share the code section, data section, and other OS resources
- Passing parameters to the OS
    1. in registers
    2. the address of a parameter block, passed in a register
    3. on the stack
- Process: a program in execution, containing
    - text section: the code
    - program counter, registers
    - data section: containing global variables
    - stack
    - heap
- Clustering: provides a high-availability service that survives failures
    - asymmetric clustering: has one machine in hot-standby mode, monitoring the server and taking over when it fails
    - symmetric clustering: has multiple nodes running applications and monitoring each other; no node is set aside as a hot standby
- **process cooperation**
    - advantages:
        1. information sharing
        2. computation speedup
        3. modularity
        4. convenience
    - interprocess communication (IPC)
        1. shared memory
        2. message passing
- Process vs. thread
| Process | Thread |
| --- | --- |
| A unit of resource distribution by the OS; an executing instance of a program | Basic unit of CPU utilization |
| Heavyweight process | Lightweight process |
| Context-switching cost is high | Context-switching cost is low |
| High process-management cost (create, destroy, schedule) | Low thread-management cost |
| Separate processes don't share memory and resources (except shared memory) | Threads in the same process share the code and data sections and other OS resources |
- Why dual mode:
    - lets the OS protect itself and other system programs
    - some instructions are privileged and can only be executed in kernel mode
- response time: how long after a request is submitted until the first response is produced
- turnaround time: how long a process takes to complete
#### 2017
- Which problem do caches solve? What problem do caches cause?
    - they move needed data from slower storage to faster storage
    - the copies must be kept updated and consistent
- Context switch: when the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process via a context switch.
- The "context" of a process is represented in the PCB and includes
    - Process ID
    - Process state: running, waiting, ...
    - Program counter
    - CPU registers
    - CPU-scheduling information: priority, queue pointers
    - Memory-management information: memory allocated to this process
    - Accounting information: CPU used, clock time elapsed since start, time limits
    - I/O status information: I/O devices allocated to this process, list of open files
- Linux threads:
    - What does struct `task_struct` point to?
        - the process data structure
    - Which system call is used to create a thread?
        - `clone()`
#### 2018
- Direct Memory Access (DMA)
    - lets I/O-fast subsystems (GPU, network card, sound card, ...) read and write memory directly without going through the CPU, which is more efficient
- Speed of storage
    - register > cache > main memory > magnetic disk > optical disk > tapes
- Explain what "User Mode" and "Kernel Mode" are. Why does an operating system need kernel mode?
    - User Mode: can't manipulate hardware directly the way the OS does; when that's needed, a system call switches into kernel mode
    - Kernel Mode: can perform almost all hardware operations and system routines
    - Why: it's safer; it protects the OS
- The advantages of using a virtual machine
    - can be used to test malware safely
    - scalability: the hardware resources allocated to a VM (CPU, memory, disk, ...) can be tuned
    - development and testing efficiency: cross-platform testing without needing many physical devices
- The main advantages of the microkernel approach to system design
    - easier to extend
    - easier to port
    - more reliable (because it's simple)
    - more secure (most operations run in user mode)
- What is a context switch and how does it work?
    - when switching processes, the CPU saves the state of the current process and loads the state of the new one
    - the current process's information and state are saved into its PCB in memory; the new process and its PCB are then loaded by the kernel and executed
- fork()
    - system call that creates a new process
- exec()
    - system call used after fork() to replace the process's memory space with a new program
- Benefits of multithreading
    - Responsiveness: may allow execution to continue when part of the process is blocked
    - Resource sharing: threads share the resources of their process, easier than shared memory or message passing
    - Economy: threads are cheaper than process creation and context switching
    - Scalability: a process can take advantage of multicore architectures
- Interrupts
    - What is the Interrupt Service Routine (ISR)?
        - the interrupt vector points to the ISR, which runs the predefined handler for that interrupt
- Parts of a process
    - text section: the program code
    - stack: temporary data; function parameters, return addresses, local variables, ...
    - program counter: points at the currently executing instruction
    - data section: global variables
    - heap: dynamically allocated memory
- Thread pool
    - What: create a number of threads in advance and keep them in a pool; take one whenever needed
    - Advantages:
        - faster to serve a request than creating a thread on demand
        - better control over the total number of threads
- What does the dispatcher module do?
    - switching context
    - switching to user mode: changing the mode of operation from kernel mode to user mode when executing user processes
    - jumping to the proper location in the user program to resume that program
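The fork-then-exec pattern from the 2018 questions above can be sketched in POSIX C (hedged: a generic illustration using `/bin/echo` and a helper name of my own, not the exam's answer; POSIX-only):

```cpp
#include <unistd.h>
#include <sys/wait.h>
#include <cstdlib>

// fork() duplicates the calling process; exec() then replaces the child's
// memory space with a new program. The parent waits and collects the
// child's exit status.
int run_echo() {
    pid_t pid = fork();
    if (pid == 0) {
        // child: replace ourselves with the echo program
        execlp("echo", "echo", "hello from the child", (char*)NULL);
        _exit(127); // only reached if exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0); // parent: wait for the child to finish
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Note the child never returns from a successful `execlp`: its code, data, and stack are all replaced, which is exactly why the `_exit(127)` line is only an error path.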
#### 2022
- Differences between CPU-bound and I/O-bound
    - CPU-bound: spends most of its time computing; high CPU utilization
    - I/O-bound: spends most of its time doing I/O; low CPU utilization
- Parallelism: multiple threads or processes executing at the same time
- Concurrency: only one thread or process executes at a time, but several can exist in the system simultaneously
- Threads
    - user thread: management done by a user-level thread library
    - kernel thread: supported by the kernel
---
{%hackmd @5ood/SJI_TWhVR %}