Notes written while preparing for the 112 (2023) CS graduate-school entrance exams; corrections are welcome in the comments.
Parts of the text quote other people's notes, teaching materials, or articles.
If anything infringes copyright, my apologies; it will be taken down immediately upon notice.

Resources

Abbreviations

  • Spooling: simultaneous peripheral operation on-line
    • Use the disk as a buffer so that the I/O of one job overlaps with the computation of other jobs (a plain buffer overlaps I/O and computation within the same job)
  • DMA: Direct memory access
  • ISR: interrupt service routine
  • RAID: Redundant Array of Independent Disks
  • FAT: File Allocation Table

CH1 Introduction

  • Multiprogramming
  • Multitasking (time-sharing)
  • Distributed system
    • tightly coupled
    • loosely coupled
      • client-server
      • peer to peer
  • Multicore and Multiprocessor System
    • Both multicore and multiprocessor designs aim to speed up program execution
    • A multicore system has a single processor with multiple execution units (cores)
    • A multiprocessor system has two or more processors
    • Multiprocessors are more reliable than multicore processors
    • A multicore processor generates less traffic
    • A multicore system is better at speeding up a single program, while a multiprocessor system is better at running multiple programs at the same time
    • Modern computers typically have multiple processors, each of which is multicore
  • Real-Time System
    • Hard real time
      • A real-time task must finish within the specified time limit; otherwise it counts as a failure
      • Devices or mechanisms whose processing time is long or unpredictable should be avoided or used sparingly
        • ex. no virtual memory, minimal disk usage
    • Soft real time
      • A real-time process keeps the highest priority until it finishes execution
      • Virtual memory may be used, but a real-time process's pages must not be swapped back to disk until it completes
      • Time-sharing is allowed
  • Batch system
    • Less urgent, periodic, non-interactive jobs are accumulated into batches and submitted to the system for processing in groups

CH2 computer system architecture

CH4 Process

  • Process data in memory
    • code: Text
    • static or global variables: Data
    • local variables: Stack
    • memory obtained with malloc: Heap
    • ref: mega大筆記CH4-process
  • 7 process state
  • Fork-related
    • vfork: similar to fork, except that the child shares the parent's stack (the child directly extends the parent's stack), so data may be modified by both sides at the same time
    • After fork there is no rule about whether the parent or the child runs first.
      • But if the child relies on copy-on-write to save space, the child should run first, so that the parent does not clobber the shared data beforehand
      • ref
    • The pid returned by fork() has three cases
      • positive: in the parent (the value is the child's pid)
      • zero: in the child
      • negative: fork failed
    • exec/execlp
    • wait(): this system call suspends the caller until some event occurs, e.g., a parent process waiting for a child process to terminate
    • exit(): this system call terminates the process and releases its resources
      • exit(0) indicates normal termination
      • exit(-1) indicates abnormal termination (the explicit call can often be omitted, since reaching the closing brace of main ends the process anyway)
    • Practice the exercises in the mega筆記 along with the above; a minimal fork example is sketched below.
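
A minimal sketch (mine, not from the notes; ls is just an example command) tying together fork(), the three pid cases, exec, wait(), and exit():

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {                               /* negative: fork failed */
            perror("fork");
            exit(-1);                                /* abnormal termination */
        } else if (pid == 0) {                       /* zero: we are in the child */
            execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
            exit(-1);                                /* reached only if execlp fails */
        } else {                                     /* positive: parent; pid is the child's pid */
            wait(NULL);                              /* suspend until the child terminates */
            printf("child %d finished\n", (int)pid);
        }
        exit(0);                                     /* normal termination */
    }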

Deadlock

For definitions and so on, see the lecture notes or the textbook.
Progress vs. Bounded waiting vs. Starvation

Synchronization

software-based solution to the critical-section problem

Peterson’s solution

    int turn;
    boolean flag[2];

    do {
        flag[i] = True;
        turn = j;
        while (turn == j && flag[j]);
        // above: entry section

        // critical section (C.S.)

        flag[i] = False;
        // above: exit section

        // remaining part
    } while (1);
  1. Mutual exclusion

If P0 is in the CS, then flag[1] == False or turn == 0.
If P1 is in the CS, then flag[0] == False or turn == 1.

Suppose both processes were in the CS at the same time.
Then flag[0] == flag[1] == True (each process set its own flag on entry),
so P0 must have observed turn == 0 and P1 must have observed turn == 1 to escape its while loop.
However, turn can only be 0 or 1!
So P0 and P1 can't be in the critical section at the same time.

  2. Progress

Suppose P0 wants to enter the CS.
If P1 is not ready, then flag[1] == False and P0 enters the CS.
If P1 and P0 are both ready, then flag[0] == flag[1] == True.

Now check turn:
if turn == 0, P0 enters the CS;
if turn == 1, P1 enters the CS.
Either way, some waiting process can enter the CS!

  3. Bounded waiting

If P1 exits the CS, then flag[1] == False, so P0 can enter.
If P1 exits the CS and immediately sets flag[1] = True again,
it also sets turn = 0, so P0 can still enter.
P0 will not wait indefinitely.

  4. Only two processes are used here, but n-process solutions also exist (e.g., the bakery algorithm below).

producer/consumer problem(bounded buffer)

Producer:

    int buffer[BUFFER_SIZE];
    int in = 0;
    int counter = 0;

    while (True) {
        nextItem = getItem();
        while (counter == BUFFER_SIZE);    // busy wait while the buffer is full
        buffer[in] = nextItem;
        in = (in + 1) % BUFFER_SIZE;
        entry_section();
        counter++;
        exit_section();
        rest_part();
    }

Consumer:

    int out = 0;

    while (True) {
        while (counter == 0);              // busy wait while the buffer is empty
        item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        entry_section();
        counter--;
        exit_section();
        rest_part();
    }

Use wait() and signal()

  • We assume that the pool consists of n buffers, each capable of holding one item.
  • The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers. The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.
  • We can interpret this code as the producer producing full buffers for the consumer or as the consumer producing empty buffers for the producer.
  • Shared data:

    int n;
    semaphore mutex = 1;
    semaphore empty = n;
    semaphore full = 0;
  • Producer
  • Consumer
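
The producer and consumer loops themselves (a minimal sketch following the standard textbook semaphore solution; next_produced and next_consumed are assumed local item variables):

    // Producer
    do {
        // produce an item in next_produced
        wait(empty);        // wait for an empty buffer slot
        wait(mutex);        // lock the buffer pool
        // add next_produced to the buffer
        signal(mutex);
        signal(full);       // one more full buffer
    } while (true);

    // Consumer
    do {
        wait(full);         // wait for a full buffer slot
        wait(mutex);        // lock the buffer pool
        // remove an item from the buffer into next_consumed
        signal(mutex);
        signal(empty);      // one more empty buffer
        // consume the item in next_consumed
    } while (true);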

Bakery algorithm

  • for process Pi
    do {
        choosing[i] = True;
        num[i] = max(num[0], num[1], num[2], ..., num[n-1]) + 1;
        choosing[i] = False;
        for (int j = 0; j < n; j++) {
            while (choosing[j]);
            while (num[j] != 0 && (num[j], j) < (num[i], i));
        }
        // locking done!
        // critical section
        num[i] = 0;   // release lock!
        rest_part();
    } while (True);
  1. Mutual exclusion
    Each process gets a unique ticket (ties in the ticket number are broken by the process id)
  2. Progress
    At most n processes take a ticket at once (the max(...) line), so the decision of who enters next completes in finite time
  3. Bounded waiting
    Processes enter the CS on a first-come, first-served basis

Atomic TestAndSet

    bool testAndSet(bool &lock) {
        bool value = lock;
        lock = True;
        return value;
    }

    // For process Pi
    do {
        while (testAndSet(lock));
        // critical section
        lock = False;
    } while (True);
  1. Mutual exclusion
    Yes. The first process to reach testAndSet sets lock to true; if the other process then tries to enter, it is stuck at the while loop (its value reads lock as true)
  2. Progress
    Yes. The first process to grab the lock starts running immediately
  3. Bounded waiting
    No. If P0 happens to win the lock every time, it can keep entering the critical section forever
more details

bounded waiting testAndSet
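
A sketch of that bounded-waiting version (the standard textbook algorithm; waiting[] is a shared boolean array over the n processes, initially all false, and lock is initially false):

    do {
        waiting[i] = True;
        key = True;
        while (waiting[i] && key)
            key = testAndSet(lock);
        waiting[i] = False;

        // critical section

        j = (i + 1) % n;
        while ((j != i) && !waiting[j])   // scan for the next waiting process
            j = (j + 1) % n;
        if (j == i)
            lock = False;                 // nobody is waiting: release the lock
        else
            waiting[j] = False;           // hand the lock directly to Pj
        // remainder section
    } while (True);

Because the lock is passed around in cyclic order, every waiting process gets in within n − 1 turns, which is what gives bounded waiting.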

Atomic Swap

Dinosaur-book version:

    int lock = 0;   // 0 means false (unlocked)

    int Swap(int *lock, int oldvalue, int newvalue) {
        int value = *lock;
        if (*lock == oldvalue)
            *lock = newvalue;
        return value;
    }

    // for Pi; lock was initialized as false (0)
    do {
        while (Swap(&lock, 0, 1));   // spin until we change lock from 0 to 1
        // critical section
        lock = 0;
    } while (True);

周志遠 (Jerry Chou) version:

    // for Pi
    do {
        key[i] = True;
        while (key[i] == True) {
            Swap(lock, key[i]);   // atomically exchange lock and key[i]
        }
        // critical section
        lock = False;
    } while (True);
  1. Mutual exclusion
    Yes, similar to TestAndSet
  2. Progress
    Yes, same as TestAndSet
  3. Bounded waiting
    No, same as TestAndSet

Mutex lock

The hardware-based solutions to the critical-section problem are complicated and generally inaccessible to application programmers, so OS designers build higher-level software tools; the mutex lock is generally considered the simplest of these synchronization tools.
(mutex is short for mutual exclusion)

    acquire() {
        while (!available);   /* busy wait */
        available = false;
    }

    release() {
        available = true;
    }
  • spinlock

    While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire().
    In fact, this type of mutex lock is also called a spinlock because the process “spins” while waiting for the lock to become available.

  • Cons:
    Busy waiting wastes CPU cycles that some other process could otherwise use productively.

  • Pros:

  1. No context switch is required (context switch may take considerable time.)
  2. When locks are expected to be held for short times, spinlocks are useful.
  3. They are often employed on multiprocessor systems.
    One thread can “spin” on one processor
    while another thread runs its critical section on another processor.
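
As a concrete illustration (an assumption of mine, not from the notes), the acquire()/release() above can be made atomic with C11 atomics, where atomic_flag_test_and_set supplies the atomic read-modify-write:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear means "available" */

    void acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;   /* busy wait (spin) until the flag was previously clear */
    }

    void release(void) {
        atomic_flag_clear(&lock);
    }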

Semaphore

Notes
The biggest difference between a mutex and a semaphore:
A Mutex is different than a semaphore as it is a locking mechanism while a semaphore is a signalling mechanism. A binary semaphore can be used as a Mutex but a Mutex can never be used as a semaphore.

  • counting semaphore
    • can range over an unrestricted domain.
  • binary semaphore
    • can range only between 0 and 1, behaves like a mutex lock

Busy waiting semaphore

Semaphore can't be negative here!
This kind of semaphore is an integer variable that provides two atomic operations: wait and signal.

  • wait() is sometimes written as P()
  • signal() is sometimes written as V()
  • src: wiki
    The initial value of S is 1
    wait(S) {
        while (S <= 0);   /* busy wait */
        S--;
    }

    signal(S) {
        S++;
    }
  1. Usage
    P1 executes S1; P2 executes S2.
    S2 must be executed only after S1 has completed.
    Solution: use a semaphore sync initialized to 0.

    P1: S1; signal(sync);
    P2: wait(sync); S2;
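
A runnable sketch of this ordering pattern, assuming POSIX semaphores and pthreads (all names here are illustrative):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t sync_sem;   /* initialized to 0, so sem_wait blocks until sem_post */

    void *p1(void *arg) {
        printf("S1\n");          /* statement S1 */
        sem_post(&sync_sem);     /* signal(sync) */
        return NULL;
    }

    void *p2(void *arg) {
        sem_wait(&sync_sem);     /* wait(sync) */
        printf("S2\n");          /* statement S2: always runs after S1 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&sync_sem, 0, 0);           /* initial value 0 */
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&sync_sem);
        return 0;
    }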

Non-Busy waiting (bounded-waiting) semaphore

Semaphore can be negative here!

One way to add and remove processes from the list so as to ensure bounded-waiting is to use a FIFO queue, where the semaphore contains both head and tail pointers to the queue.

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();    // system call
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);  // system call
        }
    }

classical Problems of Synchronization

The Bounded-Buffer

Mentioned above

First readers-writers

https://en.wikipedia.org/wiki/Readers–writers_problem

  • readers-preference (readers first)
  • this solution is sub-optimal, because it is possible that a reader R1 might have the lock, and then another reader R2 requests access. It would be foolish for R2 to wait until R1 was done before starting its own read operation; instead, R2 should be allowed to read the resource alongside R1 because reads don't modify data, so concurrent reads are safe.
  • Readers share a single rw_mutex lock
  • Writers may have a starvation problem (readers can keep preempting a waiting writer).
    • ex. if R1 and R2 keep reading the file over and over, W starves.
    semaphore rw_mutex = 1;   // mutual exclusion for writers
    semaphore mutex = 1;      // mutual exclusion for readcount
    int readcount = 0;

    writer() {
        do {
            wait(rw_mutex);
            ...
            /* writing is performed */
            ...
            signal(rw_mutex);
        } while (true);
    }

    reader() {
        do {
            wait(mutex);
            readcount++;
            if (readcount == 1)
                wait(rw_mutex);     // first reader locks writers out
            signal(mutex);
            ...
            /* reading is performed */
            ...
            wait(mutex);
            readcount--;
            if (readcount == 0)
                signal(rw_mutex);   // last reader lets writers back in
            signal(mutex);
        } while (true);
    }

Second readers-writers

  • New constraint: no writer, once added to the queue, shall be kept waiting longer than absolutely necessary
  • writers-preference (writers first)
    int readcount, writecount;                      // (initial value = 0)
    semaphore rmutex, wmutex, readTry, resource;    // (initial value = 1)

    // READER
    reader() {
        // <ENTRY Section>
        readTry.P();         // Indicate a reader is trying to enter
        rmutex.P();          // lock entry section to avoid race condition with other readers
        readcount++;         // report yourself as a reader
        if (readcount == 1)  // checks if you are first reader
            resource.P();    // if you are first reader, lock the resource
        rmutex.V();          // release entry section for other readers
        readTry.V();         // indicate you are done trying to access the resource

        // <CRITICAL Section>
        // reading is performed

        // <EXIT Section>
        rmutex.P();          // reserve exit section - avoids race condition with readers
        readcount--;         // indicate you're leaving
        if (readcount == 0)  // checks if you are last reader leaving
            resource.V();    // if last, you must release the locked resource
        rmutex.V();          // release exit section for other readers
    }

    // WRITER
    writer() {
        // <ENTRY Section>
        wmutex.P();           // reserve entry section for writers - avoids race conditions
        writecount++;         // report yourself as a writer entering
        if (writecount == 1)  // checks if you're first writer
            readTry.P();      // if you're first, lock the readers out; prevent them from trying to enter CS
        wmutex.V();           // release entry section
        resource.P();         // reserve the resource for yourself - prevents other writers
                              // from simultaneously editing the shared resource

        // <CRITICAL Section>
        // writing is performed
        resource.V();         // release file

        // <EXIT Section>
        wmutex.P();           // reserve exit section
        writecount--;         // indicate you're leaving
        if (writecount == 0)  // checks if you're the last writer
            readTry.V();      // if you're last writer, unlock the readers so they may try to enter CS for reading
        wmutex.V();           // release exit section
    }

For the following two, see the Wikipedia link above:

  • Third readers–writers problem
  • Simplest reader writer problem

The Dining-Philosophers(Monitor)

    monitor dp {
        enum {thinking, eating, hungry} state[5];
        condition self[5];

        void pickup(int i);
        void putdown(int i);
        void test(int i);

        void init() {
            for (int i = 0; i < 5; i++)
                state[i] = thinking;
        }
    }

    void pickup(int i) {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = thinking;
        test((i + 1) % 5);    // right neighbour
        test((i + 4) % 5);    // left neighbour ((i - 1) mod 5, kept non-negative)
    }

    void test(int i) {
        if (state[(i + 1) % 5] != eating &&
            state[(i + 4) % 5] != eating &&
            state[i] == hungry) {
            state[i] = eating;
            self[i].signal();
            /*
             * if Pi is suspended (in pickup), resume it.
             * else do nothing.
             */
        }
    }
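
Usage, as in the textbook: philosopher i brackets eating between the two monitor calls.

    dp.pickup(i);
    // eat
    dp.putdown(i);

This guarantees that no two neighbours eat at the same time and that no deadlock occurs, but a philosopher may still starve.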

Monitor

  • A monitor is a high-level “data structure” for solving synchronization problems: an ADT (abstract data type). It also guarantees that only one process at a time can be active within the monitor
  • Structure
    • Declarations of shared variables
    • A set of local functions or procedures
    • Initialization area
  • Used to prevent careless programmer errors such as:
    • many processes may be executing in CS
      • signal(mutex);

        critical section

        wait(mutex);
    • Causing deadlock
      • wait(mutex);

        critical section

        wait(mutex)
  • The following code is a Hoare monitor
  • Implementation with semaphores (the internal queues are FCFS/FIFO)
  • Reference:
    // wrapper generated for each monitor procedure F
    procedure entry_F() {
        wait(mutex);
        /* body of F */
        if (next_count > 0)   // if some signaling process P is suspended, let P resume first
            signal(next);
        else                  // only if no P is waiting, let the next process Q enter
            signal(mutex);
    }
    x.wait() {
        x_count++;            // one more Q waiting on condition x
        if (next_count > 0)   // same hand-off as above, done just before blocking
            signal(next);
        else
            signal(mutex);
        wait(x_sem);          // Q blocks itself
        x_count--;            // when Q resumes, decrement x_count
    }
    x.signal() {
        if (x_count > 0) {    // only has an effect if some Q is blocked on x
            next_count++;
            signal(x_sem);    // resume Q
            wait(next);       // the signaler suspends itself until Q leaves or waits
            next_count--;
        }
    }
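
The shared semaphores and counters assumed by the implementation above (standard textbook names):

    semaphore mutex = 1;   // one process active in the monitor at a time
    semaphore next = 0;    // signaling processes suspend themselves here
    int next_count = 0;    // number of processes suspended on next

    // for each condition variable x:
    semaphore x_sem = 0;
    int x_count = 0;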

Dekker's algorithm

    boolean flag[2];   // initially false
    int turn;          // 0 or 1

    do {
        flag[i] = true;
        while (flag[j]) {
            if (turn == j) {
                flag[i] = false;
                while (turn == j)
                    ;   // no-op: wait until it is our turn
                flag[i] = true;
            }
        }
        // critical section
        turn = j;
        flag[i] = false;
        // remainder section
    } while (True);
  1. Mutual exclusion
    • When both Pi and Pj want to enter the CS: if turn == i, Pi enters while Pj waits, and symmetrically for turn == j
  2. Progress
    • If turn == i but Pi's flag[i] == false, then Pj can always enter the CS when it wants to (Pj does not get trapped in the waiting loops)
    • Within finite time, turn is resolved to i or j, letting Pi or Pj enter the CS: no deadlock
  3. Bounded waiting
    • If turn == i, Pi enters and Pj waits. When Pi leaves the CS it sets turn = j, so Pj can then enter the CS.

Reference: geeksforgeeks

Miscellaneous

  • Disabling interrupts gives mutual exclusion on a uniprocessor, but it does not work on a multiprocessor, because the other CPUs can still touch the shared variables
    • see past-exam questions 25, 26
  • Spinlocks are not recommended on a uni-core processor; the cost there is much higher than on a multicore processor
  • A bitmap is not a file

Virtual memory

  • Effective access time (EAT):

    • P × hit time + (1-P) × miss time
      • ex. 0.80 x 120 + 0.20 x 220 = 140
  • Average memory access time (AMAT)

    • AMAT = Hit time + Miss rate × Miss penalty
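      • e.g. (illustrative numbers): hit time 100 ns, miss rate 5%, miss penalty 1000 ns, so AMAT = 100 + 0.05 × 1000 = 150 ns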
  • stack algorithm

    • A property of algorithms
    • Stack algorithm: the set of pages in memory for n frames is always a subset of the set of pages that would be in memory with n+1 frames
    • Stack algorithms do not suffer from Belady's anomaly
    • Both optimal algorithm and LRU algorithm are stack algorithm
  • Thrashing

    • If it is spending more time paging than executing, called thrashing
    • Preconditions:
      1. global replacement
      2. demand paging
      3. multiprogramming
    • How it unfolds:
      • processes queued for I/O to swap (page fault)
      • low CPU utilization
      • OS increases the degree of multiprogramming
      • new processes take frames from old processes
      • more page faults and thus more I/O
      • CPU utilization drops even further
    • Remedies: Working-set model, Page-fault frequency
    • Working-set model (from the system's point of view)
    • Page-fault frequency (from the process's point of view)
      • measures and controls the page-fault rate to prevent thrashing
      • Establish upper and lower bounds on the desired page-fault rate of a process
      • If page fault rate exceeds the upper limit
        • allocate another frame to the process
      • If page fault rate falls below the lower limit
        • remove a frame from the process

File system

  • Demand paging
    • Not all of a process's pages are brought into memory; a lazy swapper (also called a pager) loads only the pages that are actually used
    • Very favorable for sequential access (spatial locality)

RAID

  • Redundant Array of Independent/Inexpensive Disks (both readings of the \(I\) are in use)
    provide reliability via redundancy
    improve performance via parallelism

RAID 0

  • non-redundant striping (interleaving)
  • Performance
    • read: N times
    • write: N times
  • Pros and Cons
    • pros: used where access performance matters but reliability does not
    • Cons: failure of any single disk means data loss

RAID 1

  • Mirrored disks
  • Performance
    • read: N times
    • write: same
  • Pros and Cons
    • pros: high fault tolerance
    • Cons: the most backup disks → the most expensive!

RAID 2

  • Hamming code (single-error-correcting, double-error-detecting code)
  • Performance
    • read: (no reference found)
    • write: slower; the ECC (error-correcting code) disks must be rewritten
  • Pros and Cons
    • pros:
      • one failed disk can be recovered (with two failures the errors can only be detected, not corrected)
      • better space utilization than RAID 1 (75% overhead)
    • Cons:
      • compared with RAID 3, availability is the same but the cost is higher
      • no longer used in practice

RAID 3

  • Bit-level striping / Bit-interleaved parity
  • Performance
    • read: n-1 times
    • write: same (Bottleneck: parity disk)
  • Pros and Cons
    • pros:
      • better space utilization (33% overhead)
      • a single large sequential I/O is very fast
    • Cons:
      • many small scattered I/Os are slow
      • overhead of storing/computing the parity bits
      • the parity disk wears out quickly
      • no longer used in practice

RAID 4

  • Block-level striping
  • Performance
    • read: n-1 times
    • write: same (Bottleneck: parity disk)
  • Pros and Cons
    • pros:
      • RAID 4 has higher throughput: the controller does not need to read many disks to reconstruct a block
    • Cons:
      • the parity disk wears out quickly
      • no longer used in practice

RAID 5

  • Distributed block-interleaved parity
  • Performance
    • read: n times
    • write (for a small write, two methods):
      • reconstruct-write: read the other n−2 data blocks, then write the new data and new parity ⇒ n/((n−2)+2) = 1, i.e., the same as a single disk
      • read-modify-write: read the old data and old parity, write the new data and new parity ⇒ n/(2+2) = n/4
  • Pros and Cons
    • pros:
      • fixes the problem that frequent reads/writes to a single parity disk wear that disk out
      • avoids the bottleneck on parallel writes
      • parity blocks are distributed across the disks instead of being concentrated on one disk
      • better space utilization than mirroring
    • Cons:
      • recovering damaged data takes longer
      • at least three disks are needed to build RAID 5
      • the level of protection is lower than mirroring
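
A note on how the read-modify-write update above works (standard RAID parity arithmetic, sketched in C):

    /* Small-write parity update for RAID 4/5 (read-modify-write):
     * new parity = old parity XOR old data XOR new data,
     * so only the old data block and the old parity block have to be read. */
    unsigned char raid_new_parity(unsigned char old_parity,
                                  unsigned char old_data,
                                  unsigned char new_data) {
        return old_parity ^ old_data ^ new_data;
    }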

RAID 6

  • P+Q Dual Parity Redundancy
  • Performance
    • read: depends on the implementation
    • write: n/6 times
  • Pros and Cons
    • pros:
      • the data can be recovered even if two disks fail at the same time
      • for data with very high safety requirements
    • Cons:
      • another level of parity
      • much more computation on every write
      • lower performance

RAID 01


  • Stripe then replicate
  • Pros and Cons
    • pros:
      • safer than RAID 0, better read/write performance than RAID 1
    • Cons:
      • Taking 4 disks as an example, RAID 01 can tolerate losing two disks from the same (striped) group and keep running; if the two failed disks are on different sides, the array may be destroyed (it depends on which two disks fail).

RAID 10


  • Replicate then stripe
  • Pros and Cons
    • pros:
      • safer than RAID 0+1, better read/write performance than RAID 1
    • Cons:
      • Taking 4 disks as an example, two pairs of disks are first combined into RAID 1, and the two pairs are then combined into RAID 0. Up to two of the 4 disks may fail and the array keeps working, but if the two failures are in the same (mirrored) pair, the RAID is destroyed.
  • References:

Disk scheduling algorithm

  • FCFS
    • fair
  • SSTF (shortest seek time first)
    • common, but not optimal, and not fair
  • SCAN (elevator algorithm)
    • no starvation problem
  • C-SCAN
    • more uniform waiting time
  • LOOK
  • C-LOOK
  • Traps / things to watch out for in this type of exam problem

    • If the problem gives only the number of tracks n (e.g. 500) and not the range, convert it yourself into the range 0 to n−1 (e.g. 0–499)
    • For the “C” variants, remember to count the distance of sweeping the head back
    • Without LOOK, the head must travel all the way to the end
    • see past-exam CH9 problems 7, 15 (p9-35, 9-39)
  • Disk scheduling algorithm summary (a small calculation sketch follows this list)

    • Disk scheduling performance depends heavily on factors such as the number of head movements, the seek time, and the rotational latency, so no single algorithm can be called the best
    • In general, though, LOOK and SSTF perform better than the other algorithms
    • ref
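
A small sketch (mine; the request queue and head position are the classic textbook example, not from these notes) that computes the total head movement for FCFS and SSTF, which is the quantity these exam problems usually ask for:

    #include <stdio.h>
    #include <stdlib.h>

    /* Total head movement when requests are served in the given (FCFS) order. */
    int fcfs(const int *req, int n, int head) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += abs(req[i] - head);
            head = req[i];
        }
        return total;
    }

    /* Total head movement when the closest pending request is served next (SSTF). */
    int sstf(const int *req, int n, int head) {
        int done[64] = {0};   /* assumes n <= 64 in this sketch */
        int total = 0;
        for (int served = 0; served < n; served++) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (!done[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                    best = i;
            done[best] = 1;
            total += abs(req[best] - head);
            head = req[best];
        }
        return total;
    }

    int main(void) {
        int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
        int n = sizeof(req) / sizeof(req[0]);
        printf("FCFS: %d\n", fcfs(req, n, 53));   /* 640 with the head at 53 */
        printf("SSTF: %d\n", sstf(req, n, 53));   /* 236 with the head at 53 */
        return 0;
    }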

Distributed systems

  • 定義: A distributed system is a collection of loosely coupled nodes interconnected by a communication network.

Advantages of Distributed Systems

  • Resource sharing
  • Computation Speedup
    • the computation can be split into subproblems and processed in parallel
    • called load sharing or job migration
  • Reliability
    • if one site fails, the remaining sites can keep running
  • Communication
    • a single device can stay small and rely on communication links to remote devices for the functions it needs

Types of Network-based Operating Systems

It may be faster to just read someone else's write-up?

  • Network operating system
    • remote login
      • ssh
    • remote file transfer
      • sftp (secure file transfer protocol)
      • a user can only download data from one place at a time, which may cause data-inconsistency problems
  • Distributed Operating Systems
    • Data Migration
      • pay attention to whether the file format is compatible across the different systems
      • if the file is too large, a demand-paging-like approach can be used to transfer/modify only the parts that are needed
    • Computation Migration
      • A sends the parameters to B via an RPC (remote procedure call); B computes and returns the result to A
      • alternatively, A sends a message describing the desired task; B creates a process Q for it, computes, and sends the result back to A
    • Process Migration
      Possible reasons for using it:
      • Load balancing
      • computation speedup
      • Hardware preference
      • Data access
  • Communication protocols
    • OSI model:
    • physical layer
    • link layer
    • network layer
    • transport layer
    • session layer
    • presentation layer
    • application layer
    • uh, this is all computer-networks material, so I'll skip it for now
  • Design Issues
    • transparent: ideally the user accesses remote resources the same way as local resources
    • scalability: rather abstract, e.g. if the CPU is too fast the I/O can't keep up, so CPU speed has to be tuned carefully?
  • Distributed File Systems

Must-read before the exam

The following is my impression of what Jerry (Prof. 周志遠) likes to ask about:

Explain each step of the following mechanism

hardware interrupt

  • a signal that hardware can send to the CPU at any time; interrupts are dispatched via a table lookup, which is fast; passive (the user program gets interrupted)

software interrupt (Trap)

  • raised by software when an error occurs (segmentation fault, divide by zero, ...) or when the user needs an OS service (system call); dispatched with a case/switch (if-else), so the number of cases is flexible; active (initiated by the program itself)

context switch

thrashing & working set

  • If it is spending more time paging than executing, called thrashing
    • Preconditions:
      1. global replacement
      2. demand paging
      3. multiprogramming
    • How it unfolds:
      • processes queued for I/O to swap (page fault)
      • low CPU utilization
      • OS increases the degree of multiprogramming
      • new processes take frames from old processes
      • more page faults and thus more I/O
      • CPU utilization drops even further
    • Remedies: Working-set model, Page-fault frequency
    • Working-set model (from the system's point of view)
    • Page-fault frequency (from the process's point of view)
      • measures and controls the page-fault rate to prevent thrashing
      • Establish upper and lower bounds on the desired page-fault rate of a process
      • If page fault rate exceeds the upper limit
        • allocate another frame to the process
      • If page fault rate falls below the lower limit
        • remove a frame from the process
    • working set

Interrupt IO

DMA

  • Be clear about which component does the work at each step (device driver, disk controller, DMA controller)
Questions asked by email

7/2

Question 1:
What is the relationship between the safety algorithm and the banker's algorithm?
Below is a slide from the professor's lecture; what I understand from it is that
the safety algorithm tries to find a safe sequence,
and the banker's algorithm uses the result of the safety algorithm (safe sequence found or not) to decide whether to grant the requested resources to the requesting process.
Also, since the safety algorithm has time complexity \(O(m \times n^2)\) and the banker's algorithm merely uses its result to decide whether to grant resources, both have time complexity \(O(m \times n^2)\), where m is the number of resource types and n is the number of processes. Is this understanding correct?

Jerry Chou:

There is no need to pay too much attention to the safety algorithm here.
It is just the banker's algorithm describing its own procedure as two steps
and calling the step responsible for finding a safe sequence the safety algorithm.
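
For reference, a sketch (mine, not from the email) of the safety algorithm being discussed; the nested scans over the n processes, each checking m resource types, are where the \(O(m \times n^2)\) comes from. Available, Allocation and Need = Max − Allocation follow the usual banker's-algorithm notation:

    #include <stdbool.h>

    #define N 5   /* number of processes (example size) */
    #define M 3   /* number of resource types (example size) */

    bool is_safe(int available[M], int need[N][M], int allocation[N][M]) {
        int work[M];
        bool finish[N] = {false};

        for (int r = 0; r < M; r++)
            work[r] = available[r];

        for (int pass = 0; pass < N; pass++) {          /* at most n passes ...      */
            bool advanced = false;
            for (int i = 0; i < N; i++) {               /* ... over the n processes  */
                if (finish[i])
                    continue;
                bool can_run = true;
                for (int r = 0; r < M; r++)             /* ... checking m resources  */
                    if (need[i][r] > work[r]) { can_run = false; break; }
                if (can_run) {
                    for (int r = 0; r < M; r++)         /* Pi finishes, returns its resources */
                        work[r] += allocation[i][r];
                    finish[i] = true;
                    advanced = true;                    /* Pi joins the safe sequence */
                }
            }
            if (!advanced) break;
        }
        for (int i = 0; i < N; i++)
            if (!finish[i]) return false;               /* no safe sequence exists */
        return true;
    }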

Question 2:
The answer says that to avoid deadlock, the total of the maximum demands for the resource must be < m + n,
where m is the number of resource instances and n is the number of processes.
This conclusion is not very intuitive to me, so I'd like to ask why the answer is written that way.



Jerry Chou:
His derivation is probably a simplified form obtained after rearranging.

My reasoning goes like this:
for this problem you should look at the worst case of the banker's algorithm.
The worst case is when every process is already holding (max demand − 1) resource instances,
because a process holding its full max demand can run to completion, so being exactly one instance short is the worst situation.

By the banker's algorithm, if a safe sequence can still be found in this worst case, at least one resource instance must be left over,
so you get: \(m - n \times (3-1) > 0\)
Rearranging gives
=> \(m - 3n + n > 0\)
=> \(m + n > 3n\)
and 3n is exactly sum(max_i) => consistent with your answer.

I think the more intuitive way to write it is
\(m > \sum_{i=1}^{n} (max_i - 1)\)
where the left side is the total number of resource instances
and the right side is the maximum amount of resources held in the worst case,
so the resources must be enough to cover the worst case.

So what this problem is really testing is an understanding of the worst case.

thx

7/14

When drawing the table for the FIFO algorithm on an exam, do I need to worry about the vertical order of the numbers?
My personal habit is to put the newly arrived / most recently referenced page number at the bottom,
as in the part I circled in answer (1) in the figure,
so my layout differs a little from the official answer.
I remember that when I took the professor's course, the order was not required,
but I'm not sure whether teachers at other schools see it the same way.
Would it be safer to follow the answer's format on the exam?

Jerry Chou:
The order is only there to make the calculation easier for you to follow or work out.
Just as your habit is to put the newly arrived / most recently referenced page number at the bottom,
other people can naturally have different habits or preferences.

Unless the problem explicitly defines an order,
I still think there is no need to worry about the vertical order.

Also, for answer (2) in the same figure: for the second-chance algorithm, the write-ups I found online
all set the reference bit of a page to 0 when it first enters the queue,
but here it is written as 1.
The dinosaur book doesn't give the concrete procedure either, only the concept.
I'd like to ask what the answer should actually be.

Jerry Chou:

I think that is just a difference in how the second-chance label is defined to be 1 or 0;
it does not affect the computed result or the definition of the algorithm, so in theory no points should be deducted for it.

What does matter is the order issue: the positions may implicitly represent physical memory frame positions,
in which case, when a page is replaced, you cannot arbitrarily move the other pages around;
as the answer shows, the new page must be placed in the position of the replaced page
(the initial placement, however, is still unrestricted: 1, 2, 3 or 3, 2, 1 in any order is fine).