# OS Midterm Past Exams

Heads-up: I will switch this shared note to view-only during the exam.
https://ithelp.ithome.com.tw/articles/10229521

Course name: Operating Systems
Course type: Required
Instructor: 薛智文
College: College of Electrical Engineering and Computer Science
Department: Computer Science and Information Engineering
Exam date (Y/M/D): 2019/04/17
Time limit (minutes): 180

Questions:

`This is an open-book and open-own-note exam. Please close copied notes and do answer with your own words in the order of question number in EXACTLY one given answer sheet. Make your own assumption if the question is not clear enough. You can answer in Chinese and keep these papers. Have fun.`

1. [10%] Describe two (new) OS features which can be improved by hardware, software, and theory efforts for better performance. Elaborate on the efforts as much as possible.
    - Program execution
    - Resource allocation
    ![](https://i.imgur.com/0x5fcId.png)

---

2. [10%]
    - For some OS, there is an extra state, ZOMBIE, between RUNNING and TERMINATED. Why? [5%]
    - How can we have the same function without the ZOMBIE state? [5%]
    > - The ZOMBIE state lets a parent fetch its child's termination status after the child exits.
    > - Without the ZOMBIE state, the parent may miss its children's exit status. Ways to avoid leaving zombies (a sketch follows below):
    >     1. Use the wait() system call.
    >     2. Ignore the SIGCHLD signal.
    >     3. Install a SIGCHLD signal handler.
    > - https://www.geeksforgeeks.org/zombie-processes-prevention/
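A minimal user-space sketch (my own, not from the exam) of the approaches above: the parent either reaps children with waitpid() from a SIGCHLD handler, or tells the kernel to discard the status by ignoring SIGCHLD, so no zombie lingers.

```c=
/* Sketch: two common ways to avoid ZOMBIE children (assumes POSIX). */
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap(int sig) {
    (void)sig;
    /* Reap every terminated child; the kernel can then free its PCB. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

int main(void) {
    /* Option 1: install a SIGCHLD handler that calls waitpid(). */
    signal(SIGCHLD, reap);
    /* Option 2 (alternative): tell the kernel we never want the status. */
    /* signal(SIGCHLD, SIG_IGN); */

    if (fork() == 0) {        /* child */
        printf("child %d exiting\n", getpid());
        _exit(0);
    }
    sleep(1);                 /* parent keeps running; no zombie remains */
    return 0;
}
```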
---

3. [10%] Roughly, how many processes and threads, respectively, are there after Ubuntu boots?
    - https://www.thegeekstuff.com/2011/02/linux-boot-process/
    - https://danielmaker.github.io/blog/linux/start_kernel.html
    - 3 /

---

4. [10%] Strassen's algorithm can speed up a matrix multiplication with 7 smaller matrix multiplications. For many matrices stored in a file on a single-processor computer, if we implement a matrix multiplication by Strassen's algorithm with 7 threads of smaller matrix multiplications, discuss the speedup scenarios. Hint: it might be faster or slower.
    - Slower.
    - In terms of speed of computation, no: things will actually slow down because of the overhead of managing the threads on a single CPU.
    - https://stackoverflow.com/questions/47703/multiple-threads-and-performance-on-a-single-cpu

---

5. [15%] Prove or disprove that preemptive shortest job first scheduling (SJTF) always has a shorter average waiting time than the non-preemptive one (SJF).
    > - Prove.
    > - Waiting Time = Completion Time − Arrival Time − CPU Burst Time, so for a fixed set of jobs it suffices to compare total completion times.
    > - Suppose there are $n$ processes. SJTF may split their bursts into $k \ge n$ pieces, executed in order $x_1, x_2, \dots, x_k$; SJF runs whole bursts in order $x'_1, x'_2, \dots, x'_n$.
    > - In SJF, the completion times of the dispatched processes are $x'_1$, $x'_1 + x'_2$, $x'_1 + x'_2 + x'_3$, ..., $x'_1 + x'_2 + \cdots + x'_n$, so the total completion time is $n\,x'_1 + (n-1)\,x'_2 + \cdots + 2\,x'_{n-1} + x'_n$.
    > - In SJTF, the $i$-th process to complete finishes at $y_1 + y_2 + \cdots + y_i$, where $y_i$ is the CPU time executed between the $(i-1)$-th and the $i$-th completion (e.g. $y_1 = x_1 + x_2$), so the total completion time is $n\,y_1 + (n-1)\,y_2 + \cdots + 2\,y_{n-1} + y_n$.
    > - Comparing the two weighted sums then gives the result, e.g. $y_n \le x'_n$.
    - http://ryanlei.blogspot.com/2008/11/os-sjf-is-optimal-scheduling.html

---

6. [15%] Describe a mouse event in as much detail as you can, from being generated to being shown on the desktop of a virtualized OS environment, such as Ubuntu in VirtualBox on Windows 10. Hint: there should be a real mouse interrupt in Windows 10 and a virtualized one in Ubuntu.

7. [15%] For the following code related to the Linux kernel list structure, describe how an entry of the list of msg_sender can be accessed, especially what the two 0's mean! [10%] Why was a prefetch statement previously included in the list_for_each_entry macro removed? [5%]

```c=
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

#define list_entry(ptr, type, member) \
    container_of(ptr, type, member)

#define container_of(ptr, type, member) ({                  \
    const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
    (type *)( (char *)__mptr - offsetof(type, member) );})

#define list_for_each_entry(pos, head, member)                     \
    for (pos = list_entry((head)->next, typeof(*pos), member);     \
         &pos->member != (head);                                   \
         pos = list_entry(pos->member.next, typeof(*pos), member))

struct list_head {struct list_head *next, *prev;};

struct msg_sender {
    struct list_head list;
    struct task_struct *tsk;
};

struct list_head *tmp;
struct msg_sender *mss;
mss = list_entry(tmp, struct msg_sender, list);
```

8. [25%]
    - Consider a process synchronization problem in which we have N people competing for M chairs, and each chair can be used by one person at a time.
    - Let the competition go by rounds. That is, N people compete for M chairs at the beginning of each round, and the M winners release their chairs at the end of each round so that the next round starts.
    - Please use ==semaphores== to write programs for the entry and exit sections of the M people so that every person will win at least W times when competing for the chairs in every R consecutive rounds.
    - Hint: M/N < W/R needs to be checked. An n-process barrier, where waiting processes are not released until the n-th one has arrived, and barrier_wait(barrier *b, int n) might be helpful. (A semaphore-based barrier sketch appears after the short-answer questions below.)

---

Statement Question: The bigger the time quantum, the less the average turnaround time.

Answer choices:
- ==No close reason found, I will provide one in exam paper.==
- As a trend it should get smaller and smaller, but there are some exceptions.
- If processes with long burst times come first, raising the time quantum can actually raise the overall turnaround time.
- No, it might have thrashing.
- No, it depends on the number of processors.
- Yes, because of Belady's anomaly.

![](https://i.imgur.com/KKQaWxl.png)

---

Statement Question: The more threading, the more performance.

Answer choices:
- ==No, it depends on the number of processors.==
- No close reason found, I will provide one in exam paper.
- Yes, because of Belady's anomaly.
- No, it might have thrashing.

---

Short-Answer Question:
- [20 points] A. What are the evaluation criteria of the mask ordering system for COVID-19? List 5 of them, preferably not from the textbook.
    > - Fairness
    > - Everyone can get masks
    > - The time spent should not be too long
- [20 points] B. Based on optimizing the top 2 criteria of A, design a scheduling algorithm for making reservations.
- [20 points] C. Where might the system have shared variables and need to be locked sometimes? Write pseudo code to show two of the shared variables and the synchronization using a semaphore or mutex.
- [20 points] D. Provide a correct solution of Homework 3. What is wrong in the barrier implementation? Solution?
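For question 8 and short-answer D, here is a hypothetical sketch of the n-process barrier mentioned in the hint, built only from counting semaphores (a two-turnstile design; `barrier_t` and the field names are my own, not the official homework interface). It only synchronizes round boundaries; counting how many times each person has won over the last R rounds would still be extra shared state under the mutex.

```c=
/* Sketch: a reusable n-process barrier built from POSIX semaphores.
 * Two "turnstiles" keep a fast process from lapping the barrier. */
#include <semaphore.h>

typedef struct {
    int count;          /* how many have arrived in this phase */
    sem_t mutex;        /* protects count */
    sem_t turnstile1;   /* released n times when the n-th process arrives */
    sem_t turnstile2;   /* released n times when the n-th process leaves  */
} barrier_t;

void barrier_init(barrier_t *b) {
    b->count = 0;
    sem_init(&b->mutex, 0, 1);
    sem_init(&b->turnstile1, 0, 0);
    sem_init(&b->turnstile2, 0, 0);
}

void barrier_wait(barrier_t *b, int n) {
    /* phase 1: wait until all n processes have arrived */
    sem_wait(&b->mutex);
    if (++b->count == n)
        for (int i = 0; i < n; i++) sem_post(&b->turnstile1);
    sem_post(&b->mutex);
    sem_wait(&b->turnstile1);

    /* phase 2: wait until all n processes have left phase 1 */
    sem_wait(&b->mutex);
    if (--b->count == 0)
        for (int i = 0; i < n; i++) sem_post(&b->turnstile2);
    sem_post(&b->mutex);
    sem_wait(&b->turnstile2);
}
```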
---

Course name: Operating Systems
Course type: Required (sophomore)
Instructor: 郭大維
College: College of Electrical Engineering and Computer Science
Department: Computer Science and Information Engineering
Exam date (Y/M/D): 2018/05/23
Time limit (minutes): 180

Questions:

Operating Systems Mid-Term, Spring 2018. The exam is 180 minutes long. The total score is 112 pts. Please read the questions carefully.

1. Terminologies (24pts).
    a. init (Hint: Booting; a UNIX process)
    - A user process that initializes system processes, e.g., various daemons and login processes, after the kernel has been bootstrapped.
    b. Dual-Mode Operations (Hint: Mode Bit)
    - Privileged and user-mode instructions, where privileged instructions are machine instructions that may cause harm.
    c. Memory Protection (Hint: Hardware Protection)
    - Prevent a user program from modifying the code or data structures of either the OS or other users!
    d. A Modular Kernel
    - A set of core components plus loadable modules: the modules are layer-like, while the primary module is microkernel-like.
    e. Asymmetric Addressing for Direct Communication
    - In direct communication, the sender must specify the receiver ID, but the receiver does not need to specify the sender ID. For example, Send(P, msg), Receive(id, msg).
    f. Remote Procedure Call (RPC)
    - A way to abstract the procedure-call mechanism for use between systems connected by a network.
    g. FIFOs of UNIX
    - Named pipes of UNIX. A FIFO is a file in the file system and is created by mkfifo(). It offers half-duplex and byte-oriented transmission.
    h. Implicit Threading
    - Transfer the creation and management of threading from the application developers to compilers and run-time libraries.

2. For the implementation of the UNIX process control block PCB[], we have proc[] and .u, which contain different attributes for different purposes (e.g., those that must be known regardless of whether the process is running). Are they in the kernel or the user address space? In which one of proc[] and .u does the signal disposition reside? Where will thread-specific data / thread-local storage reside (in the kernel or user address space)? (8pts)
    (a) Kernel address space: proc[]; user address space: .u
    (b) .u: signal disposition
    (c) User address space

3. Please answer the following questions regarding operating systems: (21pts)
    a. When time sharing appears as a feature for operating systems, please tell us two emerging features/subsystems, besides job synchronization. (6pts)
    b. Virtual machine software provides an interface that is identical to the underlying bare hardware. I/O over a virtual machine could be slow or fast, depending on the implementation. Please give me a case in which I/O is fast, compared to I/O directly over physical devices. (5pts)
    c. There are operating system services such as program execution, I/O operations, accounting, resource allocation, and error detection. Which ones belong to those for user convenience? Which ones belong to those for system efficiency? (10pts)
    (a) On-line file systems, virtual memory, sophisticated CPU scheduling.
    (b) I/O can be fast over a hard disk when the corresponding disk is emulated on top of DRAM.
    (c) User convenience: program execution, I/O operations, error detection; system efficiency: accounting, resource allocation.

4. Multithreading is important for multi-core architectures. Please answer the following questions: (12pts)
    a. Task parallelism is one typical parallelism type. Please give one example of task parallelism. (5pts)
    b. Please provide two challenges in multicore programming. (4pts)
    c. Why are the context-switching costs of kernel-level threads higher than those of user-level threads? (3pts)
    (a) Run merge sort over the cores by partitioning the data (a pthread sketch follows below).
    (b) Identifying tasks (dividing activities), balance, data splitting, data dependency, testing and debugging.
    (c) The context-switching cost is a little higher because the kernel must do the switching.
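A hypothetical pthread sketch of the answer to 4(a): partition the data, sort each partition in its own thread (so each can run on its own core), then merge. All names here are illustrative, not from the exam.

```c=
/* Sketch: parallelism by data splitting — each thread sorts its own
 * partition of the array, then the partitions are merged. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define LEN 8

typedef struct { int *base; size_t n; } part_t;

static int cmp_int(const void *a, const void *b) {
    return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}

static void *sort_part(void *arg) {          /* runs on its own core if one is free */
    part_t *p = arg;
    qsort(p->base, p->n, sizeof(int), cmp_int);
    return NULL;
}

int main(void) {
    int a[LEN] = {7, 3, 9, 1, 8, 2, 6, 5}, out[LEN];
    part_t left  = { a,           LEN / 2 };
    part_t right = { a + LEN / 2, LEN - LEN / 2 };
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sort_part, &left);   /* data splitting: two halves */
    pthread_create(&t2, NULL, sort_part, &right);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* merge the two sorted halves */
    size_t i = 0, j = 0, k = 0;
    while (i < left.n && j < right.n)
        out[k++] = (left.base[i] <= right.base[j]) ? left.base[i++] : right.base[j++];
    while (i < left.n)  out[k++] = left.base[i++];
    while (j < right.n) out[k++] = right.base[j++];

    for (k = 0; k < LEN; k++) printf("%d ", out[k]);
    printf("\n");
    return 0;
}
```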
---

5. Compare preemptive scheduling and non-preemptive scheduling. Please answer the following questions: (21pts)
    - a. In what circumstance will non-preemptive scheduling be better than preemptive scheduling? (4pts)
        - Non-preemptive scheduling could be better than preemptive scheduling when all jobs are ready together at time 0.
    - b. The selection of CPU scheduling algorithms should consider the scheduling criteria. If high throughput is the criterion, which one is the best among shortest job first, FIFO, and round-robin scheduling? You must provide an answer to get credits. (6pts)
        - (b) Shortest job first.
    - c. Priority scheduling is a framework for scheduling. Please design a priority assignment formula that gives a lower priority to a job when it has consumed more CPU time. (5pts)
        - (c) priority = 1 / consumed-CPU-time
    - d. For round-robin scheduling, if a lower average turnaround time is the criterion, shall we prefer a lower time slice? Suppose that all jobs are ready at time 0. You must provide an answer to get credits. (6pts)
        - (d) A lower time slice is bad for a low average turnaround time.

---

6. Processor affinity is somehow against push migration. Please explain why. (6pts)
    - It is because of the cost of cache repopulation.

7. Please explain the progress assumption concerns for virtualization. (6pts)
    - It is assumed that a certain amount of progress should be observed for a system in a given amount of time under virtualization.

8. Consider solutions to the Critical Section Problem. Please answer the following questions. (14pts)
    - a. Consider a computing server that services jobs from three FIFO queues in a round-robin way whenever there are jobs in any non-empty queue. Please use semaphores to implement your solution with some pseudo code. (6pts)
    - b. What is the difference between a binary semaphore and a condition variable? (3pts)
    - c. The monitor-based implementation of the Dining Philosophers problem in the textbook implements the two-chopsticks-per-time idea. Will it have the starvation problem? You must provide an explanation to receive credits. (5pts)
    - (a) We let the dispatcher of each job queue wait on a shared semaphore whenever it has a pending job. Let the semaphore have a FIFO queue. (A sketch with POSIX semaphores follows below.)
    - (b) A condition variable does nothing with a signal request if no job is waiting on the variable.
    - (c) Yes, the starvation problem happens to a philosopher whenever his two neighboring philosophers take turns eating while he is waiting.
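One possible reading of 8(a), sketched with POSIX semaphores and mutexes (my own names and structure, not the official answer): producers append to per-queue FIFO lists and bump one shared counting semaphore; a single server thread sleeps on that semaphore when every queue is empty and otherwise scans the queues in round-robin order.

```c=
/* Sketch of 8(a): a server services three FIFO queues round-robin, sleeping on
 * a shared counting semaphore whenever all queues are empty (illustrative only). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

#define NQ 3

typedef struct job { int id; struct job *next; } job_t;
typedef struct { job_t *head, *tail; pthread_mutex_t lock; } queue_t;

static queue_t q[NQ];
static sem_t pending;                     /* total number of queued jobs */

void queues_init(void) {
    sem_init(&pending, 0, 0);
    for (int i = 0; i < NQ; i++) {
        q[i].head = q[i].tail = NULL;
        pthread_mutex_init(&q[i].lock, NULL);
    }
}

void submit(int which, int id) {          /* producer side */
    job_t *j = malloc(sizeof *j);
    j->id = id;
    j->next = NULL;
    pthread_mutex_lock(&q[which].lock);
    if (q[which].tail) q[which].tail->next = j; else q[which].head = j;
    q[which].tail = j;
    pthread_mutex_unlock(&q[which].lock);
    sem_post(&pending);                   /* wake the server if it is asleep */
}

void *server(void *arg) {
    int next = 0;                         /* where the round-robin scan resumes */
    (void)arg;
    for (;;) {
        sem_wait(&pending);               /* block while every queue is empty */
        for (int k = 0; k < NQ; k++) {
            int i = (next + k) % NQ;      /* scan queues in round-robin order */
            pthread_mutex_lock(&q[i].lock);
            job_t *j = q[i].head;
            if (j) {
                q[i].head = j->next;
                if (!q[i].head) q[i].tail = NULL;
                pthread_mutex_unlock(&q[i].lock);
                printf("serving job %d from queue %d\n", j->id, i);
                free(j);
                next = (i + 1) % NQ;      /* be fair: resume after this queue */
                break;
            }
            pthread_mutex_unlock(&q[i].lock);
        }
    }
    return NULL;
}
```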
---

Course name: Operating Systems
Course type: Required
Instructor: 薛智文
College: College of Electrical Engineering and Computer Science
Department: Computer Science and Information Engineering
Exam date (Y/M/D): 100/11/10
Time limit (minutes): 180
Reward for uploading: Yes (if not clearly indicated, no reward is given)

Questions:

1. When describing message passing, what is the difference between synchronous or asynchronous and blocking or non-blocking? [5%]
    - Blocking: the call cannot return until the message has reached the receiver.
    - Non-blocking: the call returns without waiting for the message to arrive.
    - In the synchronous model, the sender blocks until the receiver picks up the message. The two have to rendezvous.
    - In the asynchronous model, the sender drops the message and continues with its own business.

2. For a system of multiprogramming or multitasking, what kind of applications might be more suitable, respectively, in each environment? Why? [10%]

3. What is the purpose of an upcall? [5%] If your OS, such as Linux, does not support upcalls, how do you implement the same function as an upcall? [5%]
    - Upcall: lets the kernel pass information up to a user-level thread (before a user thread is about to block, a new thread is created to run the upcall handler).

4. Virtualization enables multiple OSes running simultaneously on a physical machine. What is "system consolidation" in a virtualized environment? [5%] In what situation do we still need one OS running on a virtualized environment? [5%]

5. Excluding caching, describe 2 examples where the operating system and the computer architecture help each other to improve performance. [10%]

6. What are the differences between kernel threads and user threads? [5%] In what situation would applications using kernel threads always have a better performance than using user threads? [5%]
    - When blocking system calls are used.
    - On a single-core system that needs real-time interaction and has many I/O operations: with a many-to-one model, one user thread doing I/O can block the kernel thread, and the whole program gets stuck even though the system has to stay interactive.

7.
    - In what situations is a thread pool not beneficial? [5%] Give an example application for each situation. [5%]
    - When you have a single thread that only needs to do a single task for the life of your program.
    - Thread-pool threads are not suitable for long-running operations, as this can easily lead to thread starvation.
    - https://stackoverflow.com/questions/22663194/are-there-any-disadvantages-of-using-a-thread-pool

8.
    - The shortest job first scheduling algorithm has the optimal average waiting time. Why is it not usually applicable? [5%]
        - It has a starvation problem.
    - What can we do if we would like to achieve the shortest average waiting time? [5%]
        - Use SJTF (preemptive SJF).

9.
    - Why does the timestamp-based protocol ensure conflict serializability? [5%]
    - Why does it also ensure freedom from deadlock? [5%] Hint: what causes conflicting operations, and which of the four deadlock conditions can the protocol break?

10. The following code with semaphores can be used to implement a monitor for synchronization, where F is an external procedure and x is a condition variable. Explain the purposes of using the semaphores next [5%] and x-sem [5%].

```c=
semaphore mutex;    // (initially = 1)
semaphore next;     // (initially = 0)
int next-count = 0;
semaphore x-sem;    // (initially = 0)
int x-count = 0;

wait(mutex);
    ...
    body of F;
    ...
if (next-count > 0)
    signal(next);
else
    signal(mutex);

// x.wait():
x-count++;
if (next-count > 0)
    signal(next);
else
    signal(mutex);
wait(x-sem);
x-count--;

// x.signal():
if (x-count > 0) {
    next-count++;
    signal(x-sem);
    wait(next);
    next-count--;
}
```

11. The following program uses a monitor to solve the dining-philosophers problem. Explain why it might have a starvation problem. [5%] How to solve it? [10%]
    - Hint: refer to question 10; no coding is required. Pseudo code might be.

```c=
monitor DP {
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
```

* Starvation: philosopher 0 may start eating first; philosopher 4 then runs test() but stays HUNGRY, so it waits. Meanwhile philosopher 3 starts eating. When philosopher 0 puts down, philosopher 4 still cannot get both chopsticks (philosopher 3 holds the other one), so 4 is not woken up. If philosopher 0 then passes test() again before 3 finishes eating, philosopher 4 starves. (One possible fix is sketched below.)
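One possible fix for the starvation scenario above (my own sketch, not the official answer; the exam says pseudo code is acceptable): give each philosopher a ticket when it becomes HUNGRY, and let test() refuse to start a philosopher eating ahead of a hungrier (older-ticket) neighbor. The monitor is emulated below with a pthread mutex plus one condition variable per philosopher.

```c=
/* Sketch: ticket-ordered dining philosophers to avoid the starvation case. */
#include <pthread.h>

#define N 5
enum { THINKING, HUNGRY, EATING };

static int  state[N];
static long ticket[N];                 /* seniority of each HUNGRY philosopher */
static long next_ticket;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  self[N];        /* self[i]: philosopher i waits here    */

static int left(int i)  { return (i + N - 1) % N; }
static int right(int i) { return (i + 1) % N; }

/* Neighbor j blocks i if j is eating, or hungry with an older ticket. */
static int blocks(int j, int i) {
    return state[j] == EATING ||
          (state[j] == HUNGRY && ticket[j] < ticket[i]);
}

static void test(int i) {
    if (state[i] == HUNGRY && !blocks(left(i), i) && !blocks(right(i), i)) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void dp_init(void) {
    for (int i = 0; i < N; i++) {
        state[i] = THINKING;
        pthread_cond_init(&self[i], NULL);
    }
}

void pickup(int i) {
    pthread_mutex_lock(&m);
    state[i]  = HUNGRY;
    ticket[i] = next_ticket++;         /* remember how long i has been hungry */
    test(i);
    while (state[i] != EATING)
        pthread_cond_wait(&self[i], &m);
    pthread_mutex_unlock(&m);
}

void putdown(int i) {
    pthread_mutex_lock(&m);
    state[i] = THINKING;
    test(left(i));                     /* a neighbor may now be allowed to eat */
    test(right(i));
    pthread_mutex_unlock(&m);
}
```

The hungry philosopher with the globally oldest ticket is blocked only by a neighbor that is already eating, so it is eventually served; the price is less concurrency, since conflicting neighbors now eat roughly in ticket order.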
---

Course name: Operating Systems
Course type: Required
Instructor: 薛智文
College: College of Electrical Engineering and Computer Science
Department: Computer Science and Information Engineering
Exam date (Y/M/D): 2008.11.6
Time limit (minutes): 180
Reward for uploading: Yes (if not clearly indicated, no reward is given)

Questions:

1. In the implementation of context switch in Linux, why was it changed from using a hardware instruction to a software function? [10%]
    - As the hardware approach saves almost all the register state, it is slow compared to the software approach. Performance being the main target of any operating system, software context switching is widely used in most modern operating systems. Moreover, the 64-bit architecture does not support hardware context switches and relies only on software context switching.
    - https://shobhitsharda.wordpress.com/2011/05/31/context-switch-software-vs-hardware-approach/

2.
    - What is the purpose of a system call? [5%]
        - A system call is a way for programs to interact with the operating system.
    - What kind of system calls are invoked if we issue the shell command insmod to insert a kernel module in Linux? [5%]

3. A virtual machine provides an interface identical to the underlying bare hardware, so that users feel they have a machine of their own. What features can the virtual machine not provide compared with a real machine? [10%]

4. In a real system, all buffer sizes are limited by physical memory. However, sometimes we do need an unbounded buffer. Give an example and reason [5%] and describe how it can be implemented. [5%] (A linked-list sketch appears at the end of this note.)

5. We use thread pools because usually it is slightly faster to service a request with an existing thread than to create a new thread. In what situation, with an example, do we not use a thread pool? Why? [10%]

6. For the following example, change only one number (<10) so that using SJF and preemptive SJF both have the same average waiting time. Figure out the number and the average waiting time accordingly. [10%]

| Process | Arrival Time | Burst Time |
| ------- | ------------ | ---------- |
| P1      | 0.0          | 7          |
| P2      | 2.0          | 3          |
| P3      | 4.0          | 1          |
| P4      | 5.0          | 4          |

7. Optimistic Concurrency Control (OCC) was proposed to do concurrency control for multiple transactions. The idea behind OCC is "do whatever you want without any locking of resources, optimistically; if there is any problem, check it later and roll back if necessary". Is OCC deadlock-free? Why? [5%] What might be the problem of using OCC? [5%]

8. The following is the bakery algorithm for multiple processes entering a critical section, where (a,b) < (c,d) means a < c, or a = c and b < d.
    - In what situation do two different processes get the same number? [5%]
        - When they are choosing at the same time.
    - What is the purpose of line 6, `while choosing[j] do no-op`? [5%]
        - To wait until the processes that are still choosing have finished picking their numbers.
    - What happens if we delete it? [5%]
        - Without this check: suppose 5 is currently the largest number, and P1 and P4 take numbers at the same time, but P4 finishes earlier. While P1 = 0 and P4 = 6, P4 enters the CS; after P1 obtains its number, P1 = P4 = 6 and P1 will also enter the CS.
        - With this check: P4 has to wait until P1 has finished drawing its number; before the comparison, both P1 and P4 hold number 6.
    - [Note: mutual exclusion, deadlock, progress, starvation...]
    ![](https://i.imgur.com/1i5Qw30.png)

9. Find a safe sequence, if any. [5%] Can a request (2,2,0) by P4 be granted? [5%] Find a request of P4 that can be granted at this stage. [5%] (A safety-algorithm sketch follows question 10 below.)

|    | Allocation (A B C) | Need (A B C) | Available (A B C) |
| -- | ------------------ | ------------ | ----------------- |
| P0 | 1 1 0              | 8 4 3        | 2 3 1             |
| P1 | 3 0 1              | 6 0 0        |                   |
| P2 | 3 0 2              | 0 4 0        |                   |
| P3 | 2 1 1              | 1 1 0        |                   |
| P4 | 0 0 2              | 4 3 1        |                   |

10. The following program uses two semaphores, first-delay and second-delay, to implement a critical region. Can any requirement of critical regions be relaxed so that it can be implemented using only one semaphore, excluding the mutex? [5%] Write a solution for it. [5%]

```c=
wait(mutex);
while (!B) {
    first-count++;
    if (second-count > 0)
        signal(second-delay);
    else
        signal(mutex);
    wait(first-delay);
    first-count--;
    second-count++;
    if (first-count > 0)
        signal(first-delay);
    else
        signal(second-delay);
    wait(second-delay);
    second-count--;
}
S;
if (first-count > 0)
    signal(first-delay);
else if (second-count > 0)
    signal(second-delay);
else
    signal(mutex);
```
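A hypothetical sketch of the Banker's safety algorithm for question 9, with the matrices copied from the table above. Running it prints a safe sequence if one exists; to decide whether a request such as P4's (2,2,0) can be granted, first pretend to allocate it (subtract from Available and Need, add to Allocation) and re-run the same check.

```c=
/* Sketch: Banker's safety check for question 9 (values copied from the table). */
#include <stdio.h>

#define NPROC 5
#define NRES  3

int alloc[NPROC][NRES] = {{1,1,0},{3,0,1},{3,0,2},{2,1,1},{0,0,2}};
int need [NPROC][NRES] = {{8,4,3},{6,0,0},{0,4,0},{1,1,0},{4,3,1}};
int avail[NRES]        = {2,3,1};

int main(void) {
    int work[NRES], finish[NPROC] = {0}, seq[NPROC], done = 0;
    for (int r = 0; r < NRES; r++) work[r] = avail[r];

    while (done < NPROC) {
        int progressed = 0;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int r = 0; r < NRES; r++)
                if (need[i][r] > work[r]) { ok = 0; break; }
            if (ok) {                       /* Pi can finish; reclaim its allocation */
                for (int r = 0; r < NRES; r++) work[r] += alloc[i][r];
                finish[i] = 1;
                seq[done++] = i;
                progressed = 1;
            }
        }
        if (!progressed) { printf("unsafe state\n"); return 1; }
    }
    printf("safe sequence:");
    for (int i = 0; i < NPROC; i++) printf(" P%d", seq[i]);
    printf("\n");
    return 0;
}
```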
---

Course type: Required
Instructor: 薛智文
College: College of Electrical Engineering and Computer Science
Department: Computer Science and Information Engineering
Exam date (Y/M/D): 2012/4/18
Time limit (minutes): 180
Reward for uploading: Yes (if not clearly indicated, no reward is given)

Questions:

---

1. When we design an OS, give an example where user goals and system goals conflict. [5%]
    - ![](https://i.imgur.com/1N1TesN.png)
    - Convenience <-> Efficiency

---

2. For a system call issued on a virtualized platform, what problem might occur? [5%] How to solve it? [5%]

3. What are the differences between a Web OS and a Cloud OS? [10%]

4. What is the purpose of an upcall? [5%] If your OS, such as Linux, does not support upcalls, how do you implement the same function as an upcall? [5%]

5. Virtualization enables multiple OSes running simultaneously on a physical machine. What is "para-virtualization"? [5%] In what situation do we still need only one OS running on a virtualized environment? [5%]

---

6. What are the differences between simulation, emulation, and implementation of a scheduler? [10%]
    - A ==simulation== is a system that behaves similarly to something else, but is implemented in an entirely different way. It provides the basic behaviour of a system, but may not necessarily adhere to all of the rules of the system being simulated. It is there to give you an idea about how something works.
    - An ==emulation== is a system that behaves exactly like something else and adheres to all of the rules of the system being emulated. It is effectively a complete replication of another system, right down to being binary compatible with the emulated system's inputs and outputs, but operating in a different environment from that of the original emulated system. The rules are fixed and cannot be changed, or the system fails.
    - https://softwareengineering.stackexchange.com/questions/134746/whats-the-difference-between-simulation-and-emulation

---

7. What are the differences between kernel threads and user threads? [5%] In what situation would applications using kernel threads not always have a better performance than using user threads? [5%]
    - ==User-level threads== are provided at the user level, supported on top of the kernel and without kernel intervention, so they run faster. The drawback is that if the kernel is single-threaded, blocking one thread blocks the whole process.
    - ==Kernel-level threads== are provided by the kernel; scheduling and management are slower than for user-level threads, but they avoid the drawback above. The advantage is that multiple threads can be scheduled on different CPUs, exploiting the benefit of a multiprocessor.
    - When we don't need to trap into the kernel.

    ```
    6. Kernel-level threads are superior to user-level threads in many ways. What is the main disadvantage of kernel-level threads? With OpenMP in program development, shall we prefer kernel-level or user-level threads? You must provide an explanation to receive any credits. (8pts)
    ```
    It is the context-switching cost. With OpenMP, we should prefer kernel-level threads because we look for parallelism to better utilize the multiple cores of a system.

---

8. When multiple threads run in parallel on a multicore system, would more threads for the same task always finish earlier, when the number of threads is less than the number of cores? Why? [10%]

---

9. Does the preemptive shortest job first scheduling algorithm have the optimal average waiting time? Prove it. [10%]
    - Yes.

---

10. Describe the problem in the following implementation of a critical section. [10%]
    ![](https://i.imgur.com/UK3ffT1.png)
    A: (I'm not entirely sure about this; corrections are welcome.)
    Suppose there are two processes $P_0, P_1$ with $P_0.i = 0, P_0.j = 1$ and $P_1.i = 1, P_1.j = 0$. In the initial state (all flags FALSE), when $P_0$ has just executed `turn = 1`, a context switch to $P_1$ occurs and the variable turn is changed back to 0. Because flag[0] is still FALSE, $P_1$ can enter the critical section. If we then context-switch back to $P_0$, turn has already been changed back to 0, so the test `turn == 1` fails and $P_0$ enters the critical section as well. This violates the restriction that only a single process may be inside the critical section at a time, so mutual exclusion does not hold. (For contrast, a standard two-process Peterson sketch appears after question 11.)

---

11. The following code with semaphores can be used to implement a monitor for synchronization, where F is an external procedure and x is a condition variable. Explain the purpose of using the semaphore ==next== [5%] and how the monitor can help with synchronization. [10%]
    ![](https://i.imgur.com/71tyFNJ.png)
    - Since a signaling process must wait until the resumed process either leaves or waits, an additional binary semaphore, next, is introduced, initialized to 0. The signaling processes can use next to suspend themselves. An integer variable next-count is also provided to count the number of processes suspended on next.
    - A monitor is a collection of condition variables and procedures combined together in a special kind of module or package. Processes running outside the monitor can't access the internal variables of the monitor but can call its procedures. Only one process at a time can execute code inside a monitor.
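The code for question 10 lives only in the image, but judging from the answer it is a flag[]/turn attempt in which turn is written before flag is set. For contrast, here is a minimal sketch of the standard two-process Peterson's algorithm (my own reconstruction, assuming sequential consistency, not the code in the exam): announcing intent in flag[i] first and then yielding turn to the other process restores mutual exclusion, because whoever wrote turn last is the one that waits.

```c=
/* Sketch: standard two-process Peterson's algorithm, for contrast with Q10.
 * (Assumes sequential consistency; real hardware needs memory barriers.) */
#include <stdbool.h>

volatile bool flag[2] = { false, false };  /* flag[i]: Pi wants to enter      */
volatile int  turn    = 0;                 /* whose turn it is to yield       */

void enter_region(int i) {                 /* i is 0 or 1; j is the other one */
    int j = 1 - i;
    flag[i] = true;                        /* announce intent first           */
    turn = j;                              /* then give priority to the other */
    while (flag[j] && turn == j)
        ;                                  /* spin only if Pj also wants in   */
}

void leave_region(int i) {
    flag[i] = false;
}
```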
---

## 2018 薛智文 Midterm

1. [10%] Blockchain is a distributed real-time operating system supporting trust. What OS features are important to the success of blockchain?

2. [10%] For some OS, there is an extra state, ZOMBIE, between RUNNING and TERMINATED. Why? [5%] How can we have the same function without the ZOMBIE state? [5%]

3. [10%] Rate Monotonic is an optimal periodic real-time scheduling algorithm. Describe its schedulability condition and optimality [5%], and since execution time is difficult to estimate, how do we handle the overloading problem? [5%] Hint: overloading means that some task misses its deadline.

4. [10%] Describe priority inversion and priority inheritance. [5%] What is the problem of priority inheritance? How to solve it? [5%]

5. [10%] When multiple threads run in parallel on a multicore system, would more kernel threads for the same task always finish earlier than using fewer threads? Why?

6. [15%] Describe the differences in data path when making a call to a normal function, a macro, a system call, an interrupt service routine, and a context switch. [10%] Any difference in a virtualized environment? [5%]

7. [15%] For the dining-philosophers solution using a monitor, prove that it has no problem of mutual exclusion, progress, and bounded waiting.

8. [15%] For the following code related to the Linux kernel list structure, describe how an entry of the list of msg_sender can be accessed, especially what the two 0's mean! [10%] Why was a prefetch statement previously included in the list_for_each_entry macro removed? [5%] (A user-space sketch of the container_of idea appears after question 9 below.)

```c=
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

#define list_entry(ptr, type, member) \
    container_of(ptr, type, member)

#define container_of(ptr, type, member) ({                  \
    const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
    (type *)( (char *)__mptr - offsetof(type, member) );})

#define list_for_each_entry(pos, head, member)                     \
    for (pos = list_entry((head)->next, typeof(*pos), member);     \
         &pos->member != (head);                                   \
         pos = list_entry(pos->member.next, typeof(*pos), member))

struct list_head {struct list_head *next, *prev;};

struct msg_sender {
    struct list_head list;
    struct task_struct *tsk;
};

struct list_head *tmp;
struct msg_sender *mss;
mss = list_entry(tmp, struct msg_sender, list);
```

9. [15%] The following code implements the critical region **region** v **when** B **do** S, where (a) uses 2 delay semaphores and (b) and (c) use only one.
    ![](https://i.imgur.com/e8QxPdS.png)
    (from left to right: (a), (b), (c))
    Describe why (a) can avoid busy-checking B [5%], and what are the problems of implementations (b) [5%] and (c) [5%]?
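A hypothetical user-space sketch of the two 0's in question 8: casting the constant 0 to `TYPE *` lets the compiler compute a member's offset (and, via typeof, its type) without any real object, and container_of subtracts that offset to walk back from the embedded list_head to the enclosing msg_sender. The macros are re-implemented here with my_ prefixes purely for illustration. As for the prefetch, my understanding (worth double-checking against the kernel history) is that it was removed because the speculative prefetch of the next node rarely helped and sometimes hurt on modern CPUs.

```c=
/* Sketch: the container_of/offsetof trick from question 8, in user space. */
#include <stdio.h>
#include <stddef.h>   /* the standard offsetof also lives here */

/* Same idea as the kernel macro: pretend an object of TYPE sits at address 0,
 * so &((TYPE *)0)->MEMBER is exactly the member's byte offset. */
#define my_offsetof(TYPE, MEMBER) ((size_t)&((TYPE *)0)->MEMBER)

#define my_container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - my_offsetof(type, member)))

struct list_head { struct list_head *next, *prev; };

struct msg_sender {
    int id;
    struct list_head list;   /* embedded list node, as in the kernel */
};

int main(void) {
    struct msg_sender s = { .id = 42 };
    struct list_head *tmp = &s.list;          /* what list traversal hands us */

    /* Walk back from the embedded list_head to the enclosing msg_sender. */
    struct msg_sender *mss = my_container_of(tmp, struct msg_sender, list);
    printf("offset of list = %zu, id recovered = %d\n",
           my_offsetof(struct msg_sender, list), mss->id);
    return 0;
}
```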
---

## 2017 薛智文 Midterm

1. [10%] In the IoT era, we connect many things together. Therefore, many different OSes for different things need to work together. What OS features are important to the success of IoT solution providers?

2. [10%] For some OS, there is an extra state, ZOMBIE, between RUNNING and TERMINATED. Why? [5%] How can we have the same function without the ZOMBIE state? [5%]

3. [10%] Earliest Deadline First is an optimal periodic real-time scheduling algorithm. Describe its schedulability condition and optimality [5%], and since execution time is difficult to estimate, how do we handle the overloading problem? [5%] Hint: overloading means task utilization > 100%.

4. [10%] Describe how we implement an unbounded buffer. [5%] When to use it? [5%] (A linked-list sketch appears at the end of this note.)

5. [10%] When multiple threads run in parallel on a multicore system, would more kernel threads for the same task always finish earlier than using fewer threads? Why?

6. [15%] Describe the differences in data path when making a call to a normal function, a macro, a system call, an interrupt service routine, and a context switch. [10%] Any difference in a virtualized environment? [5%]

7. [15%] For the dining-philosophers solution using a monitor, prove that it has no problem of mutual exclusion, progress, and bounded waiting.

8. [15%] For the following code related to the Linux kernel list structure, describe how an entry of the list of msg_sender can be accessed, especially what the two 0's mean! [10%] Why was a prefetch statement previously included in the list_for_each_entry macro removed? [5%]

```c=
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

#define list_entry(ptr, type, member) \
    container_of(ptr, type, member)

#define container_of(ptr, type, member) ({                  \
    const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
    (type *)( (char *)__mptr - offsetof(type, member) );})

#define list_for_each_entry(pos, head, member)                     \
    for (pos = list_entry((head)->next, typeof(*pos), member);     \
         &pos->member != (head);                                   \
         pos = list_entry(pos->member.next, typeof(*pos), member))

struct list_head {struct list_head *next, *prev;};

struct msg_sender {
    struct list_head list;
    struct task_struct *tsk;
};

struct list_head *tmp;
struct msg_sender *mss;
mss = list_entry(tmp, struct msg_sender, list);
```

9. [15%] The following code implements the critical region **region** v **when** B **do** S, where (a) uses 2 delay semaphores and (b) and (c) use only one.
    ![](https://i.imgur.com/e8QxPdS.png)
    (from left to right: (a), (b), (c))
    Describe why (a) can avoid busy-checking B [5%], and what are the problems of implementations (b) [5%] and (c) [5%]?

---

|                       | User Threads | Kernel Threads |
| --------------------- | ------------ | -------------- |
| Definition            | Thread management is supported by a "thread library" at user level | Thread management is the kernel's responsibility |
| Cost                  | Scheduling and context switching between threads need no kernel intervention, so context switching and creation are fast and management cost is low; the kernel is unaware that user threads exist | Context switching and creation are slower and kernel threads are more expensive; the kernel manages all threads |
| Blocking system calls | Drawback: if a user thread issues a blocking system call and the kernel is single-threaded (process-oriented), the whole process is blocked | Advantage: a thread issuing a blocking system call does not block the whole process |
| Multiprocessors       | Drawback: cannot effectively exploit a multiprocessor | Advantage: different threads can be scheduled on different CPUs in parallel, exploiting a multiprocessor |
| Examples              | POSIX Pthreads; Solaris 2 UI-threads, Green threads; Mach C-threads | Windows NT; Windows 2000; Solaris 2; Tru64 UNIX |
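A hypothetical sketch of the unbounded buffer asked about in question 4 here and in question 4 of the 2008 exam: a heap-allocated linked list guarded by a mutex and a condition variable, so the producer never waits for space and the consumer waits only while the list is empty (in practice it is still bounded by physical memory plus swap). A typical use is decoupling a bursty producer from a slower consumer, e.g. a logging or print-spooling queue.

```c=
/* Sketch: an "unbounded" buffer — a malloc'd linked list; the producer never
 * waits for space, the consumer waits only while the list is empty. */
#include <pthread.h>
#include <stdlib.h>

typedef struct node { int item; struct node *next; } node_t;

static node_t *head, *tail;                 /* FIFO list of produced items */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

void produce(int item) {                    /* never blocks on "buffer full" */
    node_t *n = malloc(sizeof *n);
    n->item = item;
    n->next = NULL;
    pthread_mutex_lock(&lock);
    if (tail) tail->next = n; else head = n;
    tail = n;
    pthread_cond_signal(&nonempty);         /* wake a waiting consumer */
    pthread_mutex_unlock(&lock);
}

int consume(void) {                         /* blocks only while the list is empty */
    pthread_mutex_lock(&lock);
    while (head == NULL)
        pthread_cond_wait(&nonempty, &lock);
    node_t *n = head;
    head = n->next;
    if (!head) tail = NULL;
    pthread_mutex_unlock(&lock);
    int item = n->item;
    free(n);
    return item;
}
```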