Chinococo
# Chapter 1 Introduction

## Computer System Structure

<img src="https://hackmd.io/_uploads/Hk3OslxzA.png" width="400" height="auto">

1. Hardware provides the basic computing resources, e.g. CPU, memory, I/O devices
2. Operating system controls and coordinates the use of hardware among the various applications and users
3. Application programs define the ways in which system resources are used to solve users' computing problems: word processors, compilers, web browsers, database systems, video games
4. Users: people, machines, other computers

## Interrupt Handling

### Polling
Check each possible source in turn to find out which one needs interrupt service.

### Vectored interrupt system
The system uses an interrupt vector to identify directly where the interrupt came from.

## I/O Structure

### Synchronous
After I/O starts, control returns to the user program only upon I/O completion.

1. A wait instruction idles the CPU until the next interrupt
2. Wait loop (contention for memory access)
3. At most one I/O request is outstanding at a time; no simultaneous I/O processing

### Asynchronous
After I/O starts, control returns to the user program immediately, without waiting for the I/O to complete.

1. System call: a request to the OS to allow the user to wait for I/O completion
2. A device-status table contains an entry for each I/O device indicating its type, address, and state
3. The OS indexes into the I/O device table to determine device status and modifies the table entry to include the interrupt

## Process Management

> Assumes the reader has taken an assembly course; not re-explained here.

- A single-threaded process has one program counter specifying the location of the next instruction to execute; it executes instructions sequentially, one at a time, until completion
- A multi-threaded process has one program counter per thread
- Typically a system has many processes (some user, some OS) running concurrently on one or more CPUs
- Concurrency is achieved by multiplexing the CPUs among the processes / threads

## Dual-core Design

<img src="https://hackmd.io/_uploads/B1OSCxlfA.png" width="400" height="auto">

## 1.16

> How does the CPU interface with the device to coordinate the transfer?

The CPU can start a DMA transfer by writing values directly to special registers, which the device can access independently.

> How does the CPU know when the memory operations are complete?

When the DMA transfer finishes, the DMA controller interrupts the CPU, and the CPU does any post-processing on the returned data.

> The CPU is allowed to execute other programs while the DMA controller is transferring data. Does this process interfere with the execution of the user programs? If so, describe what forms of interference are caused.

Both the device and the CPU can be accessing memory simultaneously. The memory controller provides access to the memory bus in a fair manner to these two entities. A CPU might therefore be unable to issue memory operations at peak speeds, since it has to compete with the device for access to the memory bus.

# Chapter 2 Operating-System Services

## System Call

Definition: the mechanism by which a running user process communicates with the kernel. When a user process needs some service from the OS, it first issues a trap to notify the OS (switching from user mode to kernel mode) and passes in the system call ID (number) and the required parameters; the OS then executes the corresponding system call and returns the result of the service to the user process.

### Parameter passing

1. Registers: keep the parameters in registers and pass them to the OS.
   - Pros: simple, fast access (no memory access).
   - Cons: unsuitable when there are many parameters.
2. Memory: store the parameters in a block (table) and pass the block's starting address to the OS in a register.
   - Pros: suitable for large numbers of parameters.
   - Cons: slower access, and more cumbersome to operate.
3. Stack: push the parameters onto the stack; the OS pops them off.
   - Pros: also handles many parameters, and is simple to use.
   - Cons: none in practice (the stack can be implemented with registers or memory).

### Types

1. Process control: create, terminate, suspend, and resume processes; set/get process attributes.
2. File management: create, read, write, open, close, delete files.
3. Device management: read and write devices.
4. Information maintenance: get the system date and time, get process attributes.
5. Communications: communication between processes; services are provided only for the message-passing model.
6. Protection: protection of hardware resources; file access control.

## Role of the linker and loader

<img src="https://hackmd.io/_uploads/H1lFmGxM0.png" width="400" height="auto">

## Kernel

![image](https://hackmd.io/_uploads/HkRPVGgf0.png)

# Chapter 3 Processes

## Memory

<img src="https://hackmd.io/_uploads/Byqrz-ezR.png" width="400" height="auto">

1. Stack: temporary data, such as function parameters, return addresses, local variables
2. Data section: global variables
3. Heap: memory dynamically allocated during run time

## Process State

<img src="https://hackmd.io/_uploads/r1WOzbgGA.png" width="400" height="auto">

## Process Scheduling

<img src="https://hackmd.io/_uploads/Hy4J7bxMA.png" width="400" height="auto">

## Process Creation

![image](https://hackmd.io/_uploads/SkA6UzefA.png)

### Example code

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h> /* for wait() */

int main() {
    pid_t pid;

    // Fork a child process
    pid = fork();

    if (pid < 0) {
        // Fork failed
        fprintf(stderr, "Fork failed\n");
        return 1;
    } else if (pid == 0) {
        // Child process
        printf("Child process executing, PID: %d\n", getpid());
        // Child process can do its own tasks here
        exit(0); // Exit the child process
    } else {
        // Parent process
        printf("Parent process executing, PID: %d\n", getpid());
        // Parent process can do its own tasks here
        wait(NULL); // Wait for the child process to finish
        printf("Child process finished\n");
    }
    return 0;
}
```

### Shared Memory

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

#define BUFFER_SIZE 10

typedef struct {
    // Define the structure of 'item'
    int data; // Example: 'data' is an integer; adjust it to your needs
    // Add more fields if needed
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

void produce_item(item newItem) {
    // Produce an item and add it to the buffer
    buffer[in] = newItem;
    in = (in + 1) % BUFFER_SIZE; // Increment 'in' index with wraparound
}

item consume_item() {
    // Consume an item from the buffer
    item consumedItem = buffer[out];
    out = (out + 1) % BUFFER_SIZE; // Increment 'out' index with wraparound
    return consumedItem;
}
```

### Message Passing

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define BUFFER_SIZE 256

int main() {
    int pipefd[2];
    pid_t pid;
    char message[BUFFER_SIZE];

    // Create a pipe
    if (pipe(pipefd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    // Fork a child process
    pid = fork();
    if (pid == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (pid > 0) {
        // Parent process
        close(pipefd[0]); // Close the reading end of the pipe
        printf("Enter a message to send to the child process: ");
        fgets(message, BUFFER_SIZE, stdin);
        // Write the message to the pipe
        write(pipefd[1], message, strlen(message) + 1);
        close(pipefd[1]); // Close the writing end of the pipe
    } else {
        // Child process
        close(pipefd[1]); // Close the writing end of the pipe
        // Read the message from the pipe
        read(pipefd[0], message, BUFFER_SIZE);
        printf("Message received by child process: %s", message);
        close(pipefd[0]); // Close the reading end of the pipe
    }
    return 0;
}
```

### POSIX

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <mqueue.h> /* link with -lrt on Linux */

#define QUEUE_NAME "/my_queue"
#define MAX_MSG_SIZE 256

int main() {
    mqd_t mq;
    struct mq_attr attr;
    char buffer[MAX_MSG_SIZE];
    int msg_len;

    // Set up the attributes for the message queue
    attr.mq_flags = 0;
    attr.mq_maxmsg = 10;            // Maximum number of messages in queue
    attr.mq_msgsize = MAX_MSG_SIZE; // Maximum message size
    attr.mq_curmsgs = 0;

    // Create or open the message queue
    mq = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (pid > 0) {
        // Parent process
        printf("Enter a message to send to the child process: ");
        fgets(buffer, MAX_MSG_SIZE, stdin);
        msg_len = strlen(buffer);

        // Send message to the queue
        if (mq_send(mq, buffer, msg_len, 0) == -1) {
            perror("mq_send");
            exit(EXIT_FAILURE);
        }
        printf("Message sent to child process.\n");
    } else {
        // Child process: receive message from the queue
        ssize_t bytes_read = mq_receive(mq, buffer, MAX_MSG_SIZE, NULL);
        if (bytes_read == -1) {
            perror("mq_receive");
            exit(EXIT_FAILURE);
        }
        printf("Message received by child process: %s", buffer);
    }

    // Close the message queue
    mq_close(mq);
    // Remove the message queue
    mq_unlink(QUEUE_NAME);
    return 0;
}
```

### Mach Messages

Illustrative sketch of Mach IPC; `mach_msg_format_0_t` stands in for an application-defined message layout and is not a standard Mach type.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mach/mach.h>
#include <mach/message.h>

// Define a port for the server
mach_port_t server_port;

// Function to handle incoming messages
void handle_messages() {
    mach_msg_header_t *msg;
    mach_msg_return_t mr;
    mach_msg_format_0_t *recv_msg;

    // Allocate memory for the received message
    msg = malloc(sizeof(mach_msg_header_t) + MAX_TRAILER_SIZE);
    if (!msg) {
        fprintf(stderr, "Failed to allocate memory for message\n");
        exit(EXIT_FAILURE);
    }

    // Loop to continuously handle incoming messages
    while (1) {
        // Receive the message
        mr = mach_msg(msg, MACH_RCV_MSG, 0, sizeof(mach_msg_format_0_t),
                      server_port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
        if (mr != MACH_MSG_SUCCESS) {
            fprintf(stderr, "Failed to receive message: %s\n", mach_error_string(mr));
            continue;
        }

        // Cast the received message to the appropriate type
        recv_msg = (mach_msg_format_0_t *)msg;

        // Process the message (in this example, just print it)
        printf("Received message: %s\n", recv_msg->message);

        // Clean up the message
        mach_msg_destroy(msg);
    }

    // Free allocated memory (unreachable: the loop above runs forever)
    free(msg);
}

int main() {
    kern_return_t kr;

    // Create a port for the server
    kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &server_port);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "Failed to create server port: %s\n", mach_error_string(kr));
        exit(EXIT_FAILURE);
    }

    // Insert a send right for the port (optional)
    kr = mach_port_insert_right(mach_task_self(), server_port, server_port,
                                MACH_MSG_TYPE_MAKE_SEND);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "Failed to name server port: %s\n", mach_error_string(kr));
        exit(EXIT_FAILURE);
    }

    printf("Server started. Listening for messages...\n");

    // Call the function to handle incoming messages
    handle_messages();

    // Destroy the port
    kr = mach_port_destroy(mach_task_self(), server_port);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "Failed to destroy server port: %s\n", mach_error_string(kr));
    }
    return 0;
}
```

## Preemptive vs. Non-preemptive

CPU scheduling decisions take place when a process:

1. switches from the running to the waiting state
2. switches from the running to the ready state (its CPU burst is interrupted; time sharing)
3. switches from the waiting to the ready state
4. terminates

### Non-preemptive scheduling
A process that can keep executing (within its CPU burst) is never interrupted, so rescheduling happens only in cases 1 and 4 above.

### Preemptive scheduling
Rescheduling can happen in all of the cases above.

# Chapter 5: CPU Scheduling

## Criteria

1. CPU utilization: the higher the better
2. Throughput (work completed per unit time): the more the better
3. Turnaround time (from a process entering the system until its completion): the shorter the better
4. Waiting time (total time spent in the ready queue before completion): the shorter the better
5. Response time (how soon a reply comes after a request): the shorter the better

### Algorithms

### FCFS
Run processes in arrival order (first come, first served).

### SJF

#### Preemptive version
When a new process arrives whose CPU burst is shorter than the running process's remaining time, preemption occurs.

#### Non-preemptive version
Preemption never occurs; whenever a process finishes, the one with the shortest next CPU burst is chosen.

### Priority scheduling
Order processes by priority, represented as an integer; the smaller the number, the higher the priority.

### RR (Round Robin)
Each process gets a fixed time slice (time quantum, q); if it is not finished when the quantum expires, it goes back to the end of the queue. With n processes in the ready queue, each gets 1/n of the CPU time, and no process ever waits more than (n-1)q. Example with q = 4:

![image](https://hackmd.io/_uploads/HycS2MgzC.png)

### Multilevel Queue
This algorithm splits the ready queue in two: foreground (interactive) and background (batch), each scheduled by its own algorithm. Between the two queues there must also be a priority ordering and a CPU-time split, e.g.:

- Foreground: RR, high priority, 80% of CPU time
- Background: FCFS, low priority, 20% of CPU time

### Multilevel feedback queue
Because queue assignment in a multilevel queue is fixed, and the ordering sometimes needs adjusting, the multilevel feedback queue was introduced as a dynamic scheduling mechanism. It must specify:

- the number of queues
- the scheduling algorithm for each queue
- when to promote a process to a higher queue
- when to demote a process to a lower queue
- which queue a process enters when it needs service

## Real-time scheduling

### Rate Monotonic Scheduling
The most commonly used algorithm: priority is assigned by rate, so a short period (high frequency) gets high priority and a long period (low frequency) gets lower priority. Worked example:

![20112132qQhtFy89N3](https://hackmd.io/_uploads/Sk8G6MlMA.png)

### Earliest Deadline First Scheduling
Whichever deadline comes first gets the higher priority. Worked example:

![20112132s2QiplDOi0](https://hackmd.io/_uploads/BJKbpMxGC.png)
### Deterministic modeling
An analytic way to choose an algorithm: take a predetermined workload and compute, for each candidate algorithm, the waiting time it would produce on that workload.

### Queueing models
Little's Law

![image](https://hackmd.io/_uploads/H1e86flzA.png)

## NUMA and CPU Scheduling

![image](https://hackmd.io/_uploads/ryRi2fxfC.png)

# Chapter 6 Synchronization Tools

Requirements for a solution to the critical-section problem:

- Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
- Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the process that will enter the critical section next cannot be postponed indefinitely
- Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted

Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speed of the n processes.

## Tools

Mutex locks, semaphores, monitors.

### Mutex
![image](https://hackmd.io/_uploads/HybFMweGC.png)

### Semaphore
![image](https://hackmd.io/_uploads/H1V5fDgG0.png)

### Monitors
![image](https://hackmd.io/_uploads/HkeAzPefR.png)

# Chapter 7

## Bounded-Buffer Problem
Assume n buffers, each holding one item, and three semaphores with initial values mutex = 1, full = 0, empty = n. The producer puts data into the buffer; the structure is shown below:

![image](https://hackmd.io/_uploads/H17L7PxMR.png)
![image](https://hackmd.io/_uploads/H15IQDezR.png)

## Readers and Writers Problem
A data set is shared among many concurrent processes.

- Readers: may only read the data set, never write it; several readers may access it at the same time
- Writers: may both read and write, but only one writer may have access at a time

The variations of the problem that readers and writers raise all involve some form of priority.

![image](https://hackmd.io/_uploads/rkOOXvgfR.png)
![image](https://hackmd.io/_uploads/SJmtQwxf0.png)

## Dining-Philosophers Problem
Five philosophers share a table with five chopsticks; they only think and eat, and do not talk to their neighbors. The structure below can deadlock: if everyone picks up the right chopstick first, no one can eat.

![image](https://hackmd.io/_uploads/SkusXDxzR.png)
![image](https://hackmd.io/_uploads/SJ5hQDefC.png)

Ways to resolve the deadlock:

- allow at most four philosophers at the table
- pick up chopsticks only when both the left and right ones are free
- asymmetric solution: odd-numbered philosophers take the left chopstick first, then the right; even-numbered philosophers take the right first, then the left

Semaphores can express a solution, but by themselves they do not guarantee freedom from deadlock and starvation.

## Monitor solution

![image](https://hackmd.io/_uploads/rJexVvezC.png)

## Deadlock
- A situation in which **every** process in a set of processes is **waiting** for an event that can be caused only by another process in the set

:::success
**Ex.**
- 2 processes and 2 tape drives
- Each process holds a tape drive
- Each process requests another tape drive
:::

## System Model
A **thread** may use a resource in the following sequence
- **Request** - ask for some resources to be allocated
- **Use** - use the allocated resources
- **Release** - give the allocated resources back

---

- Resource types $R_1,R_2,...R_m$
  - E.g., CPU, disk, I/O devices, ...
- Each resource type $R_i$ has $W_i$ instances
  - E.g., a computer has 4 CPU cores, 1 networking interface, ...

## Deadlock in Multithreaded Applications

```c
/* thread_one runs in this function */
void *do_work_one(void *param) {
    pthread_mutex_lock(&first_mutex);
    pthread_mutex_lock(&second_mutex);
    /* Do some work */
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    pthread_exit(0);
}

/* thread_two runs in this function */
void *do_work_two(void *param) {
    pthread_mutex_lock(&second_mutex);
    pthread_mutex_lock(&first_mutex);
    /* Do some work */
    pthread_mutex_unlock(&first_mutex);
    pthread_mutex_unlock(&second_mutex);
    pthread_exit(0);
}
```

- If thread one reaches its wait for `second_mutex` after thread two has already acquired `second_mutex`, a deadlock results
- Thread one holds `first_mutex` and waits for the `second_mutex` held by thread two, while thread two is waiting for thread one's `first_mutex`
- Thread two can release `second_mutex` only after it obtains `first_mutex`, but thread one will release `first_mutex` only after it obtains `second_mutex`

### Livelock

```c
/* thread_one runs in this function */
void *do_work_one(void *param) {
    int done = 0;
    while (!done) {
        pthread_mutex_lock(&first_mutex);
        if (pthread_mutex_trylock(&second_mutex) == 0) { /* trylock returns 0 on success */
            /* Do some work */
            pthread_mutex_unlock(&second_mutex);
            pthread_mutex_unlock(&first_mutex);
            done = 1;
        } else {
            pthread_mutex_unlock(&first_mutex);
        }
    }
    pthread_exit(0);
}

/* thread_two runs in this function */
void *do_work_two(void *param) {
    int done = 0;
    while (!done) {
        pthread_mutex_lock(&second_mutex);
        if (pthread_mutex_trylock(&first_mutex) == 0) { /* trylock returns 0 on success */
            /* Do some work */
            pthread_mutex_unlock(&first_mutex);
            pthread_mutex_unlock(&second_mutex);
            done = 1;
        } else {
            pthread_mutex_unlock(&second_mutex);
        }
    }
    pthread_exit(0);
}
```

## Deadlock Characterization

### Necessary Conditions
- A deadlock can arise only when four conditions hold simultaneously
  - **Mutual Exclusion** - only 1 process at a time can use a resource
  - **Hold** & **wait** - a process holds at least one resource and is waiting for resources held by other processes
  - **No preemption** - a resource can be released only voluntarily by the process holding it; it is never taken away
  - **Circular wait** - there exists a set $\{P_0,P_1,...,P_n\}$ of waiting processes such that $P_0 \rightarrow P_1 \rightarrow ... \rightarrow P_n \rightarrow P_0$; each process waits for a resource held by the next, which also means a single process by itself cannot deadlock

:::warning
**All 4 conditions** must hold for **possible deadlock**.
:::

### Resource-Allocation Graph
- System resource-allocation graph
  - Request edge
  - Assignment edge

![image](https://hackmd.io/_uploads/BJfDQh-wp.png)
![image](https://hackmd.io/_uploads/S1_OX3bwT.png)
![image](https://hackmd.io/_uploads/rJVKQnWPT.png)
![image](https://hackmd.io/_uploads/SyY57hWPT.png)

## Methods for Handling Deadlocks
- Deal with the deadlock problem in one of three ways
  - Ignore the problem
  - Use a protocol to **prevent** or **avoid** deadlocks, ensuring that the system will never enter a deadlock state
  - Allow the system to enter a deadlock state, **detect** it, and **recover**

## Deadlock prevention
- For a deadlock to occur, each of the four necessary conditions must hold
- By ensuring that **at least one of these conditions cannot hold**, we can prevent the deadlock
- I.e., design the system so deadlock cannot arise in the first place

### Mutual exclusion
- For **non-sharable** resource types (e.g., writing a file), mutual exclusion must hold
- Sharable resource types (e.g., read-only files, which can be read concurrently) never cause a deadlock

### Hold and wait
- A process must guarantee that it holds no other resources when it requests a resource, unless it can obtain all of its resources at once
- Require each process to request and be allocated all its resources before it begins execution; or, allow a process to request resources only when it has none
- Disadvantages
  - Low resource utilization
  - Starvation is possible

### No Preemption
- If a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all resources the process is currently holding are preempted (released), to be requested again later

### Circular wait
- Impose a total (linear) ordering of all resource types and require that each process requests resources in an increasing order of enumeration, so the circular-wait condition can never hold

```c
void transaction(Account from, Account to, double amount) {
    mutex lock1, lock2;
    lock1 = get_lock(from);
    lock2 = get_lock(to);

    acquire(lock1);
    acquire(lock2);
        withdraw(from, amount);
        deposit(to, amount);
    release(lock2);
    release(lock1);
}
```

Note that deadlock is still possible here: two transactions running concurrently in opposite directions, transaction(A, B, x) and transaction(B, A, y), acquire the same two locks in opposite orders.

# Deadlock Avoidance

## Safe state
- Safe sequence

![image](https://hackmd.io/_uploads/ByVsw3ZPp.png)

### Resource-Allocation Graph Algorithm
- Single instance of each resource type
- Use a variant of the resource-allocation graph
- In the figures below, squares are resources and circles are processes
  - A dashed arrow from a process to a resource: the process may want (claims) that resource
  - A solid arrow from a resource to a process: the resource is allocated to that process

![image](https://hackmd.io/_uploads/HkKxOhZvT.png)
![image](https://hackmd.io/_uploads/HyE-d2-D6.png)

## Banker's Algorithm
- Data structures
  - Available: the free resources currently available in the system
  - Max: the maximum number of resources a process may need to finish
  - Allocation: the resources already allocated to a process
  - Need: Max - Allocation, i.e., how many more resources the process still needs to finish

<!-- example to do -->

## Deadlock Detection

### Single Instance of Each Resource Type
- Wait-for graph

![image](https://hackmd.io/_uploads/Bkj2jhZP6.png)

### Several Instances of a Resource Type
- Deadlock detection algorithm
- Data structures
  - Available
  - Allocation
  - Request

### Detection-Algorithm Usage
- How often is a deadlock likely to occur?
- How many processes will be affected by a deadlock when it happens?

## Recovery from Deadlock

### Process and Thread Termination
- Abort **all** deadlocked processes
- Abort **one** process at a time **until the deadlock cycle is eliminated**

---

- The factors that may affect which process is chosen
  - What the **priority** of the process is
  - **How long** the process has computed and **how much longer** it will compute before completing its designated task
  - How many and what types of **resources** the process has used
  - **How many more resources** the process needs to complete
  - **How many** processes will **need to be terminated**
  - Whether the process is **interactive or batch**
    - Stopping a batch process is less noticeable to users
    - QoE (Quality of Experience)

### Resource Preemption
- Three issues need to be addressed
  - Selecting a victim
    - Choose which process is to be preempted
    - Order the preemptions so as to minimize cost
  - Rollback
    - Preempting resources from a process may leave it unable to continue normally; it must be rolled back to some safe state and restarted from there
    - The simplest rollback is to abort the process entirely; a more efficient approach breaks the deadlock only where necessary, but then the system must keep more information about the state of each process
  - Starvation
    - How do we guarantee that resources are not always preempted from the same process?
# Chapter 9

## Background

### Basic Hardware
- Instruction execution cycle
- Stall
- Cache

![image](https://hackmd.io/_uploads/BklGv7d086.png)

- Base register
- Limit register
  - These two registers define the logical address space: the base register records the process's starting memory address, and the limit register records the size of the memory region the process occupies.

![image](https://hackmd.io/_uploads/Bk7hX_AI6.png)

### Address Binding

![image](https://hackmd.io/_uploads/SJzH4_0Up.png)

- Bind
- The binding of instructions and data to memory addresses **can be done at any step**
  - Compile time
    - Decided by the compiler
    - If the process's location in memory is known at compile time, **absolute addresses** can be generated.
    - If the process is known to reside at location *R*, the compiled code extends up from that position.
  - Load time
    - Decided by the linking loader or linkage editor
    - If the location cannot be determined at compile time, **relocatable addresses** are generated (not bound to a fixed location) <!-- TODO -->
  - Execution time
    - Decided dynamically by the OS
    - If the process can be moved in memory during its execution, binding must be delayed until run time <!-- TODO -->

### Logical Versus Physical Address Space
- Logical Address
  - Logical Address Space
  - The address generated by the CPU, also called a virtual address
- Physical Address
  - Physical Address Space
  - The address seen by the memory unit (after translation by the MMU)
- <font class='red'>MMU (Memory Management Unit)</font>
  - The hardware device that maps virtual addresses to physical addresses

![image](https://hackmd.io/_uploads/r1m8DdR86.png)

- Relocation register
  - The base register is here called the relocation register

![image](https://hackmd.io/_uploads/rJVuwdR8a.png)

### Dynamic Loading
- Dynamic Loading

### Dynamic Linking and Shared Libraries
- Dynamically linked libraries (DLLs)

![image](https://hackmd.io/_uploads/B1pPdOR8a.png)

- Static linking
- Dynamic linking

## Contiguous Memory Allocation
- Two partitions of memory
  - Operating system
  - User processes
- Contiguous memory allocation

### Memory Protection

![image](https://hackmd.io/_uploads/HJEkt_AU6.png)

### Memory Allocation
- Variable-partition scheme
- Hole
- Solutions to the dynamic storage-allocation problem
  - First fit - use the first hole that is big enough
  - Best fit - search all holes and use the smallest one that is big enough; tends to leave behind many tiny unusable holes
  - Worst fit - search all holes and use the largest one; leaves behind a large leftover hole

## Fragmentation
- External fragmentation - the remaining free memory is scattered in small pieces that cannot be used
- Internal fragmentation - an allocated region is larger than what the process uses; the unused part inside it is wasted, unavailable both to the process and to anyone else
- Compaction - a remedy for fragmentation: move processes around so the free memory coalesces into one block; this requires processes to be dynamically relocatable

## Paging
- Paging addresses fragmentation (it eliminates external fragmentation, though internal fragmentation remains) and handles processes of many different sizes.

### Basic Method
- Frames & pages
  - **Frames**: physical memory divided into fixed-size blocks
  - **Pages**: logical memory divided into blocks of the same size
  - Frames and pages have the same size; when a program needs n pages, the OS finds n free frames in physical memory to hold them, and the frames need not be contiguous
- Page number (`p`)
- Page offset (`d`)
- Page table
  - Set up by the OS to translate logical addresses into physical addresses.

![image](https://hackmd.io/_uploads/ryR_Ad0Lp.png)
![image](https://hackmd.io/_uploads/Bki7R_C8T.png)
![image](https://hackmd.io/_uploads/SkmSCdRIT.png)
![image](https://hackmd.io/_uploads/SkEvAdAL6.png)

### Hardware Support

![image](https://hackmd.io/_uploads/rJe9ktAUa.png)

- <font class='red'>Translation Look-aside Buffer</font> (works like a cache)
  - Parallel search
  - TLB miss
    - If the TLB has no entry, fall back to the page table
    - TLB **hit**: 1 memory access; **miss**: 2 memory accesses (the page table itself is in memory)
  - Hit ratio
  - Effective memory access time

### Protection
- Valid-invalid bit

![image](https://hackmd.io/_uploads/Sy9ESFA8a.png)

### Shared Pages
- Reentrant code - identical on every access (its contents must never change)

![image](https://hackmd.io/_uploads/S1o9rtC8a.png)

## Structure of the Page Table

### Hierarchical Paging

![image](https://hackmd.io/_uploads/BJSk8tA86.png)
![image](https://hackmd.io/_uploads/S16-LYC8T.png)

### Hashed Page Tables

![image](https://hackmd.io/_uploads/HJiYLK0Ua.png)

### Inverted Page Tables

![image](https://hackmd.io/_uploads/Syo28KCLT.png)

- <process-id, page-number, offset>

## Swapping
- Backing store (disk)
  - During execution, a process sometimes needs to leave memory temporarily and be brought back in later; this is swapping, and the two moves are called **roll out** and **roll in**. The disk used for this purpose is called the backing store (the two terms are used interchangeably here).

### Standard Swapping
- Standard swapping involves moving whole processes between main memory and a backing store

![image](https://hackmd.io/_uploads/rkFCvzQD6.png)

### Swapping with Paging
- Page out
- Page in

![image](https://hackmd.io/_uploads/BJWOuMmv6.png)

### Swapping on Mobile Systems
- Mobile systems typically do not support swapping in any form

# Chapter 10

### Virtual memory
* Separation of user logical memory from physical memory

### Virtual address space
* Logical view of how a process is stored in memory

![](https://i.imgur.com/nx9pjuF.png =70%x)

## P.8 Virtual-address Space

## P.9

## P.10 Demand Paging

![](https://i.imgur.com/5OymzFT.png =60%x)

* Bring a page into memory only when it is needed

## P.11

## P.12~13 Valid-Invalid Bit

![](https://i.imgur.com/6hICEec.png =40%x)![](https://i.imgur.com/9qvdrlB.png =60%x)

## P.14,15 Page Fault

![](https://i.imgur.com/J1TMIHI.png =60%x)

## P.16 Aspects of Demand Paging
* Locality of reference: in practice, avoids the worst case of multiple page faults per instruction

## P.17~20 Demand Paging
* **Effective Access Time** (EAT), where $p$ is the page-fault rate:

$EAT = (1-p) \times \text{memory access} + p \times (\text{page fault overhead} + \text{swap page out} + \text{swap page in})$

## P.22~24 Copy-on-Write
* Allows both parent and child processes to initially **share** the same pages in memory.

![](https://i.imgur.com/suEbQzq.png =60%x)

## P.26~29 Page Replacement
* Prevent over-allocation of memory
* Use the **modify (dirty) bit** to reduce the overhead of page transfers

### Replacement
* Find a free frame:
  - If there is a free frame, use it
  - If there is no free frame, use a page-replacement algorithm to select a victim frame
  - Write the victim frame to disk if it is dirty

![](https://i.imgur.com/qMntT7V.png)

## P.30 Page and Frame Replacement Algorithms

## P.31~43 Page-replacement algorithms

### FIFO Algorithm

![](https://i.imgur.com/yRSTvqc.png =80%x)

* **Belady's Anomaly**: adding more frames can cause *more* page faults, e.g. for the reference string 1,2,3,4,1,2,5,1,2,3,4,5

### Optimal Algorithm
* Replace the page that will not be used for the **longest period of time**
* Not practical to implement (it requires knowing the future reference string)

### Least Recently Used (LRU) Algorithm

![](https://i.imgur.com/jqGZ3AF.png =80%x)

# HW1

## Written Exercises

Answers provided by the TA.

### 1.16
Direct memory access is used for high-speed I/O devices in order to avoid increasing the CPU's execution load.

**(a) How does the CPU interface with the device to coordinate the transfer?**
> The CPU can initiate a DMA operation by writing to special registers that the device can access independently. Once it receives the CPU's command, the device starts the corresponding operation.

**(b) How does the CPU know when the memory operations are complete?**
> When the device completes its operation, it interrupts the CPU to indicate that the operation has finished.

**\(c) The CPU is allowed to execute other programs while the DMA controller is transferring data. Does this process interfere with the execution of the user programs? If so, describe what forms of interference are caused.**
> The device and the CPU can access memory simultaneously. The memory controller grants both of them access to the memory bus in a fair manner, so the CPU may be unable to perform memory operations at peak speed because it must compete with the device for access to the memory bus.

### 2.15
**What are the two models of interprocess communication? What are the strengths and weaknesses of the two approaches?**
> The two models are message passing and shared memory. Message passing is useful for exchanging small amounts of data, because no conflicts need be avoided, and it is easier than shared memory to implement for communication between computers. Shared memory maximizes the speed and convenience of communication, since within a computer it can proceed at memory-transfer speeds; however, this approach compromises on protection and synchronization between the processes sharing memory.

### 2.19
**What is the main advantage of the microkernel approach to system design? How do user programs and system services interact in a microkernel architecture? What are the disadvantages of using the microkernel approach?**
> The benefits typically include the following: (a) adding a new service does not require modifying the kernel, (b) it is more secure, as more operations are done in user mode than in kernel mode, and \(c) a simpler kernel design and functionality typically result in a more reliable operating system. In a microkernel architecture, user programs and system services interact through interprocess-communication mechanisms such as message passing; these messages are conveyed by the operating system. The primary disadvantages are the overhead associated with interprocess communication and the frequent use of the operating system's messaging functions required for user processes and system services to interact.

### 3.12
**Describe the actions taken by a kernel to context-switch between processes.**
> In general, the operating system must save the state of the currently running process and restore the state of the process scheduled to run next. Saving the state of a process typically includes the values of all CPU registers in addition to memory allocation. Context switches must also perform many architecture-specific operations, including flushing data and instruction caches.

### 3.18
**Give an example of a situation in which ordinary pipes are more suitable than named pipes, and an example of one in which named pipes are more suitable than ordinary pipes.**
> Ordinary pipes work well for simple communication. For example, suppose we have a process that counts the characters in a file: an ordinary pipe can be used, with the producer writing the file to the pipe and the consumer reading it and counting the characters. Named pipes are more suitable when several processes may write messages to a log: when a process wishes to log a message, it writes it to the named pipe, and a server reads the messages from the named pipe and writes them to the log file.

# HW2

## Written Exercises

Answers by ChinoCoCo.

### 4.8
**Provide two programming examples in which multithreading does not provide better performance than a single-threaded solution.**
> Programs dominated by sequential work, for example an image-processing program whose processing steps usually cannot run in parallel; and programs that spend most of their time waiting to synchronize on shared resources.

### 4.10
**In a multithreaded process, which of the following components of program state are shared across threads? (a) Register values (b) Heap memory \(c) Global variables (d) Stack memory**
> a. Register values (per CPU/thread): not shared
> **b. Heap memory (dynamically allocated variables): shared**
> **c. Global variables: shared**
> d. Stack memory (local variables): not shared

> **Stack: local variables**
> The stack segment stores each function's local variables, plus the bookkeeping needed for function calls (such as the return address and the caller's saved state). Every function call creates a stack frame holding that call's variables and state, so repeated calls to the same function get distinct stack frames and do not interfere with one another; recursion works through exactly this mechanism.
> **Heap: dynamically allocated variables**
> The heap segment stores dynamically allocated data, such as variables created with C's malloc or C++'s new. The stack normally grows from high addresses toward low addresses, while the heap grows from the opposite end in the opposite direction.

<details style="border: 1px solid #ccc; padding: 10px;">
<summary style="cursor: pointer;">Code</summary>

```c=
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define NUM_THREADS 5

// Shared data type, allocated on the heap
struct SharedData {
    int value;
};

// Global variables
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

// Pthread function
void *thread_function(void *arg) {
    struct SharedData *data = (struct SharedData *)arg;
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&mutex); // Lock
        printf("Thread: Reading shared value = %d\n", data->value);
        data->value++;
        printf("Thread: Modifying shared value to %d\n", data->value);
        pthread_mutex_unlock(&mutex); // Unlock
    }
    pthread_exit(NULL);
}

int main() {
    pthread_t threads[NUM_THREADS];

    // malloc takes memory from the heap
    struct SharedData *shared_data = (struct SharedData *)malloc(sizeof(struct SharedData));
    shared_data->value = 0;

    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_create(&threads[i], NULL, thread_function, (void *)shared_data);
    }
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }

    printf("Final value of shared variable: %d\n", shared_data->value);
    free(shared_data);
    return 0;
}
```

</details>

### 4.16
A system with two dual-core processors has four processors available for scheduling.
- The system is running a CPU-intensive application.
- All input is done at program start-up, when a single file must be opened.
- Similarly, all output is done just before the program terminates, when the program results must be written to a single file.
- Between start-up and termination, the program is entirely CPU-bound.
- Your task is to improve the performance of this application with multithreading.
- The application runs on a system that uses the one-to-one threading model (each user thread maps to a kernel thread).

1. **How many threads will you create to perform the input and output? Explain.**
> Since the file must be accessed sequentially, the work of reading and writing it cannot reasonably be parallelized, so only a single thread should be used for each of these tasks.

2. **How many threads will you create for the CPU-intensive portion of the application? Explain.**
> Four, since the problem states there are four processors.

### 5.14
Most scheduling algorithms maintain a run queue listing the processes eligible to run on a processor. On multicore systems there are two general options: (1) each processing core has its own run queue, or (2) all processing cores share a single run queue.
**What are the advantages and disadvantages of each approach?**
> With per-core run queues, tasks do not interfere with one another and do not wait on a shared structure, but more management and synchronization (such as load balancing) are needed to avoid uneven use of resources. A single shared queue is simple, makes efficient use of system resources, and is easier to manage, but access to it must be controlled and synchronized, and that contention overhead can hurt efficiency.

### 5.18
The following processes are being scheduled with a preemptive, priority-based round-robin scheduling algorithm.
- Each process has a numeric priority; a larger number indicates a higher relative priority.
- For processes with the same priority, a round-robin scheduler with a time quantum of 10 units is used.
- If a process is preempted by a higher-priority process, the preempted process is placed at the end of the queue.

![image](https://hackmd.io/_uploads/rk_xqXgGC.png)

**(a) Show the scheduling order of the processes in a Gantt chart.**

![image](https://hackmd.io/_uploads/SyemgNxfC.png)

**(b) What is the turnaround time for each process?**

![image](https://hackmd.io/_uploads/rJ5TeVgG0.png)

:::danger
Note: P6's arrival time here may be wrong.
:::

**\(c) What is the waiting time for each process?**

![image](https://hackmd.io/_uploads/Bk0N-4xGA.png)

### 5.22
Consider a system running ten I/O-bound tasks and one CPU-bound task.
- Assume that the I/O-bound tasks issue one I/O operation per millisecond of CPU computation and that each I/O operation takes 10 milliseconds to complete.
- Also assume that the context-switch overhead is 0.1 millisecond and that all processes are long-running tasks.

**Describe the round-robin scheduler's CPU utilization when: (a) the time quantum is 1 millisecond**

![image](https://hackmd.io/_uploads/BJl5-4xGA.png)

**(b) the time quantum is 10 milliseconds**

![image](https://hackmd.io/_uploads/SJaAbVxGR.png)

:::danger
Note: the calculation may contain errors.
:::

### 5.25
**Explain the differences in how much the following scheduling algorithms discriminate in favor of short processes: (a) FCFS (b) RR \(c) multilevel feedback queues**
> **(a) FCFS** does not favor short processes, since it schedules strictly in arrival order. If a long process arrives before a short one, the short process must wait for the long one to finish, which can make its waiting time very long.
> **(b) RR** favors short processes: it runs processes in turn for a fixed time slice, and a process that does not finish within its slice goes to the back of the queue, while a short process can usually complete within a slice or two.
> **\(c) Multilevel feedback queues** also favor short processes: processes are split across queues by priority, with short time slices used for the high-priority queues and longer slices lower down. Short processes tend to finish within the short slices and remain in, or be promoted to, the higher-priority queues, so they are run sooner.

### 6.7
The pseudocode of Figure 6.15 illustrates the basic push() and pop() operations of an array-based stack. Assuming that this algorithm could be used in a concurrent environment, answer the following questions:

![image](https://hackmd.io/_uploads/BycfjXeGC.png)

**(a) What data have a race condition?**
> Both push and pop modify `top`, so `top` can be corrupted when they interleave; calls to `is_empty` can also race with concurrent modifications of `top`.

**(b) How could the race condition be fixed?**
> Protect every operation that reads or modifies the top of the stack with a mutex lock, so the race on `top` cannot occur.

### 6.15
**Explain why implementing synchronization primitives by disabling interrupts is not appropriate in a single-processor system if the synchronization primitives are to be used in user-level programs.**
> If a user-level program were given the ability to disable interrupts, it could disable the timer interrupt and prevent context switches from occurring, allowing it to monopolize the processor without letting other processes execute.

### 6.18
The implementation of mutual exclusion presented in Section 6.5 suffers from busy waiting.
**Describe what changes would be necessary so that a process waiting to acquire a mutex lock is blocked and placed into a waiting queue until the lock becomes available.**

<details style="border: 1px solid #ccc; padding: 10px;">
<summary style="cursor: pointer;">Code</summary>

```c=
push(item, mutex) {
    wait(mutex);             // Block and wait for mutex
    if (top < SIZE) {
        stack[top++] = item;
        signal(mutex);       // Release mutex
    } else {
        signal(mutex);       // Release mutex
        ERROR;
    }
}

pop(mutex) {
    wait(mutex);             // Block and wait for mutex
    if (top > 0) {           // check directly; calling is_empty() here would wait on the mutex again
        top--;
        int item = stack[top];
        signal(mutex);       // Release mutex
        return item;
    } else {
        signal(mutex);       // Release mutex
        ERROR;
    }
}

is_empty(mutex) {
    wait(mutex);             // Block and wait for mutex
    if (top == 0) {
        signal(mutex);       // Release mutex
        return true;
    } else {
        signal(mutex);       // Release mutex
        return false;
    }
}
```

</details>
