2018q1 Homework1 (phonebook)
============================
contributed by <`hungyuhang`>

## Reviewed by `raygoah`
* For the hash implementation, consider plotting the results of each experiment with gnuplot; the performance differences would be much easier to compare and understand.
* Consider implementing several different hash functions and comparing them.
* Commit messages need a body describing what was actually done, why it was done that way, and what the result was; see [【譯】如何撰寫Git提交訊息](https://xiwan.io/archive/how-to-write-a-git-commit-message.html).
* The question of whether a hash table size of $2^n$ changes the cache miss rate and the speed of findName() and append() interests me too. From a quick search, online discussions mostly compare a prime table size against $2^n$: with a prime size the hashed data spreads out more evenly, which becomes more noticeable as the amount of data grows. See [this discussion](https://stackoverflow.com/questions/5929878/why-is-the-size-127-prime-better-than-128-for-a-hash-table) for details; corrections are welcome, thanks!

### Reviewed by `e94046165`
* Regarding the long append() time, I suspect nobody else hit the same problem because of how their hash tables are built. For example, whenever a collision happens I insert the new node at the front of the target chain (right after the header), so I never run into the situation where the hash table is too small, the chains grow long, and append() has to walk to the end of a chain before inserting.
* Following on from that, inserting at the tail and compensating with a large enough table size is not reasonable: it may waste memory, and once data can be added dynamically (as in practice), append() inevitably keeps getting slower. Walking the whole chain on append() was never necessary in the first place.

## Development environment
```
hungyuhang@hungyuhang-Aspire-V5-591G:~$ lscpu
Architecture:          x86_64
CPU 作業模式:          32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
每核心執行緒數:        1
每通訊端核心數:        4
Socket(s):             1
NUMA 節點:             1
供應商識別號:          GenuineIntel
CPU 家族:              6
型號:                  94
Model name:            Intel(R) Core(TM) i5-6300HQ CPU @ 2.30GHz
製程:                  3
CPU MHz:               800.030
CPU max MHz:           3200.0000
CPU min MHz:           800.0000
BogoMIPS:              4608.00
虛擬:                  VT-x
L1d 快取:              32K
L1i 快取:              32K
L2 快取:               256K
L3 快取:               6144K
NUMA node0 CPU(s):     0-3
```
* From [team6612's development notes](https://hackmd.io/s/BJ9IL2wdM#) I also picked up the command for querying detailed cache information:
```
$ sudo dmidecode -t cache
```
* Running it gives the following cache information:
```
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.

Handle 0x0005, DMI type 7, 19 bytes
Cache Information
        Socket Designation: L1 Cache
        Configuration: Enabled, Not Socketed, Level 1
        Operational Mode: Write Back
        Location: Internal
        Installed Size: 128 kB
        Maximum Size: 128 kB
        Supported SRAM Types: Synchronous
        Installed SRAM Type: Synchronous
        Speed: Unknown
        Error Correction Type: Parity
        System Type: Data
        Associativity: 8-way Set-associative

Handle 0x0006, DMI type 7, 19 bytes
Cache Information
        Socket Designation: L1 Cache
        Configuration: Enabled, Not Socketed, Level 1
        Operational Mode: Write Back
        Location: Internal
        Installed Size: 128 kB
        Maximum Size: 128 kB
        Supported SRAM Types: Synchronous
        Installed SRAM Type: Synchronous
        Speed: Unknown
        Error Correction Type: Parity
        System Type: Instruction
        Associativity: 8-way Set-associative

Handle 0x0007, DMI type 7, 19 bytes
Cache Information
        Socket Designation: L2 Cache
        Configuration: Enabled, Not Socketed, Level 2
        Operational Mode: Write Back
        Location: Internal
        Installed Size: 1024 kB
        Maximum Size: 1024 kB
        Supported SRAM Types: Synchronous
        Installed SRAM Type: Synchronous
        Speed: Unknown
        Error Correction Type: Single-bit ECC
        System Type: Unified
        Associativity: 4-way Set-associative

Handle 0x0008, DMI type 7, 19 bytes
Cache Information
        Socket Designation: L3 Cache
        Configuration: Enabled, Not Socketed, Level 3
        Operational Mode: Write Back
        Location: Internal
        Installed Size: 6144 kB
        Maximum Size: 6144 kB
        Supported SRAM Types: Synchronous
        Installed SRAM Type: Synchronous
        Speed: Unknown
        Error Correction Type: Multi-bit ECC
        System Type: Unified
        Associativity: 12-way Set-associative
```

## Understanding the code
Let's start from main.c.
* The program first opens `words.txt`:
```clike
/* check file opening */
fp = fopen(DICT_FILE, "r");
if (fp == NULL) {
    printf("cannot open the file\n");
    return -1;
}
```
* Build a linked list of `entry` nodes (`entry` is a struct defined in `phonebook_orig.h`):
```clike
/* build the entry */
entry *pHead, *e;
pHead = (entry *) malloc(sizeof(entry));
printf("size of entry : %lu bytes\n", sizeof(entry));
e = pHead;
e->pNext = NULL;
```
* If the compiler is gcc, use the gcc built-in `__builtin___clear_cache (char *begin, char *end)` to clear the cache:
```clike
#if defined(__GNUC__)
    __builtin___clear_cache((char *) pHead, (char *) pHead + sizeof(entry));
#endif
```
* Record the start time in the variable `start`:
```clike
clock_gettime(CLOCK_REALTIME, &start);
```
* Chain every string in the file pointed to by `fp` into the linked list with `append()`,
* then record the end time in the variable `end`:
```clike
while (fgets(line, sizeof(line), fp)) {
    while (line[i] != '\0')
        i++;
    line[i - 1] = '\0';
    i = 0;
    e = append(line, e);
}
clock_gettime(CLOCK_REALTIME, &end);
```
* Let's see what `append()` actually does:
* it simply adds one node to the linked list.
```clike
entry *append(char lastName[], entry *e)
{
    /* allocate memory for the new entry and put lastName */
    e->pNext = (entry *) malloc(sizeof(entry));
    e = e->pNext;
    strcpy(e->lastName, lastName);
    e->pNext = NULL;
    return e;
}
```
* Back to main.c.
* Compute the time spent building the linked list with `diff_in_second()` (a sketch of this helper follows below):
```clike
cpu_time1 = diff_in_second(start, end);
```
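`diff_in_second()` itself is not shown in the walkthrough above. The following is only a sketch of how such a helper is typically written over two `struct timespec` values, not necessarily the exact code in the repository:
```clike
#include <time.h>

/* Return the elapsed time between two struct timespec values in seconds.
 * Borrow one second from tv_sec when the nanosecond part would go negative. */
static double diff_in_second(struct timespec t1, struct timespec t2)
{
    struct timespec diff;
    if (t2.tv_nsec - t1.tv_nsec < 0) {
        diff.tv_sec = t2.tv_sec - t1.tv_sec - 1;
        diff.tv_nsec = t2.tv_nsec - t1.tv_nsec + 1000000000;
    } else {
        diff.tv_sec = t2.tv_sec - t1.tv_sec;
        diff.tv_nsec = t2.tv_nsec - t1.tv_nsec;
    }
    return diff.tv_sec + diff.tv_nsec / 1000000000.0;
}
```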
* The asserts are presumably there to verify that `findName()` returns the correct result.
* The usage and purpose of `assert()` are explained around the 25-minute mark of the [assignment walkthrough video](https://www.youtube.com/watch?v=ZICRLKf_bVw):
```clike
assert(findName(input, e) &&
       "Did you implement findName() in " IMPL "?");
assert(0 == strcmp(findName(input, e)->lastName, "zyxel"));
```
* A look at `findName()`:
* it walks the linked list node by node looking for a matching lastName;
* `strcasecmp()` is just a case-insensitive `strcmp()`.
```clike
entry *findName(char lastName[], entry *pHead)
{
    while (pHead != NULL) {
        if (strcasecmp(lastName, pHead->lastName) == 0)
            return pHead;
        pHead = pHead->pNext;
    }
    return NULL;
}
```
* Back in the main function:
* this runs `findName()` and measures its execution time.
```clike
/* compute the execution time */
clock_gettime(CLOCK_REALTIME, &start);
findName(input, e);
clock_gettime(CLOCK_REALTIME, &end);
cpu_time2 = diff_in_second(start, end);
```
* Output the results:
```clike
FILE *output = fopen(OUT_FILE, "a");
fprintf(output, "append() findName() %lf %lf\n", cpu_time1, cpu_time2);
fclose(output);
printf("execution time of append() : %lf sec\n", cpu_time1);
printf("execution time of findName() : %lf sec\n", cpu_time2);
```

## Basic runs
With the code understood, it is time to run it.
After compiling, running `$ make run` gives:
```
size of entry : 136 bytes
execution time of append() : 0.057965 sec
execution time of findName() : 0.005580 sec
3
```
* Observations
  * `append()` usually takes about 0.05 sec, though it occasionally jumps to a few tenths of a second.
  * `findName()` is more consistent.

Running `$ make plot` produces the chart below:
![](https://i.imgur.com/nmyNMXD.png)
Since I have not implemented the optimized version yet, both curves come from the same phonebook_orig run.

Next, use `perf` directly to look at phonebook_orig's cache misses:
```
sudo perf stat -r 100 -e cache-misses,cache-references,instructions,cycles ./phonebook_orig
```
which gives:
```
 Performance counter stats for './phonebook_orig' (100 runs):

         4,712,807      cache-misses       #   94.628 % of all cache refs   ( +- 0.08% )
         4,980,334      cache-references                                    ( +- 0.09% )
       262,305,959      instructions       #    1.47  insn per cycle        ( +- 0.02% )
       179,020,704      cycles                                              ( +- 0.09% )

       0.064731472 seconds time elapsed                                     ( +- 0.77% )
```
The cache miss rate reaches 94.6%; the performance needs improvement.
:::danger
Skip empty phrases like "needs improvement" — the program is what it is; the analysis is your expertise.
:notes: jserv
:::
>Understood, I will think it through more carefully next time. Thank you, professor.
>[name=hungyuhang]

## Improving performance
### Reducing the entry size
The most straightforward way to reduce cache misses is to shrink `entry`.
Since nothing other than the last name is used during the search, I split the detail fields out of the original `entry`:
```clike
typedef struct __PHONE_BOOK_ENTRY_DETAILS {
    char firstName[16];
    char email[16];
    char phone[10];
    char cell[10];
    char addr1[16];
    char addr2[16];
    char city[16];
    char state[2];
    char zip[5];
} entryDetails;

typedef struct __PHONE_BOOK_ENTRY {
    char lastName[MAX_LAST_NAME_SIZE];
    struct __PHONE_BOOK_ENTRY_DETAILS *pDetails;
    struct __PHONE_BOOK_ENTRY *pNext;
} entry;
```
and, inside `append()`, malloc a separate block for the details:
```clike
e->pDetails = (entryDetails *) malloc(sizeof(entryDetails));
```
Here are the results:
```
 Performance counter stats for './phonebook_orig' (100 runs):

         4,804,625      cache-misses       #   95.372 % of all cache refs   ( +- 0.09% )
         5,037,773      cache-references                                    ( +- 0.10% )
       263,494,929      instructions       #    1.42  insn per cycle        ( +- 0.02% )
       185,823,220      cycles                                              ( +- 0.26% )

       0.070190806 seconds time elapsed                                     ( +- 0.46% )
```
```
 Performance counter stats for './phonebook_opt' (100 runs):

         5,908,561      cache-misses       #   95.667 % of all cache refs   ( +- 0.10% )
         6,176,199      cache-references                                    ( +- 0.12% )
       344,247,212      instructions       #    1.35  insn per cycle        ( +- 0.02% )
       254,166,940      cycles                                              ( +- 0.15% )

       0.094239414 seconds time elapsed                                     ( +- 0.28% )
```
![](https://i.imgur.com/z9FQLOm.png)
* Results
  * Both functions got slower. Even though `entry` shrank to 32 bytes, `findName()` got worse rather than better, presumably because the memory is no longer contiguous; `append()` also takes longer because it has to malloc more memory.
* Improvement
  * Following [ryanwang522's development notes](https://hackmd.io/iUY7i3c3Ty-GXXQs70Krmw?view), skip the `malloc()` for `entryDetails` for now and allocate it later only when it is actually needed; this also avoids the non-contiguous memory problem (a sketch of the idea follows below).
:::danger
Where exactly does non-contiguous memory hurt? Can you explain it in terms of the [Virtual address space](https://en.wikipedia.org/wiki/Virtual_address_space)?
:notes: jserv
:::
>When the cache loads data it does not load only what is needed; it loads a whole cache line at a time. If the memory is not contiguous, findName() walking 100 linked-list objects may have to load more cache lines than it would if the memory were contiguous. After reading some material on virtual address space I still do not see the connection here — I would appreciate further pointers, thank you!
>[name=hungyuhang]
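A minimal sketch of that lazy-allocation idea, assuming the `entry`/`entryDetails` definitions above (the `getDetails()` helper is invented for illustration):
```clike
#include <stdlib.h>
#include <string.h>

entry *append(char lastName[], entry *e)
{
    /* allocate memory only for the new entry itself */
    e->pNext = (entry *) malloc(sizeof(entry));
    e = e->pNext;
    strcpy(e->lastName, lastName);
    e->pDetails = NULL;   /* defer the details allocation */
    e->pNext = NULL;
    return e;
}

/* hypothetical helper: allocate the details block on first use */
entryDetails *getDetails(entry *e)
{
    if (!e->pDetails)
        e->pDetails = (entryDetails *) malloc(sizeof(entryDetails));
    return e->pDetails;
}
```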
Here are the results after this change:
```
 Performance counter stats for './phonebook_opt' (100 runs):

         1,566,594      cache-misses       #   68.546 % of all cache refs   ( +- 0.37% )
         2,285,454      cache-references                                    ( +- 0.15% )
       245,521,595      instructions       #    2.04  insn per cycle        ( +- 0.02% )
       120,161,738      cycles                                              ( +- 0.17% )

       0.044845395 seconds time elapsed                                     ( +- 0.59% )
```
![](https://i.imgur.com/ZRTOzV7.png)
The cache miss rate has dropped from 95% to 69%, and both functions' execution times improved noticeably.

---

### Implementing a hash function
* Why use a hash function
  * It reduces the time complexity of searching the phonebook from O(n) to O(1) on average.
* Choice of hash function
  * BKDR hash
```clike
unsigned int BKDRHash(char *str)
{
    unsigned int seed = 131;   // 31 131 1313 13131 131313 etc..
    unsigned int hash = 0;
    while (*str) {
        hash = hash * seed + (*str++);
    }
    return (hash & (MAX_HASH_SIZE - 1));   // hash & 0x000001FF
}
```
The BKDR hash function is taken from [here](https://www.byvoid.com/zht/blog/string-hash-compare).
* Choice of hash table size
  * My L1d cache is 32K and an `entry *` is 8 bytes (64-bit), so I picked 32K / (8 * 8) = 512 as `MAX_HASH_SIZE`.

[This article](https://blog.kdchang.cc/2016/09/23/javascript-data-structure-algorithm-dictionary-hash-table/) describes ways to handle hash-table collisions; two of them caught my interest:
* Separate chaining
  * On a collision, chain one more struct onto the bucket's linked list.
  ![](https://i.imgur.com/bCD67Dd.gif)
* Linear probing
  * On a collision at bucket n, store the value in bucket n+1; if that one is also occupied, try bucket n+2, and so on.

I will try implementing both approaches here.

#### Separate chaining
In my first implementation, the total `append()` time was unexpectedly over six seconds:
```
size of entry : 32 bytes
execution time of append() : 6.451466 sec
execution time of findName() : 0.000009 sec
```
After raising `MAX_HASH_SIZE` to 16384, the `append()` time dropped below 0.3 sec, as shown below:
```
size of entry : 32 bytes
execution time of append() : 0.287090 sec
execution time of findName() : 0.000000 sec
```
```
 Performance counter stats for './phonebook_opt' (100 runs):

         9,952,966      cache-misses       #   53.746 % of all cache refs   ( +- 0.14% )
        18,518,459      cache-references                                    ( +- 0.10% )
       270,336,567      instructions       #    0.37  insn per cycle        ( +- 0.02% )
       733,241,873      cycles                                              ( +- 0.10% )

       0.284569326 seconds time elapsed                                     ( +- 0.10% )
```
* Observations and thoughts
  * With a small `MAX_HASH_SIZE`, every bucket's linked list eventually grows quite long, so adding an object with `append()` costs a lot of time. The sharp drop in `append()` time after enlarging `MAX_HASH_SIZE` supports this explanation.
  * The improvement in `findName()` presumably comes from the same effect.
  * The cache miss rate also dropped to 54%.

But `append()` still takes too long, and I have not seen a similar problem in anyone else's notes.

* Improvement
  * Add a pointer to each hash-table bucket that points at the tail of that bucket's linked list, so adding a new object no longer has to traverse the whole chain (a sketch follows below).
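The updated hash-table code itself is not reproduced in this note, so the following is only a minimal sketch of the idea, reusing `BKDRHash()` and the 32-byte `entry` from above (the `bucket` type and the `hashAppend()` name are invented for illustration):
```clike
#include <stdlib.h>
#include <string.h>

/* one bucket of the hash table: head and tail of its chain */
typedef struct {
    entry *pHead;
    entry *pTail;
} bucket;

/* static storage, so every pHead/pTail starts out NULL */
static bucket hashTable[MAX_HASH_SIZE];

/* append a new entry to its bucket in O(1) via the tail pointer */
entry *hashAppend(char lastName[], bucket *table)
{
    unsigned int h = BKDRHash(lastName);

    entry *e = (entry *) malloc(sizeof(entry));
    strcpy(e->lastName, lastName);
    e->pDetails = NULL;
    e->pNext = NULL;

    if (table[h].pTail)            /* chain already has nodes: link after tail */
        table[h].pTail->pNext = e;
    else                           /* first node in this bucket */
        table[h].pHead = e;
    table[h].pTail = e;

    return e;
}
```
With this layout the call site in main.c would look something like `e = hashAppend(line, hashTable);`, and `findName()` only has to walk the single chain starting at `hashTable[BKDRHash(lastName)].pHead`.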
Test results:
```
size of entry : 32 bytes
execution time of append() : 0.044586 sec
execution time of findName() : 0.000007 sec

 Performance counter stats for './phonebook_opt' (100 runs):

           546,952      cache-misses       #   44.898 % of all cache refs   ( +- 0.27% )
         1,218,207      cache-references                                    ( +- 0.22% )
       237,923,280      instructions       #    1.89  insn per cycle        ( +- 0.02% )
       126,121,608      cycles                                              ( +- 0.14% )

       0.045734427 seconds time elapsed                                     ( +- 0.32% )
```
To my relief, `append()` is back to a normal time, and the cache miss rate also dropped a bit. My guess is that the previous `append()` not only took longer but also raised the cache miss rate (visiting more linked-list objects naturally produces more cache misses).

Next I tried different values for `MAX_HASH_SIZE` (currently 512). Results:
* `MAX_HASH_SIZE` = 2048
```
size of entry : 32 bytes
execution time of append() : 0.045723 sec
execution time of findName() : 0.000001 sec

 Performance counter stats for './phonebook_opt' (100 runs):

           563,158      cache-misses       #   35.935 % of all cache refs   ( +- 0.34% )
         1,567,170      cache-references                                    ( +- 0.33% )
       237,919,696      instructions       #    1.84  insn per cycle        ( +- 0.02% )
       129,186,186      cycles                                              ( +- 0.16% )

       0.046867045 seconds time elapsed                                     ( +- 0.32% )
```
* `MAX_HASH_SIZE` = 8192
```
size of entry : 32 bytes
execution time of append() : 0.046086 sec
execution time of findName() : 0.000000 sec

 Performance counter stats for './phonebook_opt' (100 runs):

           642,951      cache-misses       #   26.916 % of all cache refs   ( +- 0.80% )
         2,388,694      cache-references                                    ( +- 0.25% )
       238,001,680      instructions       #    1.73  insn per cycle        ( +- 0.02% )
       137,191,041      cycles                                              ( +- 0.23% )

       0.049937660 seconds time elapsed                                     ( +- 0.39% )
```
* `MAX_HASH_SIZE` = 20000
```
size of entry : 32 bytes
execution time of append() : 0.044738 sec
execution time of findName() : 0.000018 sec

 Performance counter stats for './phonebook_opt' (100 runs):

           565,576      cache-misses       #   43.535 % of all cache refs   ( +- 0.39% )
         1,299,122      cache-references                                    ( +- 0.63% )
       238,245,416      instructions       #    1.88  insn per cycle        ( +- 0.02% )
       126,945,824      cycles                                              ( +- 0.15% )

       0.045927047 seconds time elapsed                                     ( +- 0.35% )
```
After trying several more values, I noticed some interesting patterns:
* When the hash table size is a power of two:
  * the cache miss rate is lower,
  * `findName()` is clearly faster,
  * but `append()` takes slightly longer.
* When the hash table size is not a power of two (e.g. 20000):
  * the cache miss rate is higher,
  * `findName()` is clearly slower,
  * but `append()` is slightly faster.
* When the hash table gets too large, the cache miss rate increases.
* Discussion
  * My rough guess is that the power-of-two behaviour has something to do with alignment, which I will look into later. Honestly, though, I still have no idea why the miss rate is lower at those table sizes.
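A side note on the reduction step that might be part of the puzzle: `BKDRHash()` above maps the hash to an index with `hash & (MAX_HASH_SIZE - 1)`, and that mask only reaches every bucket when `MAX_HASH_SIZE` is a power of two. If the masking line were left unchanged for a size such as 20000, a large fraction of the buckets could never be selected; a non-power-of-two size needs `%` instead. A minimal sketch of the two reduction styles (nothing here is from the repository; `MAX_HASH_SIZE` is assumed to be the same macro as above):
```clike
/* Reduce a BKDR hash value to a table index.
 * With a power-of-two table size, masking works and is cheap. */
unsigned int reduce_pow2(unsigned int hash)
{
    return hash & (MAX_HASH_SIZE - 1);   /* valid only if MAX_HASH_SIZE == 2^n */
}

/* With an arbitrary table size (e.g. 20000), use modulo so that
 * every bucket in [0, MAX_HASH_SIZE) can actually be hit. */
unsigned int reduce_any(unsigned int hash)
{
    return hash % MAX_HASH_SIZE;
}
```
Whether this applies depends on how the 20000 run was actually reduced, so it is only a hypothesis.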
#### Linear probing
The [assignment requirements](https://hackmd.io/s/SykZ8IMOf#%E4%BD%9C%E6%A5%AD%E8%A6%81%E6%B1%82) state:
> Improve the performance analysis of the phonebook: through a large number of repeated runs, find the 95% confidence interval, and allow data to be added dynamically (closer to reality) and queried.

If the hash table is built with linear probing, every string has to fit into its own bucket, so the table size must exceed the total number of words. But once data can be added dynamically, the number of strings is unknown, so the hash table size cannot be fixed in advance either. I need to think about other ways to implement this.
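For reference, here is a minimal sketch of what open addressing with linear probing could look like on top of the same `entry` type and `BKDRHash()` (the `probeTable` and `probeInsert` names are invented, and the table is deliberately fixed-size, which is exactly the limitation above — supporting dynamic insertion would require growing and rehashing the table once it fills up):
```clike
#include <stdlib.h>
#include <string.h>

/* open-addressing table: each slot holds at most one entry pointer */
static entry *probeTable[MAX_HASH_SIZE];   /* zero-initialized: empty slots are NULL */

/* hypothetical insert with linear probing; returns NULL when the table is full */
entry *probeInsert(char lastName[])
{
    unsigned int h = BKDRHash(lastName);
    for (unsigned int i = 0; i < MAX_HASH_SIZE; i++) {
        unsigned int slot = (h + i) & (MAX_HASH_SIZE - 1);   /* assumes a 2^n size */
        if (!probeTable[slot]) {
            entry *e = (entry *) malloc(sizeof(entry));
            strcpy(e->lastName, lastName);
            e->pDetails = NULL;
            e->pNext = NULL;
            probeTable[slot] = e;
            return e;
        }
    }
    return NULL;   /* table is full: this is where resizing would be needed */
}
```
A lookup would probe the same sequence of slots until it either finds a matching `lastName` or reaches an empty slot.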

## Discussion
* Is the data representative? How different is it from real English surnames?
  * Real English surnames repeat fairly often, so a search cannot rely on the last name alone.
* Is more data always better?
  * A larger data set can record more information, but the programmer then has to spend correspondingly more effort on optimization, and the program may still not match the performance it had with a smaller data set. There is a trade-off to strike between data volume and performance.
* How would we build input that matches a real phonebook scenario?
  * Build an interactive interface that lets the user use other fields to confirm whether a record is the one they are looking for.
