2017 q1 Homework3 (software pipeline)
===
> Contributed by `PeterTing`

###### tags: `pipeline` `PeterTing`

## Reading Notes

References:
[Summary notes](https://hackmd.io/s/HJtfT3icx)
[Paper](http://www.cc.gatech.edu/~hyesoon/lee_taco12.pdf)

### Introduction

* Inserting prefetch intrinsics by hand is problematic, mainly for two reasons:
  1. There is no rigorous standard for how to insert prefetch intrinsics.
  2. The complexity of the interaction between software and hardware prefetching is not well understood.

#### Goals

* Provide guidelines for using intrinsics effectively in the presence of a HW prefetcher.
* Identify combinations that perform well when HW & SW prefetchers coexist.

#### Three angles for examining why benchmarks show positive/neutral/negative interactions when software and hardware prefetching are combined

* Source code
* Data structures
  * regular data structures (e.g. arrays)
  * irregular data structures (e.g. RDS, Recursive Data Structures)
* Algorithms

### Background on SOFTWARE and HARDWARE PREFETCHING

#### Data structures

* The experiments use array and RDS data structures.
* There are two kinds of cache-line access:
  * **streams**: **unit-stride** cache-line accesses
  * **strided**: access stride distances **greater than two** cache lines
* Three hardware-based prefetcher mechanisms are used in the experiments:
  * stream prefetcher
  * GHB prefetcher
  * content-based prefetcher

#### Software Prefetch Intrinsics

* The SSE intrinsic `_mm_prefetch(char *ptr, int hint)` takes:
  * a prefetch address
  * a prefetch usage hint:
![](https://i.imgur.com/zHm4uX8.png)

#### Prefetch classification

![](https://i.imgur.com/JCu3psd.png)
![](https://i.imgur.com/dFYj7Eo.png)

* MSHR: Miss Status Holding Register

#### Software Prefetch Distance

* Prefetching only helps when prefetch requests are issued early enough to completely hide the memory latency.
![](https://i.imgur.com/JsV1VpJ.png)
* The prefetch distance must be large enough to hide the memory latency, where
  * *l*: prefetch latency
  * *s*: the shortest path through one loop iteration

#### Positive and negative effects of SOFTWARE PREFETCHING

##### Advantages

* **Large Number of Streams**
  * The number of streams a HW prefetcher can handle is limited by hardware resources.
  * A SW prefetcher can issue prefetch requests into any number of streams.
* **Short Streams**
  * A HW prefetcher needs time to train, so short streams escape it.
* **Irregular Memory Access**
  * SW prefetching (inserting intrinsics) is better at prefetching **complex data structures**.
* **Cache Locality Hint**
  * With SW prefetching, the programmer can **tune the locality hint**.
  * HW prefetchers mostly move prefetched data into a lower-level cache (L2 or L3).
    * Pro: reduces L1 cache pollution.
    * Con: the latency of moving data from the lower-level cache up to L1 degrades performance.
* **Loop Bounds**

##### Disadvantages

* **Increased Instruction Count**
* **Static Insertion**
* **Code Structure Change**
  * If a loop body contains **too few instructions (a single iteration is too short)**, inserted prefetches may fail to hide the memory latency (the prefetch distance < 1 case); this requires **loop splitting**.

#### Advantages of SW + HW prefetching

* **Handling Multiple Streams**
  * More streams can be handled at once: the **HW prefetcher** covers **regular streams**, the **SW prefetcher** covers **irregular streams**.
* **Positive training**
  * SW prefetching can train the HW prefetcher in a helpful direction.
  * For example, if SW prefetches arrive too late, a trained HW prefetcher learns to be more timely.

#### Disadvantages of SW + HW prefetching

* **Negative training**
* **Harmful Software Prefetching**

#### Prefetch insertion algorithm

* Prefetch candidates: L1 cache misses per 1000 instructions (MPKI) > 0.05
* Prefetch distance:
![](https://i.imgur.com/KcVHDSm.png)
  *where k is a constant, L is the average memory latency, IPCbench is the profiled average IPC of each benchmark, and Wloop is the average number of instructions per loop iteration.*
* The paper uses K = 4 and L = 300.
* IPCbench and Wloop are determined from each benchmark's profile data, so the prefetch distance is effectively profile-driven.

### Experimental evaluation

#### Overhead & limitations of SW prefetching

* Instruction overhead: the instruction count grows substantially.
![](https://i.imgur.com/4je9gba.png)
  * GemsFDTD, bwaves, and leslie3d grow by more than 50%; bwaves even exceeds 100%.
  * Besides the prefetch instructions themselves, some of the growth comes from instructions that handle indirect memory accesses and index computation.
  * Even with more than 50% more instructions, GemsFDTD still improves with SW prefetching (it is in the positive group), so **the benefit of prefetching outweighs the instruction overhead**.
* Removing the negative effects of SW prefetching one at a time:
  * Cache pollution overhead (SW+P)
    * Cache pollution can be caused by early or incorrect prefetches, but both rarely occur in the experiments, so removing cache pollution (SW+P in the figure) barely changes the cycle count.
  * Bandwidth consumption (SW+B, between the CPU core and memory)
    * Removing bandwidth consumption (SW+B in the figure) also has little effect, meaning bandwidth is sufficient for the single-threaded applications in the experiments.
  * **Memory access latency** (SW+L) (**the biggest factor**)
    * Removing memory access latency (SW+L in the figure) has a dramatic effect, meaning that in these experiments prefetching did not fully hide the memory latency.
  * Redundant prefetch overhead (SW+R)
    * Even though many redundant prefetches occur in the experiments, removing their overhead (SW+R in the figure) has an almost negligible effect.
  * Instruction overhead (SW+I)
    * Although GemsFDTD, bwaves, and leslie3d carry large instruction overhead, removing it (SW+I in the figure) does not change much either.
* Effect of prefetch distance: how sensitive each benchmark is to the prefetch distance.
  * The x-axis is the offset from the base prefetch distance (the distance computed by the prefetch insertion algorithm's formula).
  * Prefetch distance formula:
![](https://i.imgur.com/JsV1VpJ.png)
  * Based on the optimal range and the differences in performance, the experiments divide the benchmarks into five groups:
![](https://i.imgur.com/s7bpZz7.png)
  * Benchmarks with neutral or negative overall results mostly fall into group E, the group that is nearly insensitive to prefetch distance.
  * **Most benchmarks are in group E.**

#### Using HW & SW prefetchers together

* Training effect on the HW prefetcher
  * Conclusion: although some benchmarks gain from training, the performance degradation in others far outweighs the gains. In general, using SW prefetching to train the HW prefetcher is not a good choice.
* Prefetch coverage (ratio)
  * Coverage: the percentage of misses avoided due to prefetching = 100 × (Prefetch Hits / (Prefetch Hits + Cache Misses))
![](https://i.imgur.com/IrcvW4y.png)
  (a) Effective prefetch coverage of GHB and STR (relative to all L2 misses). (b) Prefetch coverage of SW, SW + GHB, and SW + STR; the additional useful prefetches generated by HW prefetching are shown in white.
  * **Low coverage is the main reason performance fails to improve.**

#### Profile-Guided Optimization (PGO)

* Performance sits between base and SW, but not all of the gain comes from prefetching: PGO reduces both the total instruction count and the number of prefetch instructions.

#### A HW prefetcher suited to short streams (ASD)

* Conclusion: SW prefetching still handles short streams better than ASD.

#### Content Directed Prefetching (CDP)

* Goal: prefetching for linked or irregular data structures.
* SW prefetching (a 78% performance improvement) is still better suited to irregular data structures.

#### Summary of new findings

1. Even for regular access patterns such as streams, a HW prefetcher is often overkill, especially for short streams, where SW prefetching is more efficient.
2. The SW prefetch distance is not very sensitive to the HW configuration, but it still has to be set carefully. In typical applications, any distance larger than the minimum given by the formula performs about equally well.
3. Although most L1 cache misses are tolerated by out-of-order execution, when the L1 miss rate is high (> 20%), using prefetching to reduce L1 misses is still the more effective way to improve performance.
4. Most applications with prefetch instructions are also limited by memory operations, so in SW prefetching the instruction overhead has little impact on performance.
5. SW prefetching can train the HW prefetcher and thereby gain performance, but in some cases it can also cause severe degradation, so it must be used with care.

## Development Log

### Intrinsic semantics

* `_mm_loadu_si128 (__m128i *p)`: loads a 128-bit value.
* `__m128i _mm_unpacklo_epi32 (__m128i a, __m128i b)`: interleaves the lower 2 signed or unsigned 32-bit integers of a with the lower 2 signed or unsigned 32-bit integers of b.

I could not figure out what the code was doing until I read this [article](https://www.randombit.net/bitbashing/2009/10/08/integer_matrix_transpose_in_sse2.html) and realized SSE2 intrinsics could be used this way!

![](https://i.imgur.com/2qHvtFo.png)

### Execution results

```
peterting@peterting-MacBookAir:~/hw1/prefetcher$ ./main
0 1 2 3
4 5 6 7
8 9 10 11
12 13 14 15
0 4 8 12
1 5 9 13
2 6 10 14
3 7 11 15
sse prefetch: 66406 us
sse: 138834 us
naive: 257251 us
```

* sse_prefetch is roughly twice as fast as sse.
* sse is in turn roughly twice as fast as naive.

### Initial perf analysis

* naive
```
 Performance counter stats for './time_test_naive' (5 runs):

        23,741,359      cache-misses        #   92.326 % of all cache refs    ( +- 0.27% )
        25,714,599      cache-references                                      ( +- 0.21% )
     1,448,480,256      instructions        #    1.39  insn per cycle         ( +- 0.00% )
     1,041,838,184      cycles                                                ( +- 0.95% )

       0.413386175 seconds time elapsed                                       ( +- 0.59% )
```
* sse
```
 Performance counter stats for './time_test_sse' (5 runs):

        11,265,217      cache-misses        #   88.392 % of all cache refs    ( +- 0.91% )
        12,744,561      cache-references                                      ( +- 0.23% )
     1,239,483,973      instructions        #    1.57  insn per cycle         ( +- 0.05% )
       791,477,045      cycles                                                ( +- 1.56% )

       0.316970817 seconds time elapsed                                       ( +- 1.52% )
```
* sse_prefetch
```
 Performance counter stats for './time_test_sse_prefetch' (5 runs):

         8,154,781      cache-misses        #   82.791 % of all cache refs    ( +- 0.54% )
         9,849,849      cache-references                                      ( +- 0.34% )
     1,282,597,351      instructions        #    2.12  insn per cycle         ( +- 0.00% )
       604,796,586      cycles                                                ( +- 0.31% )

       0.227500790 seconds time elapsed                                       ( +- 0.39% )
```

* The numbers show that cache-references drop the most: SSE intrinsics plus loop unrolling pull in a lot of data at once, reducing cache-references, and prefetching fetches data ahead of time to reduce them further, which improves performance.
* With prefetching, the instructions executed per cycle rise by 0.55 compared with the sse version without prefetching.
* Consistent with the paper's experiments: with prefetching, the instruction count grows while cache misses shrink.

#### More detailed analysis

```
 Performance counter stats for './time_test_naive' (5 runs):

        25,022,030      cache-misses            #   95.074 % of all cache refs     ( +- 2.27% )  (49.48%)
        26,318,442      cache-references                                           ( +- 1.71% )  (50.09%)
     1,437,875,051      instructions            #    1.27  insn per cycle          ( +- 0.81% )  (62.93%)
     1,130,770,516      cycles                                                     ( +- 2.92% )  (63.19%)
       553,624,632      L1-dcache-load                                             ( +- 0.35% )  (63.55%)
        21,039,026      L1-dcache-load-misses   #    3.80% of all L1-dcache hits   ( +- 0.47% )  (63.23%)
        16,824,808      LLC-loads                                                  ( +- 0.94% )  (62.54%)
        16,401,746      LLC-load-misses         #   97.49% of all LL-cache hits    ( +- 1.22% )  (49.44%)

       0.482699928 seconds time elapsed                                            ( +- 2.42% )
```
```
 Performance counter stats for './time_test_sse' (5 runs):

        11,560,588      cache-misses            #   90.962 % of all cache refs     ( +- 0.49% )  (49.62%)
        12,709,243      cache-references                                           ( +- 1.58% )  (50.03%)
     1,237,685,613      instructions            #    1.58  insn per cycle          ( +- 1.16% )  (62.94%)
       784,200,462      cycles                                                     ( +- 0.84% )  (63.03%)
       443,082,381      L1-dcache-load                                             ( +- 0.35% )  (63.50%)
         8,410,213      L1-dcache-load-misses   #    1.90% of all L1-dcache hits   ( +- 0.52% )  (63.55%)
         4,249,844      LLC-loads                                                  ( +- 2.39% )  (62.55%)
         4,240,328      LLC-load-misses         #   99.78% of all LL-cache hits    ( +- 1.87% )  (49.33%)

       0.338988874 seconds time elapsed                                            ( +- 1.40% )
```
```
 Performance counter stats for './time_test_sse_prefetch' (5 runs):

         8,723,431      cache-misses            #   87.511 % of all cache refs     ( +- 2.41% )  (49.65%)
         9,968,407      cache-references                                           ( +- 3.49% )  (50.05%)
     1,279,998,675      instructions            #    1.92  insn per cycle          ( +- 0.81% )  (62.74%)
       666,755,588      cycles                                                     ( +- 1.99% )  (63.35%)
       461,306,862      L1-dcache-load                                             ( +- 0.54% )  (64.27%)
         8,469,261      L1-dcache-load-misses   #    1.84% of all L1-dcache hits   ( +- 1.63% )  (63.80%)
         4,294,555      LLC-loads                                                  ( +- 4.45% )  (62.69%)
         4,080,283      LLC-load-misses         #   95.01% of all LL-cache hits    ( +- 3.62% )  (49.12%)

       0.284974582 seconds time elapsed                                            ( +- 3.21% )
```

* Whether or not SSE is used makes a large difference in L1-dcache-load counts, while prefetching has little effect on them.
* The LLC-load numbers lead to the same conclusion.
* For every implementation, the L1-dcache-load-miss percentage is low, yet the LLC-load-miss percentage is very high.
> Why?
### Switching the intrinsic in use

Reading the intrinsic description carefully, `_mm_loadu_si128` does not require aligned data, which can force two reads to fetch the data and lower performance, so I tried the aligned variant on the sse method:

```
 Performance counter stats for './time_test_sse' (5 runs):

        11,094,578      cache-misses        #   87.194 % of all cache refs    ( +- 0.27% )
        12,724,009      cache-references                                      ( +- 0.13% )
     1,236,514,600      instructions        #    1.66  insn per cycle         ( +- 0.00% )
       745,361,642      cycles                                                ( +- 0.84% )

       0.297265198 seconds time elapsed                                       ( +- 1.18% )
```

Compared with the earlier run, this saves about 46,000,000 cycles and roughly 0.02 s on average.

### Optimizing with AVX

When I first finished the code and ran the tests, it kept failing the assert. Commenting the assert out let it run, but it ended with `Stack smashing detected`. After reading [this article](https://read01.com/o4O84L.html), I figured the array was too small, so I changed the array size to 8 × 8 and it worked. Presumably this is because AVX consumes 8 integers at a time, so the output is also printed as an 8 × 8 matrix.

```
 Performance counter stats for './time_test_avx' (5 runs):

         9,168,311      cache-misses            #   89.814 % of all cache refs     ( +- 3.67% )  (49.47%)
        10,208,139      cache-references                                           ( +- 2.60% )  (49.63%)
     1,141,924,638      instructions            #    1.79  insn per cycle          ( +- 1.09% )  (62.34%)
       638,801,872      cycles                                                     ( +- 1.00% )  (62.63%)
       409,850,660      L1-dcache-load                                             ( +- 1.11% )  (63.70%)
         7,951,142      L1-dcache-load-misses   #    1.94% of all L1-dcache hits   ( +- 2.01% )  (64.09%)
         3,314,556      LLC-load                                                   ( +- 2.20% )  (63.13%)
         3,382,726      LLC-load-misses         #  102.06% of all LL-cache hits    ( +- 3.67% )  (49.50%)

       0.255533258 seconds time elapsed                                            ( +- 1.49% )
```

AVX brings a modest further performance improvement.

### Using object orientation

> Question:
> If we use `int init_object(Object *self)` and the value passed in is the address of a struct pointer, shouldn't the space malloc'ed to `self` also go directly to the struct pointer that was passed in?
After reading [the object-oriented programming notes](https://hackmd.io/s/HJLyQaQMl#), I did a simple implementation:

```c
typedef struct _TransposeInfo TransposeInfo;
typedef void (*func_t)(TransposeInfo *);

struct _TransposeInfo {
    int *src;
    int *dst;
    int w;
    int h;
    func_t strategy;  /* points to the strategy to use */
};
```

TransposeInfo gathers all the data a transpose implementation needs, and `func_t strategy` lets us decide dynamically which strategy to use.

* This also solves the problem of main.c repeating too much code when timing the tests. Following [jack's notes](https://hackmd.io/s/rJRG6D4ix#), the different implementations are wrapped behind the same interface; definitions in the Makefile substitute `name` and `transposeMethod` with the chosen method and its name.

```c
struct timespec start, end;
int *src = (int *) malloc(sizeof(int) * TEST_W * TEST_H);
int *out = (int *) malloc(sizeof(int) * TEST_W * TEST_H);

srand(time(NULL));
for (int y = 0; y < TEST_H; y++)
    for (int x = 0; x < TEST_W; x++)
        *(src + y * TEST_W + x) = rand();
clock_gettime(CLOCK_REALTIME, &start);
transposeMethod(src, out, TEST_W, TEST_H);
clock_gettime(CLOCK_REALTIME, &end);
printf("%s \t %ld us\n", name, diff_in_us(start, end));
free(src);
free(out);
```
