# 2017q1 Homework3 (software-pipelining)

contributed by < `etc276` >

###### tags: `嵌入式`

**[code here](https://github.com/etc276/prefetcher)**

## Development environment

- Ubuntu 16.10 (64-bit)

```
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 61
Model name:            Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz
Stepping:              4
CPU MHz:               2400.878
CPU max MHz:           3000.0000
CPU min MHz:           500.0000
BogoMIPS:              4788.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              4096K
NUMA node0 CPU(s):     0-3
```

## Goals

* Learn computer architecture and verify what I learn through experiments
* Understand how prefetching affects the cache, and design experiments to clarify the related issues
* Read the paper and think it through

## Paper reading

### Research goal
---
Investigate the mechanisms by which software (SW) and hardware (HW) prefetching affect performance, and how the two interact.

### Terminology
---
- cache (a kind of memory)
![](https://i.imgur.com/REMWcJM.png)
The CPU is much faster than RAM, so data that will be needed soon is placed in the cache first to bridge the speed gap. Because the cache is small, prefetching becomes important: it is what keeps cache misses low enough to actually improve performance.
- latency
> How does the cache affect speed? This is expressed by latency. When the CPU reads data (or instructions) from memory, it has to wait for a period of time; that waiting time is the latency, usually measured in cycles.

Source: [CPU 的 cache 和 latency [Part 1]](https://www.csie.ntu.edu.tw/~r89004/hive/cache/page_1.html)
- overhead
Overhead can be understood as an extra penalty: the additional time that appears only when a cache miss occurs, whereas latency is the total access time (it exists even on a cache hit).
Source: [記憶體系統-快取記憶體](http://systw.net/note/af/sblog/more.php?id=252)
- cache miss vs. cache pollution
A cache miss is the result: the access the CPU actually issues fails to hit the cache. Cache pollution is a cause, e.g. prefetching too early so the data is replaced before it is used (early prefetch), or prefetching data that is never used (incorrect prefetch).

### 1. Introduction
---
Performance impact of the various SW and HW combinations (the x-axis lists the benchmarks)
![](https://i.imgur.com/Clq8W42.png)

### 2. BACKGROUND ON SOFTWARE AND HARDWARE PREFETCHING
---
Data structures and the prefetching methods that fit them
![](https://i.imgur.com/2MrnrdY.png)
- stream: literally a "flow"; think of it as walking along a row in row-major order
- stride: literally a long step; think of it as walking down a column
- RDS (recursive data structures) need help from pointers on the HW side, so this is no longer purely a hardware matter
- Hash: HW can only rely on parallelization or pre-computation to prefetch; I read this as meaning it cannot be done in pure hardware

#### 2.1 Software Prefetch Intrinsics
- intrinsic (built-in function): a call that the compiler lowers into built-in code according to the standard library (a bit like asking what assembly a `memset` call gets compiled into, or as below)
![](https://i.imgur.com/cpQx4tB.png)
Reference: [What are intrinsics?](http://stackoverflow.com/questions/2268562/what-are-intrinsics)
- inline function: by hand or via a hint to the compiler, a function can be expanded at the call site to reduce call overhead (the function body is emitted inline at compile time); this helps for small functions, but inlining too much code can cause instruction-cache misses.

:::danger
intrinsics are not inline functions; don't confuse the two! --jserv
:::

#### 2.2 Prefetch Classification
Classification of the possible prefetch outcomes; *timely* is the best case
![](https://i.imgur.com/fA8Iowk.png)
![](https://i.imgur.com/ETESIdb.png)

#### 2.3 Software Prefetch Classification
![](https://i.imgur.com/7csS1yL.png)
> This value D is called the prefetch distance. We define prefetch distance as the distance ahead of which a prefetch should be requested on a memory address

A prefetch writes memory data into a cache line, so one prefetch may bring in more than one element. Choosing the prefetch distance therefore matters: if the distance is too small, prefetches are wasted; if it is too large, cache misses occur (data that should have been brought in gets skipped).
Prefetching exists to "hide" latency (fetching ahead of time creates the illusion that the next read costs nothing), so D has to be larger than l, otherwise nothing is hidden.
> I do not yet understand the purpose of dividing by s; I only know it is related to the loop's path. I will revisit this after analyzing the experiments.

#### 2.4 Direct and Indirect Memory Indexing
- SW prefetching handles indirect accesses better than HW
- HW usually handles predictable stream and stride patterns
- indirect accesses can add SW prefetch overhead (e.g. two prefetches for `a[b[i]]`; see the sketch below)
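To make the last point concrete, here is a minimal sketch of software-prefetching the indirect access `a[b[i]]` (my own illustration, not code from the paper or the repo; `sum_indirect`, `DIST`, and the padding assumption are all hypothetical). The index array has to be prefetched one distance further ahead than the data array, so every iteration issues two extra prefetch instructions, which is exactly the overhead that §2.4 refers to.

```c
#include <xmmintrin.h>

#define DIST 16   /* illustrative prefetch distance, not a tuned value */

/* Sum a[b[i]] for i in [0, n).  Assumes b[] has at least n + 2 * DIST
 * valid (padded) entries so the look-ahead index loads stay in bounds. */
long sum_indirect(const int *a, const int *b, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++) {
        /* prefetch the index that will be needed DIST iterations from now ... */
        _mm_prefetch((const char *) &b[i + 2 * DIST], _MM_HINT_T0);
        /* ... and the data element addressed by an index fetched earlier */
        _mm_prefetch((const char *) &a[b[i + DIST]], _MM_HINT_T0);
        sum += a[b[i]];
    }
    return sum;
}
```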
### 3. POSITIVE AND NEGATIVE IMPACTS OF SOFTWARE PREFETCHING
---
#### 3.1 Benefits of Software Prefetching over Hardware Prefetching
SW's advantage over HW is mainly flexibility: it can be tuned to each situation.
- Large Number of Streams (Limited Hardware Resources)
    - the number of streams HW can prefetch is limited by hardware resources
    - such as stream detectors and book-keeping mechanisms
    - SW streams, by contrast, can be adjusted to whatever is required
- Short Streams
    - HW prefetching needs training
    - a stream that is too short cannot train it
    - and HW needs 2 cache misses before it can determine the direction
- Irregular Memory Access
- Cache Locality Hint
    - with a SW prefetcher the programmer can choose the locality hint
    - most HW prefetchers instead move prefetched data into a lower-level cache (L2 or L3)
    - advantage: less cache pollution in L1
    - disadvantage: the latency of moving data from the lower-level cache up to L1 costs performance
- Loop Bounds
    - SW can bound the number of prefetches exactly
    - HW cannot control this effectively, and with a large distance it consumes extra bandwidth

#### 3.2 Negative Impacts of Software Prefetching
- Increased Instruction Count
![](https://i.imgur.com/mvkAh6j.png)
- Static Insertion: at runtime there is no way to adapt dynamically to
    - varying memory latency
    - effective cache size
    - bandwidth
    - especially in heterogeneous architectures
- Code Structure Change: when the original loop has few instructions, inserting many prefetch instructions changes its structure (the distance becomes too short to hide the latency)

#### 3.3 Synergistic Effects when Using Software and Hardware Prefetching Together
- Handling Multiple Streams
    - SW and HW can split the work
    - SW handles irregular streams
    - HW handles regular streams
- Positive Training
    - SW can train HW
    - when SW is too slow, HW can take over

#### 3.4 Antagonistic Effects when Using Software and Hardware Prefetching Together
- Negative Training
    - SW prefetch requests can slow the HW prefetcher down
    - SW instructions can trigger HW prefetches that end up too early
- Harmful Software Prefetching
    - SW-only prefetching is more precise
    - since HW is trained by SW, a bad SW prefetch drags HW down with it

## Development log

### Original
- First, understand the code. The `Makefile` only contains a plain `$(CC) $(CFLAGS) -o main main.c`, so `make` produces a single `main` executable. Running it gives:

```
sse prefetch: 59432 us
sse: 116758 us
naive: 242177 us
```

- Going back to `main.c`, there is an `#include "impl.c"`, which holds the implementations of the three versions above
- The assignment asks us to study the encapsulation techniques described in [你所不知道的 C 語言:物件導向程式設計篇](https://hackmd.io/s/HJLyQaQMl)
- To apply that object-oriented style and unify the interface, I first changed the `Makefile` so that each version gets its own executable

```makefile=
# before
all: $(GIT_HOOKS) main.c
	$(CC) $(CFLAGS) -o main main.c

# after
EXEC = naive_transpose sse_transpose sse_prefetch_transpose

naive_transpose: main.c
	$(CC) $(CFLAGS) -DNAIVE_TRANSPOSE -o $@ main.c

sse_transpose: main.c
	$(CC) $(CFLAGS) -DSSE_TRANSPOSE -o $@ main.c

sse_prefetch_transpose: main.c
	$(CC) $(CFLAGS) -DSSE_PREFETCH_TRANSPOSE -o $@ main.c

all: $(GIT_HOOKS) $(EXEC)
```

> It might be worth following [this commit by 勃興學長](https://github.com/0140454/compute-pi/commit/89ab12a144d0ce6d2d3c498e50277fe81c42b8d4) and replacing NAIVE_TRANSPOSE with a `%` pattern rule (not sure about the usage, it just came to mind); I will come back to this after finishing the later parts.

- Below is part of the modified `main.c`

```c=
typedef struct object Object;
typedef void (*func_t)(int *src, int *dst, int w, int h);

int init_sse_transpose(Object **self)
{
    if ((*self = malloc(sizeof(Object))) == NULL)
        return -1;
    (*self)->transpose = sse_transpose;
    return 0;
}

Object *o = NULL;
#ifdef SSE_TRANSPOSE
if (init_sse_transpose(&o) == -1)
    printf("error.");
#endif
...
o->transpose(src, out0, TEST_W, TEST_H);
// changed from
// sse_prefetch_transpose(src, out0, TEST_W, TEST_H);
...
#ifdef SSE_TRANSPOSE
printf("SSE_TRANSPOSE: \t %ld us\n", diff_in_us(start, end));
#endif
```

- Next, use perf to analyze performance

```
perf stat --repeat (repeat count) -e cache-misses,cache-references,instructions,cycles ./(executable)
```

| | cache-misses (rate) | cache-references | instructions | cycles |
|:---:|:---:|:---:|:---:|:---:|
| naive | 90.836 % | 26,325,696 | 1,465,339,765 | 1,130,960,634 |
| sse | 85.321 % | 12,703,570 | 1,253,388,735 | 771,443,403 |
| sse_prefetch | 80.887 % | 9,710,287 | 1,299,456,469 | 625,649,736 |

> Here the cache-miss rate does not drop by much, yet the runtime drops sharply. This is because even a cache hit costs a different amount of time depending on whether it comes from L1, L2, or L3, e.g. L1 -> reg. = 1 cycle but L3 -> reg. = 10 cycles (the paper also notes that SW prefetching is more likely to place data in L1).

:::danger
Please explain the numbers in terms of the latency gap. --jserv
:::
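A rough way to connect the miss rates to the runtimes above (my own back-of-the-envelope reasoning; the latencies are the illustrative values from the note above, not measurements) is the average memory access time model:

$$
\text{AMAT} = t_{\text{hit}} + r_{\text{miss}} \times t_{\text{penalty}}
$$

The miss rate only falls from about 91 % to 81 %, but prefetching also changes *where* hits land: if accesses that previously came from L2/L3 (on the order of 10 cycles) are now served from L1 (about 1 cycle), $t_{\text{hit}}$ itself shrinks several-fold, which is enough to explain a large drop in total time even though the miss rate barely moves.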
### Experiment 1: adjusting the prefetch distance
- Before experimenting, first read through `impl.c`

```c=
void naive_transpose(int *src, int *dst, int w, int h)
{
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            *(dst + x * h + y) = *(src + y * w + x);
}
```

- The above is the plain transpose; next, the SSE version

```c=
void sse_transpose(int *src, int *dst, int w, int h)
{
    for (int x = 0; x < w; x += 4) {
        for (int y = 0; y < h; y += 4) {
            __m128i I0 = _mm_loadu_si128((__m128i *)(src + (y + 0) * w + x));
            __m128i I1 = _mm_loadu_si128((__m128i *)(src + (y + 1) * w + x));
            __m128i I2 = _mm_loadu_si128((__m128i *)(src + (y + 2) * w + x));
            __m128i I3 = _mm_loadu_si128((__m128i *)(src + (y + 3) * w + x));
            __m128i T0 = _mm_unpacklo_epi32(I0, I1);
            __m128i T1 = _mm_unpacklo_epi32(I2, I3);
            __m128i T2 = _mm_unpackhi_epi32(I0, I1);
            __m128i T3 = _mm_unpackhi_epi32(I2, I3);
            I0 = _mm_unpacklo_epi64(T0, T1);
            I1 = _mm_unpackhi_epi64(T0, T1);
            I2 = _mm_unpacklo_epi64(T2, T3);
            I3 = _mm_unpackhi_epi64(T2, T3);
            _mm_storeu_si128((__m128i *)(dst + ((x + 0) * h) + y), I0);
            _mm_storeu_si128((__m128i *)(dst + ((x + 1) * h) + y), I1);
            _mm_storeu_si128((__m128i *)(dst + ((x + 2) * h) + y), I2);
            _mm_storeu_si128((__m128i *)(dst + ((x + 3) * h) + y), I3);
        }
    }
}
```

- Two things to understand here: how the SSE version works, and why it is faster
- How it works:
    - `_mm_loadu_si128` loads 128 bits of data into a register in one go
    - `__m128i T0 = _mm_unpacklo_epi32(I0, I1)` interleaves the low halves (64 bits) of I0 and I1 into T0 in 32-bit units
    - `I0 = _mm_unpacklo_epi64(T0, T1)` then does the same in 64-bit units (the `unpackhi` variants handle the high halves)
    - Reference notes: [ierosodin](https://hackmd.io/s/rkX95E-il), [carolc0708](https://hackmd.io/s/B1lDZtO0#使用avx來提升效能)
    ![](https://i.imgur.com/0DJa6eV.png)
- Why it is faster: first consider the following simple piece of code

```c=
int p[100][100] = {0};

for (int i = 0; i < 100; ++i)
    for (int j = 0; j < 100; ++j)
        printf("%d ", p[i][j]);   // stream

for (int i = 0; i < 100; ++i)
    for (int j = 0; j < 100; ++j)
        printf("%d ", p[j][i]);   // stride
```

- Profiling this shows the `p[j][i]` version is slower: C arrays are row-major, so jumping across rows reads slowly (it defeats the prefetch mechanism)
- The SSE version is faster because the naive code reads one int at a time, while SSE reads 128 bits (four ints) at once, and it also avoids the row-jumping problem
- Next, the sse_prefetch version

```c=
#define PFDIST 8
_mm_prefetch(src + (y + PFDIST + 0) * w + x, _MM_HINT_T1);
_mm_prefetch(src + (y + PFDIST + 1) * w + x, _MM_HINT_T1);
_mm_prefetch(src + (y + PFDIST + 2) * w + x, _MM_HINT_T1);
_mm_prefetch(src + (y + PFDIST + 3) * w + x, _MM_HINT_T1);
```

- Two knobs to experiment with here: PFDIST and the locality hint
- First check how long one loop iteration takes (s), then divide the latency (l) by it
- Assuming D = 8 is the timely distance, I expected D = 8 to be best, D = 4 and D = 12 next, and D = 0 and D = 16 worst
- The results:

```
D=0  -> 102078 us
D=4  ->  68133 us
D=8  ->  69905 us
D=12 ->  82102 us
D=16 ->  81566 us
D=20 -> 111937 us
```

- The timings group roughly in bands of 8, so my original expectation was wrong
- Notably, even D = 0 is still faster than the plain SSE version without prefetching

> I do not quite understand why; other tools are probably needed to analyze the latency before this can be explained

### Experiment 2: changing the locality hint
- Change `_MM_HINT_T1` to `T2`, `T3`, and `NTA`; the expectation was that the time would grow progressively longer (a summary of the hints follows the results)

```
T1 = 81408 us
T2 = 107164 us
T3 = 71822 us
NTA= 86694 us
```

> Why...?
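For reference while interpreting these numbers, below is my summary of what each locality hint requests, paraphrased from the Intel intrinsics documentation (the exact cache levels are microarchitecture-dependent, and `<xmmintrin.h>` defines `_MM_HINT_T0`, `T1`, `T2`, and `NTA` as the standard hints); `prefetch_examples` is only an illustrative wrapper.

```c
#include <xmmintrin.h>

/* What each locality hint asks the hardware to do
 * (actual placement may vary by microarchitecture). */
static inline void prefetch_examples(const char *p)
{
    _mm_prefetch(p, _MM_HINT_T0);  /* into every cache level (L1, L2, L3)         */
    _mm_prefetch(p, _MM_HINT_T1);  /* into L2 and higher, bypassing L1            */
    _mm_prefetch(p, _MM_HINT_T2);  /* into L3 and higher (implementation-defined) */
    _mm_prefetch(p, _MM_HINT_NTA); /* non-temporal: minimize cache pollution      */
}
```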
### Experiment 3: adding AVX and AVX-prefetch versions

```c=
void avx_prefetch_transpose(int *src, int *dst, int w, int h)
{
    for (int x = 0; x < w; x += 8) {
        for (int y = 0; y < h; y += 8) {
#define PFDISTHUGE 16
            _mm_prefetch(src + (y + PFDISTHUGE + 0) * w + x, _MM_HINT_T0);
            _mm_prefetch(src + (y + PFDISTHUGE + 1) * w + x, _MM_HINT_T0);
            _mm_prefetch(src + (y + PFDISTHUGE + 2) * w + x, _MM_HINT_T0);
            _mm_prefetch(src + (y + PFDISTHUGE + 3) * w + x, _MM_HINT_T0);
            _mm_prefetch(src + (y + PFDISTHUGE + 4) * w + x, _MM_HINT_T0);
            _mm_prefetch(src + (y + PFDISTHUGE + 5) * w + x, _MM_HINT_T0);
            _mm_prefetch(src + (y + PFDISTHUGE + 6) * w + x, _MM_HINT_T0);
            _mm_prefetch(src + (y + PFDISTHUGE + 7) * w + x, _MM_HINT_T0);

            __m256i I0, I1, I2, I3, I4, I5, I6, I7,
                    T0, T1, T2, T3, T4, T5, T6, T7;
            I0 = _mm256_loadu_si256((__m256i *)(src + (y + 0) * w + x));
            I1 = _mm256_loadu_si256((__m256i *)(src + (y + 1) * w + x));
            I2 = _mm256_loadu_si256((__m256i *)(src + (y + 2) * w + x));
            I3 = _mm256_loadu_si256((__m256i *)(src + (y + 3) * w + x));
            I4 = _mm256_loadu_si256((__m256i *)(src + (y + 4) * w + x));
            I5 = _mm256_loadu_si256((__m256i *)(src + (y + 5) * w + x));
            I6 = _mm256_loadu_si256((__m256i *)(src + (y + 6) * w + x));
            I7 = _mm256_loadu_si256((__m256i *)(src + (y + 7) * w + x));
            T0 = _mm256_unpacklo_epi32(I0, I1);
            T1 = _mm256_unpackhi_epi32(I0, I1);
            T2 = _mm256_unpacklo_epi32(I2, I3);
            T3 = _mm256_unpackhi_epi32(I2, I3);
            T4 = _mm256_unpacklo_epi32(I4, I5);
            T5 = _mm256_unpackhi_epi32(I4, I5);
            T6 = _mm256_unpacklo_epi32(I6, I7);
            T7 = _mm256_unpackhi_epi32(I6, I7);
            I0 = _mm256_unpacklo_epi64(T0, T2);
            I1 = _mm256_unpackhi_epi64(T0, T2);
            I2 = _mm256_unpacklo_epi64(T1, T3);
            I3 = _mm256_unpackhi_epi64(T1, T3);
            I4 = _mm256_unpacklo_epi64(T4, T6);
            I5 = _mm256_unpackhi_epi64(T4, T6);
            I6 = _mm256_unpacklo_epi64(T5, T7);
            I7 = _mm256_unpackhi_epi64(T5, T7);
            T0 = _mm256_permute2x128_si256(I0, I4, 0x20);
            T1 = _mm256_permute2x128_si256(I1, I5, 0x20);
            T2 = _mm256_permute2x128_si256(I2, I6, 0x20);
            T3 = _mm256_permute2x128_si256(I3, I7, 0x20);
            T4 = _mm256_permute2x128_si256(I0, I4, 0x31);
            T5 = _mm256_permute2x128_si256(I1, I5, 0x31);
            T6 = _mm256_permute2x128_si256(I2, I6, 0x31);
            T7 = _mm256_permute2x128_si256(I3, I7, 0x31);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 0) * h) + y), T0);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 1) * h) + y), T1);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 2) * h) + y), T2);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 3) * h) + y), T3);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 4) * h) + y), T4);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 5) * h) + y), T5);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 6) * h) + y), T6);
            _mm256_storeu_si256((__m256i *)(dst + ((x + 7) * h) + y), T7);
        }
    }
}
```

- Since AVX is 256-bit (versus 128-bit for SSE), I expected the time to be a bit more than half (Amdahl's Law)
- The time ratio across the five versions is roughly 40 : 14 : 13 : 8 : 11
- On reflection, because the AVX version works on 8×8 blocks while SSE works on 4×4 blocks, it should be somewhat less than twice as fast as SSE
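Since `avx_prefetch_transpose` is newly written, a quick correctness check against the naive version seems worthwhile. The sketch below is my own and is not part of the original `main.c`: it assumes the `impl.c` signatures shown above, uses a small matrix whose sides are multiples of 8, and would be built with something like `gcc -mavx2`.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <immintrin.h>   /* intrinsics, in case impl.c does not include them itself */
#include "impl.c"        /* naive_transpose() and avx_prefetch_transpose() */

int main(void)
{
    const int w = 64, h = 64;               /* multiples of 8 for the AVX path */
    int *src  = malloc(sizeof(int) * w * h);
    int *out0 = malloc(sizeof(int) * w * h);
    int *out1 = malloc(sizeof(int) * w * h);

    for (int i = 0; i < w * h; i++)         /* distinct values expose any swap bug */
        src[i] = i;

    naive_transpose(src, out0, w, h);
    avx_prefetch_transpose(src, out1, w, h);

    puts(memcmp(out0, out1, sizeof(int) * w * h) ? "MISMATCH" : "OK");

    free(src);
    free(out0);
    free(out1);
    return 0;
}
```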
