Allen Tzeng
Support Vector Machine (SVM)
===
###### tags: `李宏毅` `Machine Learning`
```
This article is based mainly on Hung-yi Lee's SVM lecture and explains SVM in a relatively simple way.
I consider this a condensed version; for a complete and formal treatment of SVM,
see the SVM lectures in Hsuan-Tien Lin's "Machine Learning Techniques" course.
```

## Support Vector Machine (SVM)

When we classify data with PLA, each run is very likely to return a different solution, and none of them is necessarily optimal. This naturally raises the question: which solution (hyperplane) is the best?

If we expect future data to contain some randomly distributed noise, and such noise should not change our classification, then the best hyperplane should lie as far as possible from the data of every class. In other words, it should have the greatest tolerance for error (left figure below).

![](https://i.imgur.com/dPQQBep.png)

The same idea from another angle: **we want the classifier that maximizes the margin between the classes.** SVM achieves this with two concepts:

1. Hinge Loss
2. Kernel Method

## Hinge Loss

Consider a binary classification problem: $\left\{(x^n,\hat{y}^n)\right\}_{n=1}^{N}$ with $\hat{y}^n\in\left\{+1,-1\right\}$.

### Function Set

The hypothesis set (function set) of a model can be written as:

$$
\forall g\in Hypothesis\ Set\\
g(x)=\begin{cases}+1,& f(x)>0\\ -1,& f(x)<0\end{cases}
$$

$f$ is learned from the data; in the linearly separable case, $f(x)=W^Tx$.

(Lee's notation here is not wrong, but it goes somewhat against common usage. What he calls $f$ is called the "score" in Lin's course, which I find more intuitive and easier to understand; to stay consistent with Lee's course slides, I keep his symbols.)

### Loss Function

Since $f$ is learned, we must define a loss function $L(f)$ before we can optimize $f$. The simplest choice is to count the number of misclassifications:

$$
L(f)=\sum_{n}\delta\big(g(x^n)\neq\hat{y}^n\big)
$$

This loss is not differentiable, so it cannot be optimized with gradient descent; we need a surrogate loss instead:

$$
L(f)=\sum_{n}l\big(f(x^n),\hat{y}^n\big)
$$

![](https://i.imgur.com/o07uydy.jpg)

The figure above shows several candidate functions $l$:

* Ideal Loss : $l\big(f(x^n),\hat{y}^n\big)=\delta\big(g(x^n)\neq\hat{y}^n\big)=\delta\big(\hat{y}^nf(x^n)<0\big)$
* Square Loss : $l\big(f(x^n),\hat{y}^n\big)=\big(\hat{y}^nf(x^n)-1\big)^2$
* Sigmoid + Square Loss : $l\big(f(x^n),\hat{y}^n\big)=\big(\sigma(\hat{y}^nf(x^n))-1\big)^2$
* Sigmoid + Cross Entropy Loss : $l\big(f(x^n),\hat{y}^n\big)=\ln\big(1+e^{-\hat{y}^nf(x^n)}\big)$
* Hinge Loss : $l\big(f(x^n),\hat{y}^n\big)=\max\big(0,1-\hat{y}^nf(x^n)\big)$

Sigmoid + Square Loss can never beat Sigmoid + Cross Entropy Loss: where the prediction is badly wrong its gradient is nearly flat, so optimization stalls, as explained in "[Why can we use Square Error in Logistic Regression?](/q4S-cuYATemqKlV3UtCIEw)". Square Loss, Sigmoid + Cross Entropy Loss (once divided by $\ln 2$), and Hinge Loss are all upper bounds of the Ideal Loss, so in principle optimizing any of them indirectly optimizes the Ideal Loss. The hinge loss turns out to be the right choice for this problem, for two reasons:

1. Square Loss assigns a large loss even to confidently correct classifications, which is clearly unreasonable.
2. Conceptually, SVM does not need an assumption as strong as the maximum-likelihood view behind Sigmoid + Cross Entropy. Hinge Loss not only pushes every example to be classified correctly, it also guarantees a margin between the decision boundary and the data (the "penalty" region in the figure: points there have $\hat{y}^nf(x^n)>0$, i.e. the correct sign, but are not guaranteed to lie outside the margin, so they still contribute to the loss).[^1]

[^1]: The hinge loss is in fact a by-product of the SVM optimization itself: the primal SVM problem is converted into its dual and solved with Quadratic Programming (QP), and the hinge loss emerges from that derivation.

## Linear SVM Optimization

The standard SVM derivation uses a lot of mathematical machinery, such as Lagrange multipliers, dual spaces, and quadratic programming, to find the optimum. In fact, SVM can just as well be optimized by gradient descent:

$$
L(f)=\sum_{n}\max\big(0,1-\hat{y}^nf(x^n)\big)\\
\because \displaystyle{\frac{\partial l\big(f(x^n),\hat{y}^n\big)}{\partial w_i}}=\frac{\partial l\big(f(x^n),\hat{y}^n\big)}{\partial f(x^n)}\cdot\frac{\partial f(x^n)}{\partial w_i}\\
\Big(\because f(x^n)=w^Tx^n\Longrightarrow \frac{\partial f(x^n)}{\partial w_i}=x_i^n\Big)\\
\Longrightarrow \displaystyle{\frac{\partial l\big(f(x^n),\hat{y}^n\big)}{\partial w_i}}=\frac{\partial l\big(f(x^n),\hat{y}^n\big)}{\partial f(x^n)}\cdot x_i^n\\
\Big(\because \frac{\partial\max\big(0,1-\hat{y}^nf(x^n)\big)}{\partial f(x^n)}=\begin{cases}-\hat{y}^n&\mbox{if }\ \hat{y}^nf(x^n)<1\\0&\mbox{Otherwise}\end{cases}\Big)\\
\Longrightarrow \displaystyle{\frac{\partial l\big(f(x^n),\hat{y}^n\big)}{\partial w_i}}=\begin{cases}-\hat{y}^nx_i^n&\mbox{if }\ \hat{y}^nf(x^n)<1\\0&\mbox{Otherwise}\end{cases}\\
\therefore\frac{\partial L(f)}{\partial w_i}=\sum_{n}-\delta\big(\hat{y}^nf(x^n)<1\big)\cdot\hat{y}^nx_i^n\overset{Let}{=}\sum_{n}-c^n(w)\cdot x_i^n
$$

As usual, we update the weights along the negative gradient:

$$
w_i\longleftarrow w_i-\eta\cdot\frac{\partial L(f)}{\partial w_i}=w_i+\eta\cdot\sum_{n}c^n(w)\,x_i^n
$$

## Typical SVM

Lee's reverse-engineered explanation does make SVM look much simpler, and it is indeed equivalent to the traditional, classical SVM, just expressed differently. Rearrange the hinge loss:

$$
L(f)=\sum_{n}\max\big(0,1-\hat{y}^nf(x^n)\big)\overset{Let}{=}\sum_{n}\epsilon^n\cdots(1)\\
\epsilon^n\geq 0\ and\ \epsilon^n\geq 1-\hat{y}^nf(x^n)\cdots(2)
$$

The sets of $\epsilon^n$ satisfying (1) and (2) are in general different, but once we minimize $L(f)$, the two formulations have the same solution. This resembles the idea of "extension" in mathematics: when solving a problem in some space is hard, we enlarge the space (within the same dimension or into a higher one), solve there, and then map the solution back to the original space. Equation (2) can be rewritten as:

$$
\hat{y}^nf(x^n)\geq 1-\epsilon^n\cdots(3)
$$

Equation (3) is exactly the constraint of the familiar, classical SVM (with the $\epsilon^n$ as slack variables).

## Kernel Trick

Before discussing what a kernel is, note one fact: during SVM optimization, the weights are a linear combination of the data.

$$
w\longleftarrow w+\eta\cdot\sum_{n}c^n(w)x^n\\
w=\sum_{n}\alpha_nx^n=X\boldsymbol{\alpha}
$$

If we initialize the weights to 0, then no matter how $w$ is updated, it remains a linear combination of the $x^n$. Moreover, if the chosen loss function is the hinge loss, the resulting $\alpha_n$ are sparse (mostly 0). Put differently, the weights $w$ can be expressed using only a few data points, those whose $\alpha_n$ is nonzero. These points are called support vectors, which is why this classifier is called a Support Vector Machine.

With this in mind, we can now see what a kernel really is.

### Case 1. Linearly Separable

First consider the linearly separable case:

$$
w=\sum_{n}\alpha_nx^n=X\boldsymbol{\alpha}\\
\therefore f(x)=w^Tx\Longrightarrow f(x)=\boldsymbol{\alpha}^TX^Tx
$$

<center> ![](https://i.imgur.com/yfBiPxK.jpg =400x) </center>

$$
\therefore f(x)=\sum_{n}\alpha_n(x^n\cdot x)\overset{Let}{=}\sum_{n}\alpha_nK(x^n,x)\\
where\ K(x^n,x)=x^n\cdot x
$$

### Case 2. Non-Linearly Separable

In real life, however, most data are not linearly separable. For such data we usually apply a feature transform that projects the data into a higher-dimensional space (Z-space) in which they become linearly separable, so that a linear model can be applied.

{%youtube 3liCbRZPrZA %}

Suppose $\phi$ is a feature transform and $f$ is a classifier on Z-space:

$$
\phi:X\longrightarrow Z\ ,\ f:Z\longrightarrow \mathbb{R}\\
\therefore f(\phi(x))=\sum_{n}\alpha_n(\phi(x^n)\cdot\phi(x))\\
F\overset{Let}{=}f\circ\phi\Longrightarrow F(x)=\sum_{n}\alpha_nK(x^n,x)\ where\ F:X\longrightarrow \mathbb{R}
$$

(Here I deliberately deviate from Lee's slide notation to make the domain and codomain of the feature transform explicit.)

Originally we would have to map from X-space to Z-space via the feature transform and then take inner products there; now we can work directly in X-space, without ever knowing what $\phi$ looks like, as long as we can handle the kernel function $K$.

To summarize, the kernel trick is a technique that raises the dimensionality while lowering the computational complexity.

### Mercer's Theorem

Looking at the definition of a kernel function,

$$
K(x^n,x^m)=\phi(x^n)\cdot\phi(x^m)
$$

if we neither know nor want to know $\phi$, how can we be sure that a given $K$ corresponds to the inner product of some $\phi$? Isn't this just trading one unknown problem for another? Not quite: Mercer's theorem gives the condition a kernel function must satisfy.

**Mercer's Theorem.** If $K$ is symmetric and continuous, and the associated operator $\mathbf{K}$ is positive, i.e. $\langle\mathbf{K}u,u\rangle\geq 0$ for all $u$, then we can write

$$
K(x,t)=\sum\limits_{k=1}^{\infty}\lambda_k\phi_k(x)\phi_k(t)
$$

where the $\lambda_k$ and $\phi_k$ are the eigenvalues and eigenfunctions of $\mathbf{K}$.

So in practice, if the $n\times n$ kernel matrix $\boldsymbol{K}$ with entries $\boldsymbol{K}_{ij}=K(x_i,x_j)$ is symmetric and positive semi-definite for every dataset, then $K$ is a valid kernel function.

### Common Kernel Functions

### 1. Linear Kernel : $K(x,x')=x^Tx'$

This simply operates in the original space, which we derived above, so no further explanation is needed.

#### Pros
* Simple and fast
* Interpretable

#### Cons
* The data must be linearly separable

### 2. Polynomial Kernel : $K(x,x')=(\zeta+\gamma(x^Tx'))^Q$

First consider a simple quadratic polynomial transformation (the $\sqrt{2}$ on the cross terms makes the pair sum equal the full sum over ordered pairs $i,j$):

$$
\phi_2(x)=(1,x_1,x_2,\cdots,x_d,x_1^2,x_2^2,\cdots,x_d^2,\sqrt{2}x_1x_2,\sqrt{2}x_1x_3,\cdots,\sqrt{2}x_{d-1}x_d)\\
\Longrightarrow K(x,x')=\phi_2(x)\cdot\phi_2(x')=1+\sum_{i}x_ix_i'+\sum_{i,j}x_ix_jx_i'x_j'\\
\Longrightarrow K(x,x')=1+x^Tx'+(x^Tx')^2
$$

We can adjust the form of $\phi$ slightly:

$$
\phi_2(x)=(1,\sqrt{2}x_1,\sqrt{2}x_2,\cdots,\sqrt{2}x_d,x_1^2,x_2^2,\cdots,x_d^2,\sqrt{2}x_1x_2,\sqrt{2}x_1x_3,\cdots,\sqrt{2}x_{d-1}x_d)\\
\Longrightarrow K(x,x')=1+2x^Tx'+(x^Tx')^2=(1+x^Tx')^2
$$

And there is a more general form:

$$
\phi_Q(x)=(1,\sqrt{2\gamma}x_1,\sqrt{2\gamma}x_2,\cdots,\sqrt{2\gamma}x_d,\gamma x_1^2,\gamma x_2^2,\cdots,\gamma x_d^2,\sqrt{2}\gamma x_1x_2,\sqrt{2}\gamma x_1x_3,\cdots,\sqrt{2}\gamma x_{d-1}x_d)\\
\Longrightarrow K(x,x')=1+2\gamma x^Tx'+\gamma^2(x^Tx')^2=(1+\gamma x^Tx')^2
$$

The above is the commonly used quadratic kernel, and it generalizes to higher-order polynomial transforms:

$$
K(x,x')=(\zeta+\gamma x^Tx')^Q
$$

#### Pros
* The data need not be linearly separable
* $Q$ can be chosen from the characteristics of the data

#### Cons
* As $Q$ grows, $\begin{cases}K\rightarrow0&\mbox{if }\mid \zeta+\gamma x^Tx'\mid<1\\ K\rightarrow\infty&\mbox{if }\mid \zeta+\gamma x^Tx'\mid>1\end{cases}$
* Several hyperparameters must be chosen

### 3. Gaussian Kernel ( RBF Kernel ) : $K(x,x')=e^{-\|x-x'\|^2}$

Without loss of generality, assume the data are one-dimensional:

$$
K(x,x')=e^{-(x-x')^2}=e^{-x^2}\cdot e^{-x'^2}\cdot e^{2xx'}\\
=e^{-x^2}\cdot e^{-x'^2}\cdot \sum\limits_{i=0}^{\infty}\displaystyle{\frac{(2xx')^i}{i!}}\\
=\sum\limits_{i=0}^{\infty}e^{-x^2}\cdot e^{-x'^2}\cdot\sqrt{\frac{2^i}{i!}}\cdot\sqrt{\frac{2^i}{i!}}\cdot x^i\cdot x'^i=\phi(x)\cdot\phi(x')\\
where\ \phi(x)=e^{-x^2}\cdot \Big(1,\sqrt{\frac{2^1}{1!}}x,\sqrt{\frac{2^2}{2!}}x^2,\cdots\Big)
$$

Using the RBF kernel is thus equivalent to computing an inner product after a transform $\phi_{\infty}$ into infinitely many dimensions.

#### Pros
* A powerful kernel
* $K$ values stay bounded
* Only one parameter $\gamma$ (in the general form $K(x,x')=e^{-\gamma\|x-x'\|^2}$)

#### Cons
* Low interpretability
* Slow to compute
* High risk of overfitting

### 4. Sigmoid Kernel : $K(x,x')=\tanh(x^Tx')$

The first three kernels combine the key points of both Lee's and Lin's courses. Lee's course additionally covers the sigmoid kernel, mainly, I think, to draw a conceptual connection between SVM and neural networks.

$$
K(x,x')=\tanh(x^Tx')\\
f(x)=\sum_{n}\alpha_n\tanh(x^n\cdot x)
$$

<center> ![](https://i.imgur.com/EJmYwY6.png =400x) </center>

If we substitute the data points ($x^n$) for the weights of a neural network, the expression above can be viewed as a single-layer network with $n$ neurons. In many respects, SVM and NN operate quite similarly: an NN's hidden layers perform feature transformation, which is the job of $\phi$ in SVM, and its final output layer is a classifier, which in SVM is the hyperplane. Whether NN or SVM, the job is to transform features and then find a classifier; conceptually there is not much difference.

![](https://i.imgur.com/e5pIFhC.png)

## Summary

Although SVM is, simply put, the combination of hinge loss and the kernel trick, this combination gives rise to many interesting properties.

Because of the hinge loss, all the weights can be expressed by only a few data points, which gives SVM good robustness to outliers: apart from the support vectors, the remaining data points essentially do not affect the model.

The kernel trick successfully sidesteps the dilemma of "which feature transform to choose" and also greatly reduces the computational complexity.

All in all, SVM can build a stable, low-complexity model from a small number of data points; before the rise of neural networks it was indeed an extremely powerful algorithm.
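The surrogate losses compared in the Hinge Loss section are easy to examine numerically. A minimal NumPy sketch (function names are my own, not from the course) checking the upper-bound relationship between the hinge loss and the ideal 0/1 loss:

```python
import numpy as np

# Surrogate losses from the list above, as functions of the margin
# s = y_hat * f(x). Function names here are illustrative.
def ideal_loss(s):
    return (s < 0).astype(float)        # 0/1 misclassification loss

def square_loss(s):
    return (s - 1.0) ** 2

def cross_entropy_loss(s):
    # sigmoid + cross entropy; needs a 1/ln(2) factor to upper-bound
    # the ideal loss, as noted in the text
    return np.log(1.0 + np.exp(-s))

def hinge_loss(s):
    return np.maximum(0.0, 1.0 - s)

s = np.linspace(-2, 3, 501)
# Hinge loss upper-bounds the ideal loss everywhere...
assert np.all(hinge_loss(s) >= ideal_loss(s))
# ...and is exactly zero once the margin reaches 1 (past the penalty region).
assert np.all(hinge_loss(s[s >= 1]) == 0)
# Square loss keeps growing even for confidently correct margins (s >> 1).
assert square_loss(np.array([3.0]))[0] > square_loss(np.array([1.0]))[0]
```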
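The gradient-descent training procedure from the Linear SVM Optimization section can be sketched directly in NumPy. The two-blob dataset, learning rate, and iteration count below are illustrative assumptions, and the bias is absorbed as a constant-1 feature:

```python
import numpy as np

# Sketch of (sub)gradient descent on the hinge loss: w is updated only by
# examples violating the margin y^n * f(x^n) >= 1, matching the derivation above.
rng = np.random.default_rng(0)

# Two linearly separable blobs; append a constant 1 feature for the bias.
X_pos = rng.normal(loc=[+2.0, +2.0], scale=0.5, size=(50, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
X = np.hstack([np.vstack([X_pos, X_neg]), np.ones((100, 1))])
y = np.concatenate([np.ones(50), -np.ones(50)])

w = np.zeros(3)                        # zero init, so w stays a combination of the x^n
eta = 0.1
for _ in range(200):
    margins = y * (X @ w)              # y^n * f(x^n)
    violating = margins < 1            # the delta(...) indicator
    # Subgradient of the total hinge loss: minus the sum of y^n x^n over violators
    grad = -(y[violating, None] * X[violating]).sum(axis=0)
    w -= eta * grad

print("training accuracy:", (np.sign(X @ w) == y).mean())
```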
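Mercer's condition translates into the practical numerical check described above: every kernel (Gram) matrix built from a valid kernel must be symmetric and positive semi-definite. A sketch for the Gaussian kernel, with illustrative data and $\gamma$:

```python
import numpy as np

# Build the n x n Gram matrix K_ij = K(x_i, x_j) for the Gaussian (RBF) kernel
# and check the Mercer conditions numerically. Data and gamma are illustrative.
def rbf_kernel_matrix(X, gamma=1.0):
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))
K = rbf_kernel_matrix(X)

assert np.allclose(K, K.T)                # symmetric
eigvals = np.linalg.eigvalsh(K)
assert eigvals.min() > -1e-10             # positive semi-definite (up to round-off)
```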
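The quadratic-kernel identity derived above, $K(x,x')=(1+x^Tx')^2=\phi_2(x)\cdot\phi_2(x')$, can be verified numerically for $d=2$ (a sketch with my own helper names):

```python
import numpy as np

# Explicit quadratic feature map for d = 2, with the sqrt(2) scaling on the
# degree-1 and cross terms so that phi_2(x) . phi_2(x') = (1 + x^T x')^2.
def phi2(x):
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def poly_kernel(x, xp):
    return (1.0 + x @ xp) ** 2

rng = np.random.default_rng(1)
x, xp = rng.normal(size=2), rng.normal(size=2)
# Kernel evaluation in X-space matches the inner product in Z-space.
assert np.isclose(phi2(x) @ phi2(xp), poly_kernel(x, xp))
```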
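The infinite-series factorization of the 1-D Gaussian kernel derived above can also be checked numerically by truncating the feature map $\phi$ at finitely many terms (the truncation length and test points are arbitrary choices):

```python
import math
import numpy as np

# Truncated version of the infinite 1-D feature map
# phi(x) = exp(-x^2) * (1, sqrt(2^1/1!) x, sqrt(2^2/2!) x^2, ...)
def phi_truncated(x, n_terms=30):
    ks = np.arange(n_terms)
    coeffs = np.array([math.sqrt(2.0 ** k / math.factorial(k))
                       for k in range(n_terms)])
    return math.exp(-x * x) * coeffs * x ** ks

def gaussian_kernel(x, xp):
    return math.exp(-(x - xp) ** 2)

x, xp = 0.7, -0.3
approx = float(phi_truncated(x) @ phi_truncated(xp))
# The truncated inner product converges to the kernel value e^{-(x-x')^2}.
assert math.isclose(approx, gaussian_kernel(x, xp), rel_tol=1e-9)
```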
