# Introduction to Artificial Intelligence

![image](https://hackmd.io/_uploads/SygZULSL_6.png)

Q: What is a regression problem / model?
A regression problem predicts a continuous value, e.g. house prices, temperature, or sales figures. A regression model tries to learn the relationship between the input features and the target value so that the error between predicted and actual values is minimized. In deep learning, a regression model may be a neural network whose last layer has no activation function or uses a linear activation.

Q: What is a classification problem / model?
A classification problem predicts a discrete label, e.g. deciding whether an image shows a cat or a dog, or whether an email is spam. A classification model assigns each instance to one or more classes based on its input features. In deep learning, classification models often use a softmax activation in the last layer to obtain a probability for each class.

Q: How is data fed into a neural network?
Q: How do we improve a neural network's accuracy?
=> The network computes its predictions via forward propagation (FP); correcting the weights based on the resulting error is what improves the neural network's (NN's) accuracy.

![](https://hackmd.io/_uploads/rJ9_QqFe6.png)

[Basic concepts]

![](https://hackmd.io/_uploads/BkGkEqKxa.png)

- K-fold Cross-Validation
  In K-fold, the data is split into K equal parts, where K is chosen freely. In the figure below, K = 10: the training set is cut into ten folds, which means the same model is trained ten times. Each run picks nine of the ten folds as training data and holds out the remaining fold, untouched during training, as the validation set. Ten runs therefore yield ten validation errors; this error is usually called the loss, i.e. the model-evaluation metric. There are many evaluation metrics — for regression problems alone there are MSE, MAE, RMSE, and so on. Averaging the ten losses gives the final result. Because results from different splits are averaged, variance is reduced, so the measured performance is less sensitive to how the data happens to be divided.

![](https://hackmd.io/_uploads/ByaiYrFxT.png)

- Training a neural network, step by step
  1. Initialize the NN's weights and biases
  2. Feed data into the model and collect the results (each input x yields one error after the forward pass)
  3. ![](https://hackmd.io/_uploads/SyQIN5Fe6.png)
  4. A large error means the weight/bias configuration needs correcting
     - Training an NN is a process of continually adjusting all the weight matrices and biases
     - The goal is the weight configuration that minimizes the total error (the value of the loss function)
  5. 
Once a model with the best weight configuration has been found, it can be used to predict answers in practice; it is then called a **trained model**.

![](https://hackmd.io/_uploads/HkP7LcFl6.png)

## Using a loss function to measure the error

=> A large loss value means a large gap between the model's predictions and the correct values, so the weights must be corrected until the loss is minimized.

![](https://hackmd.io/_uploads/HJgwD9Fg6.png)

### Loss functions

| problem | Regression | Classification |
| -------- | -------- | -------- |
| loss | MSE (Mean square error) | Cross entropy |
|  | MAE (Mean absolute error) | |

- MSE (Mean square error):
  -> usually used for regression problems

![](https://hackmd.io/_uploads/BJl7u9FlT.png) ![](https://hackmd.io/_uploads/HJjL_9Fxp.png)

```python
import numpy as np

def mean_square_error(y, y_hat):  # y_hat is the ground truth (validation set)
    # Half the sum of squared errors; the 1/2 cancels when differentiating.
    return (1.0 / 2.0) * np.sum(np.square(y - y_hat))

a = np.random.randint(0, 10, 5)
b = np.random.randint(0, 10, 5)
print(mean_square_error(a, b))
```

### Cross entropy
-> usually used for classification problems
=> Entropy originally described how atoms are distributed in space: the more disordered the distribution, the higher the entropy (the more uncertainty).
- Cross entropy measures the total error between the predicted probability distribution and the true distribution
- The closer the predicted distribution is to the true one, the smaller the cross entropy

![](https://hackmd.io/_uploads/Sy0P39Fea.png) ![](https://hackmd.io/_uploads/S1y325txT.png) ![](https://hackmd.io/_uploads/ryC63ctlT.png)

```python
import numpy as np

def cross_entropy(y, y_hat):  # y_hat is the ground truth (validation set)
    # ln(0) is undefined, so add a tiny epsilon (1e-7) inside the log.
    return -1 * np.sum(y_hat * np.log(y + 1e-7))

a = np.random.random(5)
b = np.random.randint(0, 2, 5)
print(cross_entropy(a, b))
```

## Correcting the weights: gradient descent

=> Pick a random set of parameters, check whether any nearby set is better, and move there if so.
GD is a first-order method from optimization theory for finding a local minimum of a function. Since the gradient points in the direction of steepest ascent, **gradient descent moves in the direction opposite to the gradient**.

![](https://hackmd.io/_uploads/SJziejFx6.png)

[Drawback]: the point found may not be the true optimum but a local minimum; one remedy is to try several different starting points (and hope for luck).

![](https://hackmd.io/_uploads/H1xveoYl6.png)

See also: https://chih-sheng-huang821.medium.com/%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E5%9F%BA%E7%A4%8E%E6%95%B8%E5%AD%B8-%E4%BA%8C-%E6%A2%AF%E5%BA%A6%E4%B8%8B%E9%99%8D%E6%B3%95-gradient-descent-406e1fd001f

### Gradient descent, step by step:

1. ![](https://hackmd.io/_uploads/B1DvQitea.png)
2. 
Use the slope as the guide pointing toward the low point => the direction is determined by the sign of the first derivative (the tangent slope).
   ![](https://hackmd.io/_uploads/SkKISjFea.png)
3. ![](https://hackmd.io/_uploads/ryh5rsYxT.png)
4. ![](https://hackmd.io/_uploads/By7AriKg6.png)

[Questions]
1. ![](https://hackmd.io/_uploads/Hy_M8jKlp.png)
2. ![](https://hackmd.io/_uploads/BJsEIjYlp.png)
3. ![](https://hackmd.io/_uploads/HkcI8jYlp.png)
4. ![](https://hackmd.io/_uploads/HJZYLsKla.png)

## Computing the gradient

![](https://hackmd.io/_uploads/SyTK_itga.png) ![](https://hackmd.io/_uploads/ryPodstx6.png) ![](https://hackmd.io/_uploads/BkxJKsFxa.png) ![](https://hackmd.io/_uploads/B1yYFoKlp.png)
![](https://hackmd.io/_uploads/HJ1r9otlp.png) ![](https://hackmd.io/_uploads/rJrUqiKxp.png) ![](https://hackmd.io/_uploads/rJyucjtgT.png) ![](https://hackmd.io/_uploads/S1cy3stg6.png)
![](https://hackmd.io/_uploads/HyaHT09g6.png) ![](https://hackmd.io/_uploads/B14taR5xp.png) ![](https://hackmd.io/_uploads/Bk3FpA9e6.png) ![](https://hackmd.io/_uploads/Sk9cTCcxa.png)
![](https://hackmd.io/_uploads/B1Vhp0qeT.png) ![](https://hackmd.io/_uploads/rJcaaCceT.png) ![](https://hackmd.io/_uploads/BJmg0A9l6.png) ![](https://hackmd.io/_uploads/B1eLb0A5x6.png)
![](https://hackmd.io/_uploads/SJHZA0qx6.png) ![](https://hackmd.io/_uploads/S1B-00qgp.png) ![](https://hackmd.io/_uploads/r1SbAC9ea.png) ![](https://hackmd.io/_uploads/BJS-AA9g6.png)
![](https://hackmd.io/_uploads/B1r-00cxT.png) ![](https://hackmd.io/_uploads/rkSbA0cep.png) ![](https://hackmd.io/_uploads/ByBWCC5e6.png) ![](https://hackmd.io/_uploads/H1LZ0Ccla.png)
![](https://hackmd.io/_uploads/S1IZRR5xp.png) ![](https://hackmd.io/_uploads/HkeSWAA5gT.png) ![](https://hackmd.io/_uploads/BJ8b0Aqxp.png) ![](https://hackmd.io/_uploads/BkerZCR9e6.png)
![](https://hackmd.io/_uploads/SkIWR05gp.png) ![](https://hackmd.io/_uploads/Sk64J1jxT.png)
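The descent loop described above can be sketched numerically. This is a minimal illustration on a one-variable toy loss, using a finite-difference gradient estimate instead of the backpropagated gradients a real network would use:

```python
def loss(w):
    """Toy loss function with its minimum at w = 3."""
    return (w - 3.0) ** 2

def numerical_gradient(f, w, h=1e-5):
    """Central-difference estimate of the slope df/dw."""
    return (f(w + h) - f(w - h)) / (2 * h)

w = 0.0    # starting point (picked arbitrarily)
lr = 0.1   # learning rate: the step size
for _ in range(100):
    w -= lr * numerical_gradient(loss, w)  # step against the gradient

print(round(w, 4))  # converges toward the minimum at w = 3
```

Each update moves opposite the slope's sign, so the iterate slides downhill; with a too-large learning rate the same loop would overshoot and diverge, which is why the step size matters.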
SGD: https://chih-sheng-huang821.medium.com/%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E5%9F%BA%E7%A4%8E%E6%95%B8%E5%AD%B8-%E4%B8%89-%E6%A2%AF%E5%BA%A6%E6%9C%80%E4%BD%B3%E8%A7%A3%E7%9B%B8%E9%97%9C%E7%AE%97%E6%B3%95-gradient-descent-optimization-algorithms-b61ed1478bd7

![](https://hackmd.io/_uploads/ryFS11oga.png)

## The learning rate decides the step size

![](https://hackmd.io/_uploads/BJkJe1ixT.png)

## Optimization algorithms for the learning rate
- SGD
- Momentum
- Adagrad
- RMSProp
- **Adam (most commonly used)**

## CNN (Convolutional Neural Network)
A neural-network model that imitates the visual system of the human brain.

## RNN (Recurrent Neural Network)
> A model designed to handle sequential data: it can remember past information and use it when processing new events.
> However, it only retains short-term memory; long-range information is forgotten.

### Types of sequence modeling:
- Many-to-one
  ![](https://hackmd.io/_uploads/Syv3xd0za.png)
- One-to-many
  ![](https://hackmd.io/_uploads/rJ8AldCMT.png)
- Many-to-many
  ![](https://hackmd.io/_uploads/Sy3lbuCG6.png) ![](https://hackmd.io/_uploads/HkUBb_Czp.png)

#### Unrolling an RNN
![](https://hackmd.io/_uploads/rJujZ_Cfp.png) ![](https://hackmd.io/_uploads/HkblMORGa.png) ![](https://hackmd.io/_uploads/HkYUM_CM6.png) ![](https://hackmd.io/_uploads/HymszuAMp.png)

#### An RNN does not have to recur from the hidden layer; other variants are possible
![](https://hackmd.io/_uploads/S1j2GdCMp.png)

#### RNN vs. LSTM
![](https://hackmd.io/_uploads/Skq93_CfT.png) ![](https://hackmd.io/_uploads/r1SnnO0Gp.png)

- An RNN's next output takes the previous step into account, which makes it well suited to time and sequence problems, but it runs into two difficulties:
  1. Vanishing gradients
  2. Exploding gradients

#### Gradient descent / vanishing / exploding
1. Gradient Descent
   - If the variables are linearly related, gradient descent is a tool for finding the minimum of the loss function
   - It usually starts from a randomly chosen point and updates using one sample at a time, which is called Stochastic Gradient Descent. It can get stuck at a local minimum rather than the global minimum; possible remedies:
     1. Batch gradient descent, where every update uses a whole batch of data
     2. Newton's method, momentum, or Adam
     3. Tuning the learning rate, the number of updates, etc. to the problem

#### Vanishing gradients (Gradient Vanishing)
![image](https://hackmd.io/_uploads/Syg5fZBLO6.png)
- For the DNN above:
  1. this layer's output = f(previous layer's output × this layer's weights + bias)
  2. Forward-pass formula from input to output
     ![image](https://hackmd.io/_uploads/H1pEbBU_T.png)
  3. 
If we take the gradient of this layer's weights starting from standard-initialized weights (w1), we can see it is computed backward, from the output layer toward the input layer. This backward search-and-adjust pass is called back propagation. The more layers and parameters there are, the harder it is to find the adjustments that matter: the activation function's derivative also lies between 0 and 1, so as the gradient is propagated backward, the repeated products become very small and eventually reach zero. This is the vanishing-gradient problem.
- Remedies:
  1. Design a better activation function, e.g. ReLU
  2. Design a better gradient-descent algorithm

#### Exploding gradients (Gradient Exploding)
> If the initial weights w are very large — large enough that the downstream activation derivatives exceed 1 — the products also become very large; learning is unstable and good parameters cannot be found.

#### So, for RNNs...
- If the input sequence is too long, the network cannot remember the earlier data, leading to vanishing gradients
- With long sequences, a slightly mis-tuned weight makes the products blow up, leading to exploding gradients

## LSTM (Long short-term memory)
> Overcomes the RNN's difficulty in keeping long-term memory.
>
> [Key mechanism]: the cell state
>
> ![](https://hackmd.io/_uploads/ByD1auRM6.png) ![](https://hackmd.io/_uploads/By4ha_AMT.png)
> It has three gates:
> 1. Forget gate
> 2. Input gate
> 3. Output gate
>
> ![](https://hackmd.io/_uploads/S1siROCfa.png) ![](https://hackmd.io/_uploads/By5lJKRfT.png) ![](https://hackmd.io/_uploads/SJH-1K0G6.png) ![](https://hackmd.io/_uploads/BybGyF0z6.png)

## GRU (Gated Recurrent Unit)
![](https://hackmd.io/_uploads/SJRSyFCz6.png)
> Process:
> ![](https://hackmd.io/_uploads/rympJKRf6.png) ![](https://hackmd.io/_uploads/ryB1gYAzp.png) ![](https://hackmd.io/_uploads/BJ4ggt0fa.png) ![](https://hackmd.io/_uploads/rkf-etRfp.png) ![](https://hackmd.io/_uploads/ByXfgt0Mp.png) ![](https://hackmd.io/_uploads/BJRGxY0f6.png) ![](https://hackmd.io/_uploads/BkFQlKRMp.png) ![](https://hackmd.io/_uploads/H14VxK0Ga.png) ![](https://hackmd.io/_uploads/BygHetAGT.png)

---

## GAN (Generative Adversarial Network)
> Main uses:
> 1. Image generation
> 2. Image-to-image translation
> 3. Colorization
>
> ![a](https://hackmd.io/_uploads/SJA0JZMET.jpg)

#### [How it works]
![image](https://hackmd.io/_uploads/r1S1TgMNT.png) ![image](https://hackmd.io/_uploads/HyD96xzNp.png)

#### Training steps
1. Train the Discriminator first, keeping the Generator frozen
   ![image](https://hackmd.io/_uploads/HkFiEbGE6.png) ![image](https://hackmd.io/_uploads/rJgGAxMEa.png)
2. Then train the Generator
   ![image](https://hackmd.io/_uploads/H1IAVbz4T.png) ![image](https://hackmd.io/_uploads/r1ewCxGNp.png)
3. 
Repeat in this fashion, training the two alternately.
   ![image](https://hackmd.io/_uploads/H1a-H-f4a.png)
> - Until the Generator's fake samples (its outputs) are completely indistinguishable from the training set's real samples
> - At that point the Discriminator can only guess at random whether a sample is real or fake (50% accuracy)

#### GAN basics
![image](https://hackmd.io/_uploads/H1x44WM46.png) ![image](https://hackmd.io/_uploads/Bk24V-M4a.png) ![image](https://hackmd.io/_uploads/SyuHNZMNp.png)
> GAN basics — the training process:
> - In a high-dimensional, non-convex setting, training a GAN well is not easy
> - In practice, training a GAN is continual trial and error
![image](https://hackmd.io/_uploads/SJU5NWGEa.png)

---
GAN learning resources:
1. https://github.com/GANs-in-Action/gans-in-action
2. https://mgubaidullin.github.io/deeplearning4j-docs/generative-adversarial-network.html
3. https://developers.google.com/machine-learning/gan/gan_structure
---

[The evolution of GANs]

### Autoencoder
> The earliest form of generator
> ![image](https://hackmd.io/_uploads/SkWjlZzEa.png)

https://jason-chen-1992.weebly.com/home/-autoencoder

![image](https://hackmd.io/_uploads/BkNvW-zEp.png)

### VAE (Variational AutoEncoder)
![image](https://hackmd.io/_uploads/SkE3--fE6.png)
> VAE is a technique based on Bayesian machine learning: it tries to find suitable rules (a mean and a standard deviation) that describe where the data lies in the latent space.
> - Built with the deep-learning library Keras, using the Keras Functional API

![image](https://hackmd.io/_uploads/rk1i7WGNa.png)

### DCGAN (Deep Convolutional Generative Adversarial Networks)
> Combines the basic principles of GANs with deep convolutional neural networks (CNNs). By improving the GAN architecture, DCGAN raises training stability and the quality of the generated images.

![image](https://hackmd.io/_uploads/HJGjB-fEa.png) ![image](https://hackmd.io/_uploads/ryS3HWM4T.png)

[Problems when combining with a CNN, and their fixes]
> ![image](https://hackmd.io/_uploads/Hy7gLbzV6.png)
> Fix: DCGAN uses the ReLU activation in every Generator layer, with tanh in the last layer, and the LeakyReLU activation in the Discriminator; this improves the model's performance.

![image](https://hackmd.io/_uploads/B17ZLbGNa.png) ![image](https://hackmd.io/_uploads/BJmfIWMNa.png) ![image](https://hackmd.io/_uploads/HJPVIbGV6.png)

> DCGAN introduces batch normalization in most layers of the generator and discriminator, which mitigates vanishing and exploding gradients during training and thus stabilizes the model.

![image](https://hackmd.io/_uploads/r1Md8ZzNT.png)

> The "reverse operation" here refers to the transposed convolution.
>
> DCGAN particularly emphasizes transposed convolutions for upsampling and for improving the quality of generated images. Its design removes all pooling layers and adopts batch normalization and transposed-convolution layers throughout to improve performance and stabilize training.
The generator uses a series of convolution layers (in practice, deconvolution or transposed-convolution layers) to turn the noise vector into an image; each transposed-convolution layer progressively enlarges the feature maps until the final image is formed.

![image](https://hackmd.io/_uploads/BkTdIWMET.png)

> The discriminator uses convolution layers to progressively shrink the spatial dimensions of the feature maps, finally outputting a single prediction that says whether the input image is real or fake.

### Challenges facing GANs
![image](https://hackmd.io/_uploads/SJM9qbG4a.png) ![image](https://hackmd.io/_uploads/H1W0cbMEp.png) ![image](https://hackmd.io/_uploads/HkM1sbfNp.png)

### CGAN (Conditional GAN)
> During training, additional conditioning information constrains both the Generator and the Discriminator.
> The condition can take any form:
> - class labels
> - feature descriptions
> - accompanying documents

### CycleGAN
![image](https://hackmd.io/_uploads/S1UVnZz4T.png)

https://github.com/phillipi/pix2pix
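The upsampling performed by the transposed-convolution layers above follows a standard output-size formula. The sketch below is an illustration of that arithmetic only (assuming no output padding; kernel=4, stride=2, padding=1 is just a typical DCGAN-style choice, not a value taken from this note):

```python
def conv_transpose_size(n_in, kernel, stride, padding):
    """Spatial size after one transposed convolution (no output padding)."""
    return (n_in - 1) * stride - 2 * padding + kernel

# A DCGAN-style generator grows a small feature map into an image:
# 4x4 -> 8x8 -> 16x16 -> 32x32 with kernel=4, stride=2, padding=1.
size = 4
sizes = [size]
for _ in range(3):
    size = conv_transpose_size(size, kernel=4, stride=2, padding=1)
    sizes.append(size)

print(sizes)  # [4, 8, 16, 32]
```

Each layer exactly doubles the spatial size with these hyperparameters, which is why stacks of such layers are a convenient replacement for pooling-free upsampling in the generator.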
