# MNIST Multi-Digit Real-Time Recognition: AutoKeras with OpenCV, Practical Deep Learning That Even Runs on a Raspberry Pi

![](https://i.imgur.com/q8dYZb4.jpg)

In several of our previous articles, we showed how AutoKeras lets users train excellent neural network models with almost no deep learning expertise, and AutoKeras is especially strong at visual recognition. But have you ever wondered how such a model could be applied in a real-world scenario, such as real-time image recognition?

As the book explains, the MNIST handwritten digit dataset has long been regarded as the "Hello World" of deep learning. Precisely because it consists of black-and-white handwritten digits, the training data is simple and easy to obtain, the model trains quickly, and image preprocessing is not too difficult (this is in fact quite similar to OCR), so it is easy to build a feature that detects multiple digits in the same frame.

![](https://i.imgur.com/VU3UPzv.jpg)

To speed up prediction, we will also convert the model to the TensorFlow Lite format, a version of TensorFlow designed for mobile devices and microcontrollers that is built into the standard TensorFlow package. You can also use the pure TF Lite runtime, which lets you run this article's example on platforms without a full TensorFlow installation, including the Raspberry Pi (see below).

The test environment below is AutoKeras 1.0.16.post1 + TensorFlow 2.5.2 + OpenCV 4.5.5. Given the nature of the program, we run it in a native Python 3.9 64-bit environment.

Quick install:

`pip3 install --upgrade autokeras tensorflow==2.5.2 numpy opencv-python`

**Training the AutoKeras model and generating a TF Lite file**

The first step is to train an MNIST recognition model and convert it to TF Lite format. This part is fairly simple, since AutoKeras can quickly produce a high-performance CNN model for us:

```
TF_LITE_MODEL = './mnist.tflite'  # filename for the saved TF Lite model

import autokeras as ak
import tensorflow as tf
from tensorflow.keras.datasets import mnist

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Train an AutoKeras model
clf = ak.ImageClassifier(max_trials=1, overwrite=True)
clf.fit(x_train, y_train)

# Evaluate prediction performance
loss, accuracy = clf.evaluate(x_test, y_test)
print(f'\nPrediction loss: {loss:.3f}, accuracy: {accuracy*100:.3f}%\n')

# Export the Keras model and print its architecture summary
model = clf.export_model()
model.summary()

# Convert the Keras model to TF Lite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TF Lite model
with open(TF_LITE_MODEL, 'wb') as f:
    f.write(tflite_model)
```

The output we got is as follows (only about 6 to 7 minutes, thanks to a GPU):

```
Trial 1 Complete [00h 04m 14s]
val_loss: 0.03911824896931648

Best val_loss So Far: 0.03911824896931648
Total elapsed time: 00h 04m 14s
Epoch 1/21
1875/1875 [==============================] - 8s 4ms/step - loss: 0.1584 - accuracy: 0.9513
Epoch 2/21
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0735 - accuracy: 0.9778
Epoch 3/21
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0616 - accuracy: 0.9809
(...omitted...)
Epoch 19/21
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0213 - accuracy: 0.9932
Epoch 20/21
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0226 - accuracy: 0.9927
Epoch 21/21
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0197 - accuracy: 0.9938
313/313 [==============================] - 1s 3ms/step - loss: 0.0387 - accuracy: 0.9897

Prediction loss: 0.039, accuracy: 98.970%

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 28, 28)]          0
_________________________________________________________________
cast_to_float32 (CastToFloat (None, 28, 28)            0
_________________________________________________________________
expand_last_dim (ExpandLastD (None, 28, 28, 1)         0
_________________________________________________________________
normalization (Normalization (None, 28, 28, 1)         3
_________________________________________________________________
conv2d (Conv2D)              (None, 26, 26, 32)        320
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 24, 24, 64)        18496
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 64)        0
_________________________________________________________________
dropout (Dropout)            (None, 12, 12, 64)        0
_________________________________________________________________
flatten (Flatten)            (None, 9216)              0
_________________________________________________________________
dropout_1 (Dropout)          (None, 9216)              0
_________________________________________________________________
dense (Dense)                (None, 10)                92170
_________________________________________________________________
classification_head_1 (Softm (None, 10)                0
=================================================================
Total params: 110,989
Trainable params: 110,986
Non-trainable params: 3
_________________________________________________________________
```

As you can see, the model achieves 98.97% prediction accuracy on the test set. And since it is a simple CNN, the converted TF Lite file is only 436 KB.

One thing to note about converting a Keras model to TF Lite is quantization: the model's data can be converted from floating point to integers to speed up prediction and shrink the file. Fortunately, MNIST image data is already uint8, and so are the images OpenCV reads later, so no extra handling is needed.

**Preprocessing with OpenCV**

The second step is to load the TF Lite model, capture frames from the camera with OpenCV, preprocess them, then extract regions that might contain digits and run them through the model.

The most intuitive approach is to define a box of a fixed size, slide it across the frame (a sliding window), and keep predicting on the box's contents, retaining only results that look very likely to be digits. Even if we only allow digits to appear against plain white paper, there is still a problem: digits come in different sizes, which means scanning the frame repeatedly with windows of different sizes and then removing overlapping detections.

Fortunately, since the training set's digits are white on black, we can use OpenCV to quickly preprocess the frame and mark out candidate digit regions. Let's briefly walk through how this preprocessing works.

Below is an original image with 10 digits we wrote on paper with a marker pen:

![](https://i.imgur.com/BCMSP7L.jpg)

Because of the nature of the MNIST dataset, use a marker pen to get clearly visible strokes, and avoid writing digits too thin or slanted, or they won't be recognized correctly.

The first preprocessing step converts the image to grayscale:

![](https://i.imgur.com/5ZUotGU.jpg)

Next comes binarization, which turns every pixel into either pure black or pure white:

![](https://i.imgur.com/1rJIAnQ.jpg)

When binarizing, you can set a threshold deciding how bright or dark a grayscale pixel must be to become white or black, but we can also let OpenCV decide automatically (see the code later).

Then we apply one more operation called morphological closing:

![](https://i.imgur.com/p7Fbueh.jpg)

Morphological closing actually consists of two parts; here it applies dilation to the white areas first, then erosion:

1. Expand the white regions outward uniformly.
2. Then shrink the white regions inward from their edges uniformly.

The practical effect is that the edges of the white strokes become cleaner and tiny black specks inside the strokes disappear (they get covered when the surrounding white expands), which helps the model read the digits.

Next we can use OpenCV to find the contours of these shapes:

![](https://i.imgur.com/Q2ycKlJ.jpg)

Contours are very handy: they let us mark out candidate digits quickly without implementing a sliding window or worrying about overlapping detection boxes.

Of course, OpenCV will also box some things near the frame's edges (such as the punch holes in the picture), so the program will tell it to ignore any contour box that sits near the border.

Finally, we process these boxes so the images can be fed to the model:

1. The digits we write vary in width and height, so their contours are not square, whereas MNIST images are all 28 x 28 with the digit centered. After cropping out a digit region, we pad its edges with black borders to make it resemble the original training images.
2. Then resize the image to 28 x 28.
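To make the closing operation described above concrete, here is a small, self-contained NumPy sketch of dilation followed by erosion on a binary array. This is an illustration of the idea only; the helper functions are ours, and the actual program simply calls `cv2.morphologyEx` with `cv2.MORPH_CLOSE`:

```python
import numpy as np

def dilate(img, k=3):
    """Grow white (1) regions: a pixel becomes 1 if any pixel in its k x k window is 1."""
    h, w = img.shape
    padded = np.pad(img, k // 2, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def erode(img, k=3):
    """Shrink white regions: a pixel stays 1 only if its whole k x k window is 1.
    The border is padded with 1s so the frame edges do not erode away."""
    h, w = img.shape
    padded = np.pad(img, k // 2, constant_values=1)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

def closing(img, k=3):
    """Morphological closing = dilation followed by erosion."""
    return erode(dilate(img, k), k)

# A white stroke with a one-pixel black speck in the middle
stroke = np.ones((5, 5), dtype=np.uint8)
stroke[2, 2] = 0
print(closing(stroke)[2, 2])  # the speck is filled in: prints 1
```

Dilation covers the black speck with the surrounding white, and the following erosion restores the stroke's outer edge, which is exactly why closing cleans up noise inside the digits.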
The result below adds the model's predicted values:

![](https://i.imgur.com/1KOa1pa.jpg)

As you can see, the preprocessing steps above let us find candidate digit regions in the frame and have the model read them. (Of course, this model also has no way to decide that something is *not* a digit.)

**Real-time detection with OpenCV**

The program below applies these steps to detect handwritten digits in a live webcam feed:

```
TF_LITE_MODEL = './mnist.tflite'  # TF Lite model
IMG_W = 640                       # frame width
IMG_H = 480                       # frame height
IMG_BORDER = 40                   # width of the frame border to ignore
DETECT_THRESHOLD = 0.7            # probability threshold for showing a digit (70%)
CONTOUR_COLOR = (0, 255, 255)     # digit box color (BGR)
LABEL_COLOR = (255, 255, 0)       # digit box label color (BGR)
LABEL_SIZE = 0.7                  # digit box label font scale (70%)

import cv2
import numpy as np

# Load the TF Lite model
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path=TF_LITE_MODEL)

# Get info about the model's input and output nodes
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Get the image size of the input node
INPUT_SHAPE = input_details[0]['shape'][1:3]

# Kernel for morphological closing
MORPH_KERNEL = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

# Capture frames from the webcam
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, IMG_W)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, IMG_H)

while cap.isOpened():
    # Grab a frame
    success, frame = cap.read()

    # Convert the frame to grayscale
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Binarize the frame (black and white, threshold chosen automatically)
    _, frame_binary = cv2.threshold(frame_gray, 0, 255,
                                    cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

    # Remove black specks with morphological closing
    frame_binary = cv2.morphologyEx(frame_binary, cv2.MORPH_CLOSE, MORPH_KERNEL)

    # Find the contours of shapes in the frame
    contours, _ = cv2.findContours(frame_binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Iterate over all boxes
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)

        # Ignore boxes that overlap the frame border
        if x < IMG_BORDER or x + w > (IMG_W - 1) - IMG_BORDER or \
           y < IMG_BORDER or y + h > (IMG_H - 1) - IMG_BORDER:
            continue

        # Ignore boxes that are too large or too small
        if w < INPUT_SHAPE[0] // 2 or h < INPUT_SHAPE[1] // 2 or \
           w > IMG_W // 2 or h > IMG_H // 2:
            continue

        # Crop the target region out of the frame
        img = frame_binary[y: y + h, x: x + w]

        # Pad the image with black borders to make it square
        r = max(w, h)
        y_pad = ((w - h) // 2 if w > h else 0) + r // 5
        x_pad = ((h - w) // 2 if h > w else 0) + r // 5
        img = cv2.copyMakeBorder(img, top=y_pad,
                                 bottom=y_pad, left=x_pad, right=x_pad,
                                 borderType=cv2.BORDER_CONSTANT, value=(0, 0, 0))

        # Shrink the image to fit the model's input shape
        img = cv2.resize(img, INPUT_SHAPE, interpolation=cv2.INTER_AREA)

        # Make a prediction
        interpreter.set_tensor(input_details[0]['index'],
                               np.expand_dims(img, axis=0))
        interpreter.invoke()
        predicted = interpreter.get_tensor(output_details[0]['index']).flatten()

        # Get the predicted label and its probability
        label = predicted.argmax(axis=0)
        prob = predicted[label]

        # Ignore predictions below the threshold
        if prob < DETECT_THRESHOLD:
            continue

        # Draw the box and label on the original frame
        cv2.rectangle(frame, (x, y), (x + w, y + h), CONTOUR_COLOR, 2)
        cv2.putText(frame, str(label), (x + w // 5, y - h // 5),
                    cv2.FONT_HERSHEY_COMPLEX, LABEL_SIZE, LABEL_COLOR, 2)

    # Show the frame
    cv2.imshow('MNIST Live Detection', frame)

    # Quit when the user presses 'q'
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

This time we draw the contour boxes and predictions directly on the original frame, and as soon as we write on the paper with a marker pen, the model picks the digit up:

<iframe width="560" height="315" src="https://www.youtube.com/embed/j9VTobscRfs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

(https://youtu.be/j9VTobscRfs)

**Running this example on a Raspberry Pi**

You can also use the TF Lite runtime to run the program above on your local machine or on a device like a Raspberry Pi. Go to https://github.com/google-coral/pycoral/releases/ and download the wheel that matches your platform:

`pip3 install <wheel path and filename>`

For example, on a Raspberry Pi 4 running the Bullseye release of Raspberry Pi OS (the latest as of late last year), download

`tflite_runtime-2.5.0.post1-cp39-cp39-linux_armv7l.whl`

For the Buster release (including Raspberry Pi OS Legacy), use

`tflite_runtime-2.5.0.post1-cp37-cp37m-linux_armv7l.whl`

Then replace these two lines of the real-time detection program

```
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path=TF_LITE_MODEL)
```

with these two:

```
from tflite_runtime.interpreter import Interpreter
interpreter = Interpreter(model_path=TF_LITE_MODEL)
```

The rest of the program stays unchanged, and you can load the same TF Lite model without installing full TensorFlow (including on Raspberry Pi OS versions that currently cannot install TensorFlow 2). Remember to copy the generated mnist.tflite file to the Raspberry Pi as well:

![](https://i.imgur.com/6pUBARa.png)
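If you move the script between machines often, the two-line swap can be automated with an import fallback. A minimal sketch, where the function name `get_interpreter_class` is ours and not part of either library:

```python
def get_interpreter_class():
    """Return a TF Lite Interpreter class, preferring the lightweight
    tflite_runtime package and falling back to full TensorFlow."""
    try:
        # Available when the tflite_runtime wheel is installed (e.g. on a Pi)
        from tflite_runtime.interpreter import Interpreter
        return Interpreter, 'tflite_runtime'
    except ImportError:
        pass
    try:
        # Available when the full tensorflow package is installed
        import tensorflow as tf
        return tf.lite.Interpreter, 'tensorflow'
    except ImportError:
        # Neither package is installed
        return None, 'none'

Interpreter, source = get_interpreter_class()
print(source)
```

With this in place, `interpreter = Interpreter(model_path=TF_LITE_MODEL)` works unchanged on both the desktop and the Pi.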
**Epilogue: a note on other multi-object detection models**

As mentioned earlier, one way to find candidate objects in an image is to scan the frame with a sliding window and predict repeatedly, but this imposes a considerable computational burden. Various techniques have been proposed to solve this problem.

The well-known YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector) models work differently from each other, but both detect multiple objects from a single pass of the frame through the model. YOLO uses a single model to run regression over the entire frame, while SSD first uses a CNN to generate features from the frame. The training sets these models use come with the location of every object already annotated, so the models can directly report where candidate objects are in the frame. In other words, we don't have to box the objects ourselves; the model does it on its own. You can even annotate your own photos and use these models for transfer learning.

![](https://i.imgur.com/LvRtIW5.jpg)

If you're interested, see "[Raspberry Pi 樹莓派:Python x AI 超應用聖經](https://pse.is/3w8gjx)", which includes examples of image and real-time video recognition with YOLO and MobileNet-SSD. MobileNet was developed by Google, and the pretrained MobileNet-SSD has likewise been converted to a TF Lite version so it can run on mobile devices or a Raspberry Pi.

Related books:
《AutoML 自動化機器學習》→ https://pse.is/3vbac4
《Raspberry Pi 樹莓派》→ https://pse.is/3w8gjx
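Detection models like these produce many overlapping candidate boxes and typically deduplicate them with non-maximum suppression (NMS), the same overlap problem a sliding-window scan would face. A minimal pure-Python sketch, with boxes as `(x, y, w, h)` tuples and function names of our own choosing:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any box that overlaps a kept box by more than `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (50, 50, 10, 10)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```

Frameworks ship tuned versions of this (e.g. TensorFlow's `tf.image.non_max_suppression`), but the greedy loop above is the core idea.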
