# [Explainable AI] Transformer Interpretability Beyond Attention Visualization: Transformer Interpretability and Visualization

###### tags: `Literature Reading` `XAI` `Visualization` `Interpretability` `Transformer`

### [Entry page for AI / ML study notes](https://hackmd.io/@YungHuiHsu/BySsb5dfp)

---

### Related notes on ViT and Transformers

- [[Transformer_CV] Vision Transformer (ViT) key notes](https://hackmd.io/tMw0oZM6T860zHJ2jkmLAA)
- [[Transformer] Self-Attention and Transformer notes](https://hackmd.io/fmJx3K4ySAO-zA0GEr0Clw)
- [[Self-supervised] Self-supervised learning and Vision Transformers: key notes and recent developments](https://hackmd.io/7t35ALztT56STzItxo3UiA)
- [[Explainable AI] Transformer Interpretability Beyond Attention Visualization: Transformer interpretability and visualization](https://hackmd.io/SdKCrj2RTySHxLevJkIrZQ)
- [[Transformer_CV] Masked Autoencoders (MAE) paper notes](https://hackmd.io/lTqNcOmQQLiwzkAwVySh8Q)

### [Transformer Interpretability Beyond Attention Visualization](https://arxiv.org/abs/2012.09838)

#### [Official code](https://github.com/hila-chefer/Transformer-Explainability)

## Core Concepts

![](https://hackmd.io/_uploads/ryGf4TFCq.png =500x)
![](https://hackmd.io/_uploads/BJ7dN6tRq.png =500x)

- Model performance
:::spoiler
![](https://hackmd.io/_uploads/S1KaSM_fq.png =700x)
:::

## Prerequisites

### Transformer Architecture and Self-Attention

:::spoiler

#### References
- [Attention is All you Need](https://arxiv.org/abs/1706.03762)
- [The Illustrated Transformer](https://jalammar.github.io/illustrated-transformer/)
- [Illustrated self-attention: animated walkthrough of Q (Query), K (Key), V (Value)](https://towardsdatascience.com/illustrated-self-attention-2d627e33b20a)

#### Self-Attention
- Scaled dot-product attention
    - The dot product reflects the similarity between two vectors

![](https://i.imgur.com/i4yqJyW.png =300x)
![](https://i.imgur.com/XiVdAHa.png =200x)

#### Multi-head Attention
- Built from multiple self-attention modules whose outputs are concatenated
- Gives the model flexible, multi-faceted attention
- [The structure of Multi-Head Attention](https://www.tutorialexample.com/understand-multi-head-attention-in-deep-learning-deep-learning-tutorial/)

![](https://i.imgur.com/a4e9iLj.png =300x)

#### Overall Transformer architecture
- Model architecture

![Transformer](https://i.imgur.com/ZMxXphs.png =300x)
:::

## Explainability in Computer Vision

Given an input image and a CNN, many methods have been proposed to generate heatmaps that indicate local relevance. Most of them fall into one of two categories: gradient-based methods and attribution-propagation methods.

### XAI - Gradient based methods

#### [Intuitive understanding of gradients](https://docs.google.com/presentation/d/1vgZ-AVQOap6XE_Q25qMNQGHTZ5dMwR--i1aVmZisyZU/edit#slide=id.g1517cea904c_0_26)

#### Gradient based methods
:::spoiler
- Gradients * input activations
- GradCAM
    - class-specific
    - based only on the gradients of the deepest layers
    - low spatial resolution

![](https://i.imgur.com/fLIu7Ix.png =600x)
- [Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks](https://www.researchgate.net/figure/An-overview-of-all-the-three-methods-CAM-Grad-CAM-GradCAM-with-their-respective_fig9_320727679)
:::

### XAI - Attribution propagation methods

#### LRP (Layer-wise Relevance Propagation)
:::spoiler
- Recursively decomposes the network's decision into contributions from the preceding layers, all the way back to the input
- Based on Deep Taylor Decomposition (DTD), relevance is back-propagated from the predicted class to the input image
- The total relevance at each layer is conserved (constant across layers)

![](https://i.imgur.com/gcbxnuA.png =600x)
![](https://i.imgur.com/bw64YnA.png =600x)
- [Explainable AI explained! | #6 Layerwise Relevance Propagation with MRI data](https://www.youtube.com/watch?v=PDRewtcqmaI)
:::

#### Computing each neuron's relevance score (contribution)
:::spoiler
- [[TA supplementary lecture] More about Explainable AI, 15 min](https://www.youtube.com/watch?v=LsdiOt0wiWM)
- Looks at each unit's share of the overall influence (global), not its weight sensitivity (local)

![](https://i.imgur.com/jcz0J0c.png =300x)
- Compute each neuron's relevance score (contribution)
    - contribution = activation (z) × weight

![](https://i.imgur.com/6AWWMte.png =300x)
![](https://i.imgur.com/rsG98Cy.png =300x)
- Relevance scores are redistributed to the preceding layer (from outputs back toward inputs)
- The backward propagation is applied recursively until the input is reached

![](https://i.imgur.com/8MgdXHc.png)
:::

##### Additional LRP references
:::spoiler
- [Interactive LRP Demos](http://www.heatmapping.org/)
- [Explainable AI (XAI) series 03, propagation-based methods: Layer-Wise Relevance Propagation](https://medium.com/ai-academy-taiwan/%E5%8F%AF%E8%A7%A3%E9%87%8B-ai-xai-%E7%B3%BB%E5%88%97-03-%E5%9F%BA%E6%96%BC%E5%82%B3%E6%92%AD%E7%9A%84%E6%96%B9%E6%B3%95-propagation-based-layer-wise-relevance-propagation-1b79ce96042d)
- [Relevance propagation: On Pixel-Wise Explanations for Non-Linear Classifier Decisions](https://zhuanlan.zhihu.com/p/427403629)
:::

---

## Paper Highlights

##### Based on [Intro to Transformers and Transformer Explainability (talk by the author)](https://www.youtube.com/watch?v=a0O_QhE9XFM) (from the 45-minute mark)

### 1. Starting from the attention matrix

- Each attention matrix involves the leading cls_token, which carries the global representation, followed by the remaining tokens (patches); its entries reflect how strongly tokens relate to one another
    - Attention matrix: $A = softmax(\frac{Q \cdot K^T}{\sqrt{d_k}})$
    - The attention (similarity) between tokens comes from the dot product of Q and K, $Q \cdot K^T$
- Could we likewise inspect how strongly each token (patch) relates to the cls_token (the class token) to estimate how important each patch is for explaining the predicted class? (see the sketch after this section)
    - The values in the attention matrix are similarity scores, but they can also be converted into relevance scores

![](https://i.imgur.com/SmNohzN.png =400x)
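To make the formula above concrete, here is a minimal PyTorch sketch of scaled dot-product attention. The toy shapes and tensor names are illustrative assumptions, not taken from the paper's code:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """A = softmax(Q·K^T / sqrt(d_k)); output = A·V.

    Q, K, V: (n_tokens, d_k) tensors; in a ViT, token 0 is the cls_token.
    Returns the attended values and the attention matrix A itself, whose
    row 0 holds the cls_token's attention over all tokens.
    """
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k**0.5   # (n_tokens, n_tokens)
    A = F.softmax(scores, dim=-1)                 # each row sums to 1
    return A @ V, A

# Toy example: 1 cls_token + 4 patch tokens, d_k = 8
Q, K, V = torch.randn(5, 8), torch.randn(5, 8), torch.randn(5, 8)
out, A = scaled_dot_product_attention(Q, K, V)
print(A[0])  # cls_token's similarity-based attention over the 5 tokens
```

Row `A[0]` is exactly the cls_token-to-patch similarity that the question above proposes to reinterpret as a relevance signal.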
### 2. How to aggregate multiple attention maps

- Attention maps need to be aggregated across:
    - attention heads
    - attention layers

#### Attention rollout (Abnar et al., 2020)
- Heads: aggregated by averaging
- Layers: aggregated by matrix multiplication

![](https://i.imgur.com/RZjXolV.png =500x)

#### Transformer Interpretability Beyond Attention Visualization (Chefer et al., 2021)

##### Attention heads are aggregated using both the relevance maps of the attention matrices and their gradients

###### ==Relevance maps (LRP)==
:::spoiler
- The Layer-Wise Relevance Propagation (LRP) rule: $R_j = \sum_k \frac{z_{jk}}{\sum_{j'} z_{j'k}} R_k$

![](https://i.imgur.com/cSAjGH7.png =150x)
- $R$ is the relevance (contribution) score, $j$ indexes a neuron in the shallower layer, $k$ a neuron in the deeper layer, and $z_{jk}$ measures how much neuron $j$ contributes to neuron $k$. The shallower $R_j$ is obtained by back-propagating the deeper $R_k$, so we know how much of each $R_k$ flows into a given $R_j$
- The total relevance is the same in every layer
    - $\sum_{j}R_{j}\:=\:\sum_{k}R_{k}$
- The LRP propagation process

![](https://i.imgur.com/H2Oeo3Z.png =400x)
- Propagating from the output layer to the input layer

![](https://i.imgur.com/r9ed0KV.png =400x)
    - By defining each layer's contribution and relevance with respect to the layer above it, the final output is decomposed layer by layer down to the pixels of the original image
    - The relevance scores of each layer sum to 1 and are distributed over all the weights
- The relevance map determines how much each similarity score influences the output

---
:::

- Use LRP relevance values instead of the raw attention-map values
:::spoiler
The attention map only contains the $Q \cdot K^T$ similarity values, yet within a Transformer block the attention output still interacts with other network layers (e.g., it is multiplied by linear layers). Raw attention similarities therefore cannot fully reflect how much a specific token (patch) influences the class prediction. For this reason, the paper proposes computing LRP relevance values as a substitute.
:::

###### ==Gradients==
- Class-specific signal
- During backpropagation, the gradients reflect how strongly the weights respond to changes in a specific class score

###### Weighted average of the heads

$\overline{A}^{(b)} = I + \mathbb{E}_h\left[(\nabla A^{(b)} \odot R^{(n_b)})^{+}\right]\quad\quad(13)$

$C = \overline{A}^{(1)} \cdot \overline{A}^{(2)} \cdots \overline{A}^{(B)} \quad\quad(14)$

![](https://i.imgur.com/7X50wLx.png =600x)
:::spoiler
- $C ∈ \mathbb{R}^{s×s}$: the weighted attention relevance
    - $s$: number of tokens
- $(\nabla A \odot R)^+$
    - The gradients serve as weights and are multiplied element-wise with the LRP relevance map
    - Only positive values are kept when computing the gradient weights
        - A larger positive gradient indicates a stronger positive contribution to the model's decision for that class
- $\mathbb{E}_h$ is the mean across the "heads" dimension
- $I$: because of the skip connections in the Transformer block, the identity matrix is added when computing the attention relevance map so that each token's (patch's) own features are not lost
    - (To account for the skip connections in the Transformer block, we add the identity matrix to avoid self-inhibition for each token.)
- To improve on rollout's plain averaging of the heads, the paper exploits the fact that gradient magnitudes respond to the class signal: the relevance map (LRP) is multiplied element-wise by the gradients, yielding class-aware weights and hence a weighted average over the attention heads
- The weighted attention maps of all layers are then aggregated by matrix multiplication (layer aggregation), see Eq. (14) and the sketch below
:::
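A minimal sketch of the head and layer aggregation in Eqs. (13)-(14), assuming the per-block attention maps, their gradients with respect to the target class score, and the LRP relevance maps have already been computed (the official repo obtains these during a backward pass; the tensor names here are illustrative assumptions):

```python
import torch

def aggregate_relevance(attns, grads, relevances):
    """Eqs. (13)-(14): gradient-weighted relevance, averaged over heads,
    then rolled out across blocks by matrix multiplication.

    attns, grads, relevances: lists of B tensors, each of shape (h, s, s),
    one per Transformer block (h heads, s tokens).
    Returns C of shape (s, s).
    """
    s = attns[0].size(-1)
    C = torch.eye(s)
    for A, gA, R in zip(attns, grads, relevances):
        A_bar = (gA * R).clamp(min=0).mean(dim=0)  # E_h[(∇A ⊙ R)^+], Eq. (13)
        A_bar = torch.eye(s) + A_bar               # identity for skip connections
        C = C @ A_bar                              # layer rollout, Eq. (14)
    return C

# The explanation map for the predicted class is then read off as C[0, 1:]:
# the relevance of every patch token with respect to the cls_token.
```

Replacing `(gA * R).clamp(min=0).mean(dim=0)` with a plain `A.mean(dim=0)` recovers Attention Rollout, which makes the ablation settings discussed later easy to relate to this code.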
---

## Evaluation of Experimental Results

### Qualitative evaluation
:::spoiler
- The figure below shows images that contain multiple classes
- Apart from GradCAM and the method in this paper, the other methods show no class specificity
    - (GradCAM actually performs quite well here)

![](https://hackmd.io/_uploads/S1KaSM_fq.png =700x)
:::

### Perturbation tests

#### Positive and negative perturbation
:::spoiler
- The positive and negative perturbation tests use a two-stage setup (a sketch follows after this section):
    1. A pre-trained model is used to extract visualizations for the ImageNet validation set.
    2. Pixels of the input image are gradually masked out and the model's mean top-1 accuracy is measured.
- In positive perturbation, pixels are masked from the highest relevance score to the lowest
    - The model's accuracy is expected to drop sharply, indicating that the masked pixels matter for the classification score
- In negative perturbation, pixels are masked from the lowest relevance score to the highest
    - When the removed pixels are irrelevant to the predicted class, accuracy should degrade slowly; a slow drop shows that the relevance scores genuinely separate important from unimportant regions, i.e., that they explain well
- In both cases, the area under the curve (AUC) is measured for removal of 10%-90% of the pixels
:::

- The AUC (area under the curve) metric
:::spoiler
- The ROC curve plots the True Positive Rate against the False Positive Rate
    - The trade-off between benefit (true positive rate) and cost (false positive rate)
- AUC is the area under the ROC curve; the closer the curve is to the upper-left corner, the better the model's classification performance

![](https://i.imgur.com/PTT2aOp.png =300x)
[An example of ROC curves with good (AUC = 0.9) and satisfactory (AUC = 0.65) parameters of specificity and sensitivity](https://www.researchgate.net/figure/An-example-of-ROC-curves-with-good-AUC-09-and-satisfactory-AUC-065-parameters_fig2_276079439)
- ROC (Receiver Operating Characteristic)
    - The closer the curve is to the upper-left side, the better the model performs

![](https://i.imgur.com/PYQNOYB.png =300x)
[Simply ROC Curve](https://www.kaggle.com/getting-started/185435)
:::

#### Perturbation test results
:::spoiler
![](https://i.imgur.com/uwSUc3L.png =800x)
- Positive perturbation: lower is better
- Negative perturbation: higher is better
:::
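A minimal sketch of the positive-perturbation protocol described above, assuming a classifier `model`, a single normalized image, and a per-pixel relevance map (e.g., upsampled from `C[0, 1:]` of the aggregation sketch earlier); all names are illustrative assumptions:

```python
import torch

@torch.no_grad()
def positive_perturbation_curve(model, image, label, relevance, steps=9):
    """Mask pixels from most to least relevant and record top-1 correctness.

    image:     (3, H, W) tensor
    relevance: (H, W) per-pixel relevance map
    Returns correctness at 10%, 20%, ..., 90% pixels removed; the AUC of
    this curve is the reported metric (lower is better for positive
    perturbation).
    """
    H, W = relevance.shape
    order = relevance.flatten().argsort(descending=True)  # most relevant first
    results = []
    for step in range(1, steps + 1):
        k = int(H * W * step / 10)                        # 10% ... 90%
        masked = image.clone().view(3, -1)
        masked[:, order[:k]] = 0                          # remove top-k pixels
        pred = model(masked.view(1, 3, H, W)).argmax(dim=-1)
        results.append(pred.item() == label)
    return results

# For negative perturbation, sort ascending instead (least relevant first);
# there a slowly decaying curve (high AUC) indicates a faithful explanation.
```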
### Segmentation

The segmentation metrics are pixel accuracy, mAP, and mIoU.

:::spoiler
![](https://i.imgur.com/66JOYXy.png)
:::

- The mean Average Precision (mAP) metric
:::spoiler
- Average Precision (AP)
    - The area under the precision-recall curve
    - [WHAT IS THE DIFFERENCE BETWEEN PRECISION-RECALL CURVE VS ROC-AUC CURVE?](https://ashutoshtripathi.com/2022/01/09/what-is-the-difference-between-precision-recall-curve-vs-roc-auc-curve/)
        - Left: ROC-AUC curve (TPR vs. FPR)
        - Right: precision-recall curve

![](https://i.imgur.com/tXdgGyi.png =1200x)
- [Precision and Recall Made Simple](https://towardsdatascience.com/precision-and-recall-made-simple-afb5e098970f)
    - Precision (PPV) = true positives (TP) / all predicted positives (TP + FP)
    - Recall (Sensitivity) = true positives (TP) / all actual positives (TP + FN)

![](https://i.imgur.com/4Fu7NDb.png =200x)
![](https://i.imgur.com/Tt33noD.png =300x)
![](https://i.imgur.com/ix50gpB.png =400x)
- mAP (mean average precision) is the average of the APs over multiple classes
:::

- The mean Intersection over Union (mIoU) metric
:::spoiler
- IoU = overlap / union

![](https://i.imgur.com/FffBUPY.png =300x)
:::

### Language reasoning

### Ablation study

#### Model settings
:::spoiler
- Algorithms
    - Transformer Interpretability Beyond Attention Visualization

![](https://i.imgur.com/GemCTqx.png =300x)
    - Attention Rollout

![](https://i.imgur.com/7piLFZ1.png =300x)
- Settings
    - (i) Ours w/o $\nabla A^{(b)}$, which modifies Eq. 13 s.t. we use $A^{(b)}$ instead of $\nabla A^{(b)}$
        - The gradients are dropped and the attention map is used directly
    - (ii) $\nabla A^{(1)} R^{(n_1)}$, i.e. disregarding rollout in Eq. 14, and using the method only on block 1, which is the block closest to the output
        - Apply the $(\nabla A \odot R)^+$ computation only at the deepest block (the Transformer block closest to the output)
    - (iii) $\nabla A^{(B-1)} R^{(n_{B-1})}$, which is similar to (ii), only for block B − 1, which is closer to the input
        - Apply the $(\nabla A \odot R)^+$ computation only at the shallowest block (the Transformer block closest to the input)
:::

#### Results
:::spoiler
![](https://i.imgur.com/oiQSUhL.png)
- Using $(\nabla A^{(1)} \odot R^{(n_1)})^+$ only at the deep block (b = 1, the block just before the output) in place of plain Attention Rollout (attention maps only) gives results closest to the full method, with only a slight drop in performance
    - This is also the layer used by raw attention, partial LRP, and GradCAM
    - It suggests that layers near the output carry far richer information than those near the input
- The ablation results indicate that the method's performance gain comes mainly from combining relevance scores with the gradients of the attention maps
:::

---

## eBird MAE Model Test

### Method
:::spoiler
- Based on the official [colab](https://colab.research.google.com/github/hila-chefer/Transformer-Explainability/blob/main/Transformer_explainability.ipynb)
- Load the ViT model implemented in the official repo first, then load our own trained weights; no changes to the model architecture are needed (a sketch is given at the end of this note)
:::

### Qualitative image tests
:::spoiler
- What the model attends to:
    - 1. the rump pattern, or 2. the facial pattern
    - These resemble the regions a human would intuitively look at first in these seabird photos (especially the sharply contrasting black-and-white rump pattern)

![](https://i.imgur.com/QpwLzTv.png =500x)
![](https://i.imgur.com/7jXA0Fz.png =500x)
- What human experts look for:
    - 1. whether the feet project beyond the tail, 2. wing-shape proportions, 3. body size

![](https://i.imgur.com/ehRHshT.jpg =600x)
:::

### Discussion
:::spoiler
- 館碩: This comes back to the question of purpose. If the two species we looked at last week differ in flight by details as fine as whether the feet project beyond the tail (which, like body size and body proportions, is affected by shooting angle), then that is not a problem our current model is meant to, or able to, solve. Looking across all birds, some species are extremely similar in appearance, likely for reasons at an evolutionary scale; it is acceptable that our model cannot separate those classes, but we can examine afterwards whether the similarity is coincidental, an ancestral trait retained under environmental selection, or convergent evolution
:::

## References

#### Transformer Interpretability Beyond Attention Visualization
- [7. Transformer Interpretability Beyond Attention Visualization (飞霜)](https://zhuanlan.zhihu.com/p/453592514?fbclid=IwAR0DxrUtHDgOc3CfLTqnmj_Gid2FWbSvxWr8JTWtAiksJHftNd62cotSTJg)
- [Transformer visualization: Transformer Interpretability Beyond Attention Visualization](https://zhuanlan.zhihu.com/p/427677129)
- [From FAIR: Transformer interpretability beyond attention visualization](https://zhuanlan.zhihu.com/p/338156132)
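As a practical appendix to the eBird method above, here is a rough sketch of loading the official implementation, swapping in our own fine-tuned weights, and generating a relevance map. Module, function, and file names follow my reading of the official colab and should be verified against the repo; treat them, along with the checkpoint path, as assumptions rather than a tested recipe:

```python
# Sketch only: names below follow the official colab of
# https://github.com/hila-chefer/Transformer-Explainability
# and may need adjusting to the current repo layout.
import torch
from baselines.ViT.ViT_LRP import vit_base_patch16_224          # LRP-ready ViT
from baselines.ViT.ViT_explanation_generator import LRP

model = vit_base_patch16_224(pretrained=True).eval()

# Load our own fine-tuned weights; no architectural changes are needed.
# "ebird_mae_vit.pth" is a placeholder path for our checkpoint.
state_dict = torch.load("ebird_mae_vit.pth", map_location="cpu")
model.load_state_dict(state_dict, strict=False)

explainer = LRP(model)

def explain(image_tensor, class_index=None):
    """image_tensor: (1, 3, 224, 224), normalized as during training.
    Returns a 14x14 patch relevance map for the chosen (or predicted) class."""
    relevance = explainer.generate_LRP(image_tensor,
                                       method="transformer_attribution",
                                       index=class_index)
    return relevance.reshape(14, 14)
```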
