# Graph Fourier MMD ICLR Review Responses

## Response to All Reviewers

Thank you for your thoughtful evaluation. We would first like to emphasize that this method is indeed novel, with far-reaching applications. This is one of the few papers on graph distribution distances; in general, other approaches consider some form of Earth Mover's Distance. While the main application area we explored was the data manifold setting, in which there is a notion of EMD, other distances cannot easily be abstracted to irregular structures such as trees & histograms. Graph Fourier MMD allows for the full generality of a graph, with only the restriction of nonnegative affinities. There is also value in having not only a closed-form solution, but one which is simple, fast to compute, and lends itself to ease of analysis. Graph Fourier MMD corresponds to a quadratic form with the pseudoinverse, and thus gives a convenient statistic for the purposes of, say, a hypothesis test between distributions. Through the use of Chebyshev polynomials, Graph Fourier MMD can be calculated *very* rapidly and relatively accurately. We have included an elaboration of experiment 4.1 in which we demonstrate the high performance of low-order polynomials.

TODO: NOVEL APPLICATIONS TO SINGLE CELL GENOMICS

### Summary of Changes:

# Response to R1 (3iNi)

Thank you for your careful reading of our manuscript. It seems as though the main concerns lie in the advantages over other methods and the settings in which GFMMD can be used. We will address each concern, item by item:

> What is the advantage of the proposed distance (GFMMD) over recent proposed distance for distributions on a graph: e.g., Diffusion EMD, Sobolev transport

The advantages of GFMMD over Diffusion EMD are shown in Section 4.1, quantitatively in Table 1 and qualitatively in Figure 2. Specifically, Table 1 shows GFMMD is more accurate than Diffusion EMD (0.605 vs. 0.584 Spearman-$\rho$) and is also faster than DiffusionEMD when approximated with Chebyshev polynomials (0.654 vs. 2.171 seconds). From a theoretical perspective, DiffusionEMD approximates a distance that is *equivalent* to the EMD; that is, there exist two positive constants $c, C$ such that $c \cdot W(\mu,\nu) < \mathrm{DiffEMD}(\mu,\nu) < C \cdot W(\mu,\nu)$. However, there is no analysis of these bounds, and hence of the accuracy of the approximation. Our method, GFMMD, does not rely on a notion of equivalence: we find a closed form for this MMD, and are also able to identify the feature map (see the sketch below).

GFMMD and DiffusionEMD solve a different problem than is solved by Sobolev transport. Sobolev transport is designed for the setting in which edges between vertices correspond to *costs*, while GFMMD assumes *affinities* between vertices. This difference is highlighted in the introduction of the Sobolev transport paper [Le et al. 2022]: "the edge weight in the graph [...] is a cost to move from one node to the other node of that edge [...] rather than an affinity between these edge nodes of the graph used in diffusion earth mover's distance."
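As a purely illustrative aid for the closed form mentioned above, here is a minimal sketch of GFMMD computed directly from an affinity matrix via the pseudoinverse square root of the Laplacian. This fixes the threshold at T = 1, uses a dense eigendecomposition rather than the paper's Algorithm 1, and the function name and tolerance are our own assumptions.

```python
import numpy as np

def gfmmd(W, P, Q):
    """Graph Fourier MMD between two distributions on the nodes of a graph.

    W : (n, n) symmetric nonnegative affinity matrix.
    P, Q : (n,) nonnegative vectors, each summing to 1.
    Returns ||L^{-1/2}(P - Q)||_2, where L^{-1/2} denotes the square root of
    the Laplacian pseudoinverse (the null space is skipped).
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)              # L = U diag(lam) U^T
    nonzero = lam > 1e-10                   # drop the null space (constants on components)
    # Feature map Phi = diag(lam^{-1/2}) U^T restricted to nonzero eigenvalues
    Phi = (lam[nonzero] ** -0.5)[:, None] * U[:, nonzero].T
    return np.linalg.norm(Phi @ (P - Q))
```

The row space of `Phi` is exactly the feature map referred to in the rebuttal: GFMMD is the Euclidean distance between `Phi @ P` and `Phi @ Q`.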
> For the formulation of GFMMD in Theorem 2, it is a Mahalanobis distance (with matrix L instead of identity matrix as in L2 distance) over the full graph. When input distributions P, Q are very sparse on graph, the proposed GFMMD is still computed on the size of the entire graph, it may affect its advantage on the large-scale graph.

This is a valid point, and minimizing computation for sparse distributions would be ideal. However, it is in general necessary to consider the whole dataset when calculating *manifold* distances between distributions. This is unlike, say, EMD, but it is an inherent trait of Diffusion EMD & Sobolev transport as well. Of course, one could restrict the graph to the combined support of two sparse signals and proceed with the computation, but in doing so one disregards the overarching structure of the manifold. So while the computation does consider the whole dataset, the *problem* does as well. Moreover, while this is a concern, Graph Fourier MMD is particularly advantageous in the case of many distributions on the same graph, since the computation can be parallelized. For many signals on a relatively small graph, pairwise distances can be calculated via a single matrix multiplication followed by pairwise Euclidean distances. This setting is reflected in our application to single-cell data, where there are 20,000 signals to compare on the same graph.

> In Section 2.1., why the author call graph G a distance graph when the edges are computed by affinity (similarity between corresponding two nodes). Is there any difference about the meaning when one use (i) kernel or (ii) distance to build the graph?

This was unclear. There is a distance graph computed initially, which is then transformed into an affinity graph, which is the graph we operate on. We have changed the manuscript to define $G$ as an affinity graph. [[[[CHANGED IN THE PAPER????? -- Alex]]]] [[[!!YES!! --GH]]]

> For arbitrary graph, the weight on the graph may be negative, it is unclear why the identity of f^TLf (in Section 2.1) can imply the positive semidefiniteness for L

Thank you for pointing that out; this was also unclear. GFMMD requires nonnegative edge weights.

> Could the author elaborate the relaxed constraint which is more robust to noise? (In section 3.1)

This claim should have been cited. However, the idea is that the 1-Lipschitz condition is more sensitive to perturbations in the data. What would have been more accurate to say is that Graph Fourier MMD and its witness function are guaranteed to vary smoothly with P-Q, while the same may not necessarily be true of the 1-Wasserstein distance. [[[[CHANGED IN THE PAPER????? -- Alex]]]]

> In the proof of Lemma 4, GFMMD is positive definite? (or negative definite?) or simply nonnegative?

It is a proper distance between probability distributions, and is therefore nonnegative. It is positive unless P = Q, in which case the distance is 0. [[[[CHANGED IN THE PAPER????? -- Alex]]]] [[[Changed to nonnegative -- GH]]]

> In Algorithm 1, the role of Chebyshev polynomial approximation is important to reduce the complexity of the proposed method. It is better if the authors elaborate it with more details and discuss the trade-off about the quality of approximation with the gain from computation.

We have included a larger version of experiment 4.1 to illustrate that the quality of approximation remains quite high with relatively low-order (and thus fast-to-compute) Chebyshev polynomials. [[[Add something about fiedler value? -- alex]]]
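To make the trade-off concrete, below is a generic sketch of a Chebyshev graph filter applied to $P - Q$, which avoids any eigendecomposition and only uses matrix–vector products with $\mathbf{L}$. This is not the paper's Algorithm 1: the function name, the choice of spectral interval `[lmin, lmax]` (e.g. taking `lmin` near the Fiedler value so the filter $\lambda^{-1/2}$ stays bounded), and the default order are all assumptions made for illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_apply(L, x, lmin, lmax, order=32):
    """Approximate L^{-1/2} x with an order-`order` Chebyshev graph filter.

    L    : (n, n) graph Laplacian; only matvecs L @ v are used, so sparse L is fine.
    x    : (n,) signal, e.g. P - Q (mean zero on each connected component).
    lmin : lower end of the spectral interval (e.g. an estimate of the Fiedler value).
    lmax : upper bound on the largest eigenvalue (e.g. 2 * max degree).
    """
    # Fit h(lam) = lam**-0.5 on [lmin, lmax]; the Chebyshev class maps this domain to [-1, 1].
    grid = np.linspace(lmin, lmax, 1000)
    coeffs = C.Chebyshev.fit(grid, grid ** -0.5, deg=order, domain=[lmin, lmax]).coef
    # Three-term recurrence on the rescaled operator (L - a I) / b.
    a, b = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
    t_prev, t_curr = x, (L @ x - a * x) / b
    out = coeffs[0] * t_prev + coeffs[1] * t_curr
    for k in range(2, order + 1):
        t_next = 2.0 * (L @ t_curr - a * t_curr) / b - t_prev
        out += coeffs[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return out  # np.linalg.norm(out) then approximates GFMMD(P, Q)
```

The cost is `order` sparse matvecs per signal, which is the source of the speedup over eigendecomposition-based approaches; the approximation quality versus `order` is what the enlarged experiment 4.1 reports.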
> In Algorithm 1, what is the complexity to build the kNN graph over X with O(nlogn) edges? How does one to choose K to guarantee that we obtain a graph with O(nlogn) edges? Is the built graph connected?

The complexity to build a graph with $O(n \log n)$ edges is $O(n \log^2 n)$, assuming an average neighbor query time of $O(\log n)$. Choosing $k$ is a difficult and application-specific problem. We note that in typical single-cell applications $k$ is chosen as $k=15$, with suggested values in the range of $[5,200]$ in UMAP and $k=5$ as the default in PHATE.

> For experiment 4.1, the authors should elaborate what is the ground truth (exact EMD)? what is its cost metric?

The ground truth in experiment 4.1 is the true geodesic distance on the swiss roll between the centers around which our distributions were generated. This would converge to the exact EMD with a (true and unknown) manifold geodesic ground cost between the underlying densities (not the finite-sample approximations we use). This distance is known by construction of the data-generating process.

> Why the correlation with the Exact EMD is important over several baselines? I am confused that why the proposed Graph Fourier MMD is more correlated with Exact EMD? (as far as I understand MMD and EMD are two different instances of IPM!)

Correlation with the exact EMD is important because it is an interpretable and frequently used metric in this application setting. One of our goals is to show that, on problems of interest, GFMMD correlates with this interpretable metric (exact EMD) while being substantially faster on large datasets. GFMMD also has an interpretation as an MMD. EMD and GFMMD are different instances of IPMs; therefore, there are situations where they do not correlate well. We showed that in single-cell applications they do correlate well, and GFMMD is substantially faster.

[[[**TODO OLD To Remove** We will reflect the writing to make this more clear. That is, the experiment is, at a high level: sample centers $x_1, x_2, \ldots, x_{100}$ and calculate geodesic distances between them. Then, we generate a point cloud around each center from a Gaussian distribution, build a graph from all the resulting points, and calculate distributions over the corresponding clouds. We then compare the distribution distances from each method to the manifold distances between centers, and show Graph Fourier MMD is highest-performing.]]]

> For experiment 4.2, the authors should compare the proposed approach with other baseline distances for distributions on a graph, besides the Wilcoxon rank sum differential expression.

We have added a comparison to DiffusionEMD in the manuscript.

# Response to R2 (SH8k)

We thank the reviewer for a thoughtful review, particularly for noting the novelty of the method for comparing multivariate distributions. The main misunderstanding seems to be in the extent to which GFMMD can be applied outside of the kNN-graph setting. We will address these concerns, item by item:

> Some points shoud be better stated: the initial result in 3.1 assumes a distance function d(a,b) between all pairs of nodes yet the author suggest just before Def 4 and again afterwards to use instead an affinity graph between nodes instead of 1/d2. Is this affinity graph truncated or sparsified) in some way ? Or is it intended to gave a weighted full graph ?

The idea was that $1/d(a,b)^2$ is itself an affinity, and an interpretable one from an OT perspective. However, GFMMD is defined with the constraint $f^T \mathbf{L} f < T$, which is valid for any affinity measure between vertices. Regardless of the chosen affinity, the graph may be sparsified to speed up computation. We modified this paragraph to clarify that we are not assuming any particular form for the affinities. [[[Someone else review the changes --GH]]]
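Since both the choice of $k$ and the choice of affinity come up above, here is one common construction as a minimal sketch: a Gaussian-kernel kNN affinity graph and its Laplacian. The kernel, the symmetrization, and the parameter defaults are illustrative assumptions, not the construction fixed by the paper; any nonnegative affinity matrix could be substituted.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy import sparse

def knn_affinity_graph(X, k=15, sigma=1.0):
    """Sparse symmetric affinity matrix W and Laplacian L from a kNN graph over X.

    X : (n, d) data matrix. Affinities here use a Gaussian kernel on kNN
    distances, but GFMMD only requires that the entries of W be nonnegative.
    """
    n = X.shape[0]
    tree = cKDTree(X)
    dists, idx = tree.query(X, k=k + 1)             # first neighbor is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    vals = np.exp(-(dists[:, 1:].ravel() / sigma) ** 2)
    W = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))
    W = W.maximum(W.T)                               # symmetrize; entries stay nonnegative
    L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    return W, L
```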
> Also: Algorithm 1 takes X as an input and never is considered the more general (and interesting) situation where X and G which is not a k-NN graphs over X. Would the method be amenable to such a more general situation ? And, if not, is the performance of the method only a conequence of the approximation of some manifold where the data points lie thanks to the k-NN graph ? Could topologies more varied than ones derived from data on swiss roll be considered ?

GFMMD extends to arbitrary graphs, with the only restriction being nonnegative edge weights. This could encompass abstract metric spaces, in which the chosen distance metric can be used to calculate pairwise distances and thus affinities. But we could also do away with the assumption that our graph lies within a metric space, and begin with the affinities themselves. Then Algorithm 1 is the same, except that W is assumed to be already calculated. It would be accurate to say that the quality of the results relies on the quality of the graph approximation. However, kNN is a widely used and fast method for graph sparsification in Euclidean space, with empirical reliability.

TODO: Unbalanced DEMD (Alex)

> Can you challenge my impression that, in 4.1, the works merely rely on the possibility to approximate the structure (manifolds) thanks to k-NN ? If that so, what is actually the difference as compared to Laplacian embedding methods ?

In Section 4.1, we indeed assume the data lies on a manifold and choose to sparsify the resulting graph via kNN. This is common practice for working with data manifolds in geometric methods. This is, however, a choice we made for the experiment, and not one which is critical to the function of GFMMD. In Laplacian embedding methods, the nodes themselves are being encoded, while we are embedding signals on those nodes. Theorem 7 makes this difference explicit: applying our feature map to Dirac delta functions recovers a Laplacian (spectral) embedding of the nodes.

> I don't understand why most of A.1 is not in the main text, while discussion about disconnected graphs (half or page 5, including Theorem 5 and the corollary) could safely be postponed to appendix, as there is almost no practical usage of working on disconnected graphs without preprocessing by connected components.

Thank you for the helpful suggestion. We reorganized the manuscript accordingly. [[[TODO MAKE SURE WE DO THIS]]]

> Some references are missing: to Hall's spectral drawing ; to references about Chebyshev polynomial approximation for graph filters ; in 4.1 when listing the various methods for comparisons

We added the appropriate references.

> MMD should be "translated" in the introduction

Thank you. We made the change.

> I am not confident that the lifting approach described at the beginning of 4 will not have major impact. Has this point been studied more ?

Lifting is common practice. TODO: SMITA [[[TODO Comment: I think the lifting approach actually does have a major impact... I don't think we can do this, we should discuss further --Alex]]]

> for the grid graph and Fig 1, how are chosen the diffusion times ?

Thank you for pointing this out. The diffusion time is t = 16. We will make note of this in the revision.

> The aspects about Gene Locality is not clear for me (but maybe it's because it's a field that I know less).

To clarify, the purpose of this experiment was to use the distance to the uniform distribution over the cells, in the context of genes, as a measure of how "restricted" a particular gene is to a region of cells. That is, we look for genes which are expressed in distinctive neighborhoods of cells. This is useful for the detection of gene modules.
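To make the gene-locality quantity concrete, here is a small sketch, assuming each gene's expression column is normalized to a distribution over cells and reusing the feature map `Phi` from the earlier sketch; the function name, variable names, and dense computation are illustrative, not the paper's pipeline.

```python
import numpy as np

def gene_locality_scores(Phi, gene_expression):
    """Distance of each gene's normalized expression profile to the uniform
    distribution over cells, measured in the GFMMD feature space.

    Phi             : (k, n_cells) feature map, e.g. the L^{-1/2} rows restricted
                      to nonzero eigenvalues (see the earlier closed-form sketch).
    gene_expression : (n_cells, n_genes) nonnegative expression matrix.
    """
    n_cells = gene_expression.shape[0]
    # Normalize each gene to a probability distribution over cells.
    P = gene_expression / gene_expression.sum(axis=0, keepdims=True)
    uniform = np.full(n_cells, 1.0 / n_cells)
    diffs = P - uniform[:, None]                      # (n_cells, n_genes)
    return np.linalg.norm(Phi @ diffs, axis=0)        # one locality score per gene
```

Genes far from the uniform distribution under this score are the "restricted" genes described above.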
# Response to R3

We thank the reviewer for taking the time to review the paper carefully and point out areas in which it can be improved. The reviewer's main concerns seem to lie in the lack of literature review / comparison to comparable methods, and in whether the care taken around negative signals is necessary. First, we will argue that GFMMD is indeed different from other embedding methods & distances:

> On the one hand, all the literature on spectral embedding methods, or more generally on harmonic analysis on graphs, is almost not mentioned in the article. However, this is exactly what is proposed by the authors to compare signals on graphs seen as probability distributions. Indeed Theorem 2 simply tells us that the chosen metric corresponds exactly to computing the Euclidean distance between the spectral embeddings of the signals. From a shape point of view, many works have already addressed the issue (e.g. [1, 2]) and others have tried to generalize these approaches for the case of signals on different graphs e.g. [3]. In this context, kernel embedding of signals based on the Laplacian is far from being new. From an OT perspective there is also a lot of literature on the exact same problem that is not mentioned e.g. [4-6] (even for signals on different graphs e.g. [7-10]).

We moved the related work section into the main body of the text, and [[TODO add spectral embedding methods]]. In [1,2], although they use spectral properties, the task is very different, since they compare shapes represented as graphs. Similarly, the authors in [3] are interested in graph representation, and they define a general distance between *nodes* on a graph, whereas here we focus on distances between *signals* on a single graph. GFMMD is not defined on the same type of graphs as [4,5]: those are designed for the setting in which edges between vertices correspond to *costs*, while GFMMD assumes *affinities* between vertices. This difference is highlighted in the introduction of the Sobolev transport paper [Le et al. 2022]: "the edge weight in the graph [...] is a cost to move from one node to the other node of that edge [...] rather than an affinity between these edge nodes of the graph used in diffusion earth mover's distance." Similarly, [6] is only defined on trees. TODO: [[Not sure what to say about [4] -- GH]] TODO: Alex / Smita? (I could try, but I am probably much less well versed on the relevant literature)

> I also find that talking about Wasserstein distance to define the GFMMD metric is rather clumsy, even a bit forced.

While we agree that the relationship with the 1-Wasserstein distance is not necessary to construct the definition of GFMMD, we sought to frame the construction of GFMMD through the lens of OT, rather than just presenting an arbitrary MMD.

> More importantly, there is a real confusion between "signals on graphs" and "probability distributions on graphs". Indeed, the way it is written, $\mathrm{GFMMD}(P,Q) = \|\mathbf{L}^{-1/2}(P-Q)\|$ implies that the method cannot take into account signals on the graph. Indeed "P is a probability distribution on the vertices/nodes": thus it is simply a histogram (say the graph have two nodes 1,2 then P is for example (1/2,1/2)). So $\|\mathbf{L}^{-1/2}(P-Q)\|^2$ is simply the $\ell_2$ distance between the histograms reweighted by the Laplacian, and there is no signal here

P and Q are indeed proper probability distributions.
In the discrete case, they represent the distributions defined by the histograms. That is, they can be thought of as the distributions corresponding to random variables that take values on the vertices of G. While one could simply plug P and Q into the formula $\|\mathbf{L}^{-1/2}(P-Q)\|$, in the case of negative distributions this will no longer equal the definition of GFMMD as provided. This comes from a key assumption in Theorems 2 and 3 used to obtain the closed-form solution: we assume, without loss of generality, that our witness function is orthogonal to the kernel of $\mathbf{L}$, since adding any vector in the null space would not affect the value $\mathbb{E}_P(f) - \mathbb{E}_Q(f)$, a fact which hinges on P and Q summing to the same value within each connected component. This may not be true of negative signals. To be clear, we have a correspondence between nonnegative signals on graphs and probability distributions (as you say, histograms) on the vertices, since we can normalize signals. The issue of negative signals is more complicated, so we propose lifting the signal so that it is nonnegative, and then normalizing accordingly. It is assumed that the information we are interested in is the set of relative differences, and thus such transformations are acceptable.

> The first paragraph of this part is indeed quite surprising: I have the impression that the authors confuse "negative signals P and Q" with "negative measures/distributions P and Q". In the first case there is no problem to consider the MMD/Wass between two probability distributions associated to signals which can take negative values (one can easily compute the MMD or Wass distance between $\frac{1}{2}\delta_{-1}+\frac{1}{2}\delta_{-2}$ and $\frac{1}{2}\delta_{1}+\frac{1}{2}\delta_{\frac{1}{2}}$). In this case, why mention the fact that we have to normalize? In the second case, P and Q are indeed no longer probability distributions and the framework/formalism must be changed.

There is no issue with the formalism. The proposed approach is to transform signals into probability distributions, and then take the usual GFMMD between them. Negative signals do indeed pose a challenge, since they cannot be normalized via rescaling into classic probability distributions. Wasserstein distances are traditionally defined as distances between probability distributions over a metric space. What you are suggesting is the Wasserstein distance between $\frac{1}{2}\delta_{-1}+\frac{1}{2}\delta_{-2}$ and $\frac{1}{2}\delta_{1}+\frac{1}{2}\delta_{\frac{1}{2}}$ as distributions over points in $\mathbb{R}^n$. But those underlying distributions are themselves nonnegative: they are mixtures of Dirac distributions.

> I have trouble seeing the point of the grid graph experiment: what is exaclty this "witness function"? The distance D is, I guess the GFMMD, right? What this example shows is that the GFMMD increases when the 2 signal moves away from the 1 signal on the graph, right?

Thank you for pointing out this ambiguity. The witness function is the function $f$ which maximizes $\mathbb{E}_P(f) - \mathbb{E}_Q(f)$. While technically $f$ can differ by anything in the null space of $\mathbf{L}$, we choose the representative which has component 0 in the null space. Indeed, as you say, D is the GFMMD; this is a typo. And yes, that is the desired conclusion of the experiment.
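To make the witness-function answer concrete, here is a minimal sketch of our own, under the assumption that the maximizer of $\mathbb{E}_P(f) - \mathbb{E}_Q(f)$ subject to $f^T \mathbf{L} f \le T$ is proportional to $\mathbf{L}^{+}(P-Q)$ with zero null-space component; the normalization below fixes T = 1 and the function name is hypothetical.

```python
import numpy as np

def witness_function(W, P, Q):
    """Witness function f maximizing E_P(f) - E_Q(f) subject to f^T L f <= 1,
    chosen with zero component in the null space of L."""
    L = np.diag(W.sum(axis=1)) - W
    Lpinv = np.linalg.pinv(L)               # pseudoinverse: the null space maps to zero
    f = Lpinv @ (P - Q)
    norm = np.sqrt((P - Q) @ f)             # = ||L^{-1/2}(P - Q)||, i.e. the GFMMD value
    return f / norm if norm > 0 else f      # rescale so that f^T L f = 1
```

With this scaling, evaluating $\mathbb{E}_P(f) - \mathbb{E}_Q(f)$ at the returned `f` recovers the GFMMD value itself, which is what the grid-graph figure plots alongside the witness function.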
> I am not really convinced by the experiment 4.1 which tries to show that GFMMD is a good measure of similarity between distributions on the graph. The example is quite artificial and the results are really noisy, it is hard to understand why one method is really better than the others (for example Diffusion EMD vs GFMMD)

The main conclusion of the experiment is that GFMMD is a more accurate predictor of manifold distances than the other methods, since its distances between distributions align most closely with the corresponding geodesic distances between centers. The advantage over Diffusion EMD is higher accuracy achieved in much less time. Only kernel MMD has comparable accuracy, but even it has a much higher runtime. The non-manifold distances, such as EMD and Sinkhorn with a Euclidean ground distance, perform far worse in terms of accuracy.

> Overall I find that the Figure 1, 2 and 5 are really hard to read. I think it is important to make the titles and figures larger. The article about the package "python optimal transport" should be quoted instead of just mentioning it in a sentence. Similarly references Chebyshev polynomials applied to filter approximation on graphs are missing (e.g. [11]). The "Hall's Spectral Graph Drawing of the Graph G in k-dimensions" is not properly defined in the article and there are no references, so we don't really understand what Theorem 7 is about. I think the writing of the article could be improved. In particular, the first three pages contain a lot of definitions that I think are not really useful and could be removed to discuss many more important points like the links with the different spectral approaches or to discuss other transport approaches for signals on graphs.

Thank you for the helpful suggestions. We added a citation for Hall's spectral graph drawing theorem. We also moved the related work into the main body of the text and added a discussion of other spectral and OT approaches. [[[TODO: make sure we do the changes.]]]

# Response to R4

We thank the reviewer for the kind words, particularly for noting the benefits GFMMD can achieve over other methods. We will address the individual concerns now:

> The experimental examples are not very significant. This reflects the fact or question how this theory can benefit in wide learning scenarios.

We do feel that the experimental results are significant. We demonstrate the high performance of GFMMD over other existing methods at detecting manifold structure. We then demonstrate its applicability to the analysis of single-cell data through the identification of gene clusters.

> Could the authors elaborate the actually meaning of T in Definition 4. Why is it important?

T is a user parameter. It is not of great importance, as the distances between distributions depend only on its square root; thus, altering it does not affect relative distances. For most applications, T can be set to 1. However, other values of T enjoy nice properties. For instance, if we let T = 2|E|, twice the number of edges, and choose the affinity $1/d(a,b)^2$, Graph Fourier MMD provides an upper bound on the 1-Wasserstein distance (the proof aligns closely with the analysis in Section 3.1).
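The square-root scaling claimed above can be made explicit with a short derivation. This is a sketch consistent with the constrained form $f^T \mathbf{L} f \le T$ quoted earlier in this document, with $\mathbf{L}^{-1/2}$ denoting the pseudoinverse square root; it is not a transcription of the paper's Definition 4. Maximizing $(P-Q)^T f$ over the ellipsoid $f^T \mathbf{L} f \le T$ (with $f$ orthogonal to the null space of $\mathbf{L}$) gives

$$
\mathrm{GFMMD}_T(P,Q) \;=\; \max_{f^\top \mathbf{L} f \le T} \big(\mathbb{E}_P(f) - \mathbb{E}_Q(f)\big) \;=\; \sqrt{T}\,\big\|\mathbf{L}^{-1/2}(P-Q)\big\|_2 \;=\; \sqrt{T}\cdot \mathrm{GFMMD}_1(P,Q),
$$

so changing $T$ rescales every distance by the same factor $\sqrt{T}$ and leaves all relative comparisons unchanged.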
> It is nice to have defined a new feature map by L^{-1/2}. How these mapped feature behavor would be interesting in using these them.

Indeed, a more extensive investigation of the feature map would be interesting. It is worth noting, however, that the first plot in Figure 3.a is a visualization of this feature space (using the first two principal components, since of course we cannot visualize all the dimensions). The clusters shown in this figure were obtained by k-means clustering in the full feature space, demonstrating that the structure of this space can be used to partition genes without the use of other, more sophisticated dimensionality-reduction and clustering algorithms.
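As a rough illustration of the kind of pipeline described above (not the exact code behind Figure 3.a), the following sketch embeds each gene's cell distribution with the feature map, clusters in the full feature space with k-means, and uses PCA only for 2-D plotting; `Phi` refers to the earlier closed-form sketch, and the function name and defaults are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cluster_genes_in_feature_space(Phi, gene_expression, n_clusters=10):
    """Embed each gene's distribution over cells with the GFMMD feature map,
    cluster in the full feature space, and return 2-D PCA coordinates for
    visualization (the clustering itself does not use the PCA)."""
    P = gene_expression / gene_expression.sum(axis=0, keepdims=True)   # genes as distributions
    embeddings = (Phi @ P).T                        # (n_genes, k) feature-space coordinates
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    coords_2d = PCA(n_components=2).fit_transform(embeddings)
    return labels, coords_2d
```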
