# Review - USW - ICML

## Area chair and SAC

We are extremely grateful for your time and dedication in handling our paper. We are sorry to contact you directly. We know that your role is difficult and that this year's ICML policy about the choice of reviewers made things complicated. We would like to politely flag Reviewers t1Dj, 48S7 and B66R who, in our opinion, show at best potential misunderstandings or, at worst, bad faith. In addition to not having engaged with us during the rebuttal period, the reviewers made incorrect claims or judgments about our work, such as:

1. "The contributions might look incremental" (R t1Dj). As detailed in the paper and our rebuttal, our construction is completely original. While we explore a computationally efficient method for a known problem (Unbalanced Optimal Transport) with existing theoretical analysis, our work requires the derivation of novel and non-trivial dual formulations, which are absolutely needed to solve the problems with FW-type algorithms, and is completed with new properties (e.g. sample complexity, weak convergence metrization).
2. "There is no guarantee on the FW convergence" (R B66R). As detailed in the paper and the rebuttal, our setting verifies the assumptions of Theorem 8 in [1], which justifies the convergence of our FW algorithm.
3. "Experiments are weak" (R 48S7). We find this comment to be quite unjustified, especially as our focus is on optimal transport methods. Our aim is to provide a fair comparison between SUOT/USOT and classical OT methods such as SOT or OT [2, 3], and our experiments give reasonable indications that an unbalanced version of sliced Wasserstein is most of the time a good choice for practitioners. While we acknowledge that some experiments are recurrent in OT papers (such as color transfer), some are new (barycenter of geophysical data) and all demonstrate the computational efficiency of our method.

We hope that those reviews will not negatively impact your final assessment, as we believe they do not comply with a fair scientific and critical evaluation of our work. If possible, we kindly ask for an additional emergency review.

Thank you again for your work,
The Authors

[1] Lacoste-Julien, S. and Jaggi, M. On the global linear convergence of Frank-Wolfe optimization variants. Advances in Neural Information Processing Systems, 28, 2015.
[2] Bonneel, N. and Coeurjolly, D. SPOT: sliced partial optimal transport. ACM Transactions on Graphics (TOG), 38(4):1-3, 2019.
[3] Bai, Y., Schmitzer, B., Thorpe, M. and Kolouri, S. Sliced optimal partial transport. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13681-13690, 2023.

## Reviewer B66R

We thank the reviewer for their feedback.

**The paper does not have approximation results for sliced UOT. Approximation results for UOT have been previously established [1], and it would be nice to have such results for sliced UOT as well since UOT can be viewed as an approximation of OT.**

Thank you for the reference on the approximation error, which we will add to the paper; we agree this would indeed be an interesting direction for future work. However, we stress that our motivation is to define sliced variants that inherit the benefits of Unbalanced OT: comparing positive measures of different mass or discarding outliers.
In our work, we define two new losses that inherit the best of sliced and unbalanced optimal transport, prove that they inherit sliced OT's theoretical guarantees, and show empirically that they behave similarly to unbalanced OT. As such, the approximation of OT is not a direct concern in this work.

**There is no guarantee on the FW convergence.**

We would be grateful if the reviewer could provide more explanation about this comment. As stated in Section 4, our setting verifies the assumptions of Theorem 8 in [2], which guarantees the convergence of the method. Moreover, we reported in Figure 5 of Appendix B empirical evidence that the algorithm converges within a few Frank-Wolfe iterations.

<!-- We are happy to move the figure to the main text in the discussion related to FW convergence. We hope that we have answered the concerns about the convergence of the FW algorithm and we kindly ask the reviewer to consider raising their score. -->

[1] Nguyen, Q. M., Nguyen, H. H., Zhou, Y. and Nguyen, L. M. On unbalanced optimal transport: Gradient methods, sparsity and approximation error. Journal of Machine Learning Research, 2023.
[2] Lacoste-Julien, S. and Jaggi, M. On the global linear convergence of Frank-Wolfe optimization variants. Advances in Neural Information Processing Systems, 28, 2015.

## Reviewer t1Dj

We thank the reviewer for their feedback.

**The contributions might look incremental.**

We respectfully disagree with the reviewer regarding the novelty of our contributions. Sliced and unbalanced optimal transport have attracted great interest in the machine learning community, and combining the two efficiently is a natural question that has recently drawn the community's attention [1, 2]. We tackle this question by proposing two new unbalanced OT losses computed in a sliced manner (bringing the cubic computational complexity down to almost linear, $O(n \log n)$), studying their respective theoretical properties, and showing their empirical performance. Our thorough theoretical analysis does not follow immediately from previous works, and we give more details below.

While we rely on important previous works on Unbalanced Optimal Transport, notably to adapt some proof techniques, these adaptations are not straightforward. We first recall that Séjourné et al. (2022) [3] focuses on optimization and introduces a FW approach for UOT between 1D measures. The adaptation of their approach to compute our losses does not follow from a direct, straightforward application of their work. In fact, we had to make several theoretical and methodological contributions which are non-trivial, even in light of [1, 2, 3]; for instance, the duals of SUOT and USOT, which are the core elements of our FW algorithms (Theorem 3.6). Moreover, the literature on slicing unbalanced OT is very scarce: to the best of our knowledge, it was reduced to Bai et al. [2], whose main contribution is mostly methodological and restricted to one specific choice of divergence. We believe that our analysis considerably complements this work; in particular, we studied a more diverse set of properties (sample complexity, weak convergence metrization) for a larger class of divergences, and our USOT loss introduces a novel way of combining slicing with unbalanced OT that has never been studied before.
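For illustration, the $O(n \log n)$ per-direction cost mentioned above comes from the fact that each projected 1D (balanced) problem is solved by sorting. Below is a minimal sketch of such a Monte Carlo sliced estimator for uniform point clouds of equal size; this is not our SUOT/USOT Frank-Wolfe solver, and the function name, sample sizes and number of projections are illustrative placeholders.

```python
import numpy as np

def sliced_w1(x, y, n_projections=500, seed=0):
    """Monte Carlo sliced 1-Wasserstein between two uniform point clouds of equal size."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    assert y.shape == (n, d), "this sketch assumes equal sizes and uniform weights"
    # Sample directions uniformly on the sphere S^{d-1}.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project, sort each 1D point cloud (the O(n log n) step), match quantiles,
    # and average the ground cost |x - y| over samples and directions.
    px, py = x @ theta.T, y @ theta.T
    px.sort(axis=0)
    py.sort(axis=0)
    return np.abs(px - py).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 3))
y = rng.normal(loc=1.0, size=(1000, 3))
print(sliced_w1(x, y))
```

The unbalanced losses themselves require the dual formulations and the Frank-Wolfe scheme described in the paper; the sketch only illustrates where the near-linear per-direction cost comes from.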
**OPTION 1: We kindly ask the reviewer to reconsider their view on the technical contributions of our paper, as we deeply believe they are novel and will be of interest to the optimal transport community. We hope that we have answered the reviewer's questions and ask them to consider raising their score.**

**OPTION 2 (Nico votes for this one :+1:): Based on the above, we would be grateful if the reviewer could reconsider whether our contributions are incremental and, if so, clarify why. We would be happy to answer any remaining questions.**

[1] Bonneel, N. and Coeurjolly, D. SPOT: sliced partial optimal transport. ACM Transactions on Graphics (TOG), 38(4):1-3, 2019.
[2] Bai, Y., Schmitzer, B., Thorpe, M. and Kolouri, S. Sliced optimal partial transport. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13681-13690, 2023.
[3] Séjourné, T., Vialard, F.-X. and Peyré, G. Faster unbalanced optimal transport: Translation invariant Sinkhorn and 1-D Frank-Wolfe. In International Conference on Artificial Intelligence and Statistics, pp. 4995-5021. PMLR, 2022.

## Reviewer 48S7

We thank the reviewer for their comment and for recognizing that our "results only demonstrate the superiority of USOT/SUOT over classic OTs in several applications."

**Authors merely followed existing work to look for applications and results are pretty expected.**

Our work defines two new unbalanced optimal transport losses computed in a sliced manner. Therefore, we focused on experiments that show the benefits of unbalanced SOT over the classical SOT, which is its direct competitor. Namely, we showed through all three experiments that our losses inherit the properties of unbalanced optimal transport (discarding outliers and comparing measures of different mass) while being as fast as sliced optimal transport. Moreover, on the color transfer and barycenter problems, we demonstrated that our method scales well to large distributions, contrary to existing unbalanced OT variants that do not scale with the number of samples. Also note that the barycenter of geophysical data is an application never seen in previous OT papers.

**Results in 5. Experiments are weak. The main problem is that authors only compare their SUOT/USOT results with classic OTs. There's no SOTA methods included in their experiments. We wouldn't know the impact of their results to each of the applications.**

We find this comment to be quite unjustified. Our aim is to provide a fair comparison between SUOT/USOT and classical OT methods such as SOT or OT, and to give reasonable indications that an unbalanced version of sliced Wasserstein is most of the time a good choice for practitioners. We would like to recall that a method does not need to be SOTA to be of interest to the machine learning community, and that combining the popular unbalanced and sliced optimal transport variants is a problem that has recently attracted great interest in the community [1, 2]. Being SOTA on a specific task (e.g. document classification) requires engineering, augmentations, and careful choices of metrics/networks, which is beyond the objectives of this paper (i.e. this is **not** a paper about document classification).

**First, color transfer, to me, is a toy example.**

We respect the reviewer's point of view, but the goal of this experiment is to show that we can compute unbalanced OT in a relatively short time on a medium/large scale problem, with real data. The usual practice is to quantize the images before conducting the color transfer, as the usual methods for this type of task do not scale well with the size of the distributions.
Here, we perform it on full 300x300 images, which demonstrates the scalability of the method, contrary to all unbalanced OT losses. Moreover, we show that by being able to discard outliers, it can solve bleeding problems. Regarding SOTA methods, we are not aware of better non-OT approaches that could solve the color grading problem with only two images given as input.

**Second, although the tropical cyclones data are real but authors augmented a specific sample and created their dataset. That, to me, is still synthetic data. And although they mentioned their algorithm's efficiency on GPU at the end, they didn't compare it with other methods. The majority of potential readers of this paper including me are not experts on climate models, so it's necessary to include currently popular methods to average climate models, which is missing.**

Surprisingly, in the climate model literature, most of the considered methods are based on weighted averaging (the Multi-Model Mean described and used in the paper). From a thematic point of view, we believe that considering OT barycenters could be meaningful, but evaluating the relevance of this idea in the context of climate modeling is far beyond the scope of this paper and, as the reviewer acknowledges, would require a different pool of experts. Our goal here is to demonstrate the feasibility of this computation, which is currently not possible with regular UOT solvers for data of this size.

**Third, re document classification, it's the same problem where authors only compare their methods with classic OTs. The results cannot demonstrate the "effectiveness" of their USOT/SUOT.**

Again, the goal of this experiment is not to show SOTA results on document classification (which is not the claim of the paper), but to show that our methods are beneficial with respect to SOT.

**If I understand correctly, only $C_1(x,y)=|x-y|$ satisfies the assumptions of all the theories in the paper. If so, authors should emphasize it somewhere since their methods cannot extend to other domains, e.g. spherical domain.**

The case $C_1(x,y)=|x-y|$ is the most used in the sliced optimal transport literature. Indeed, for most of the variants which use different projections, such as Generalized SW [3], the projections are done on $\mathbb{R}$. Moreover, for extensions to manifolds, it also covers many cases: if we work on manifolds of non-positive curvature such as hyperbolic spaces or SPD matrices, the projection is actually on $\mathbb{R}$ [4, 5, 6]. This is already illustrated in Appendix C, Figure 9, with an example over the hyperbolic space. Finally, for spaces such as the sphere, while in [7] the projection is done on $S^1$ and thus would not enter the theory derived here, further works have derived variants for which the projections are actually done on $\mathbb{R}$ or on $[0,1]$ [8, 9, 10, 11]. Nonetheless, note that the dual theory remains valid on $S^1$, and thus we could also compute the unbalanced version of the Spherical Sliced-Wasserstein of [7] (which is also discussed in the next point). Extending to other domains is discussed briefly in the paper in lines 302-310.

**I suspect that was why they had to warp the climate models back to the planar domain.**

For the climate models, we considered the two versions: the Spherical SW of [7] and the classical sliced Wasserstein in the planar domain. It turned out that the results were qualitatively mostly similar because the considered geographical area is already very close to a plane, due to its limited extent.
We only presented the results on the planar projection to avoid a lengthy description of the spherical sliced version in the paper, which in our opinion was unnecessary for this work.

We kindly ask the reviewer to reconsider their view on the scope of our manuscript and to consider raising their score, as we think our new losses will be of interest to the optimal transport community.

[1] Bonneel, N. and Coeurjolly, D. SPOT: sliced partial optimal transport. ACM Transactions on Graphics (TOG), 38(4):1-3, 2019.
[2] Bai, Y., Schmitzer, B., Thorpe, M. and Kolouri, S. Sliced optimal partial transport. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13681-13690, 2023.
[3] Kolouri, S., Nadjahi, K., Simsekli, U., Badeau, R. and Rohde, G. Generalized sliced Wasserstein distances. Advances in Neural Information Processing Systems, 32, 2019.
[4] Bonet, C., Chapel, L., Drumetz, L. and Courty, N. Hyperbolic sliced-Wasserstein via geodesic and horospherical projections. In Topological, Algebraic and Geometric Learning Workshops, pp. 334-370. PMLR, 2023.
[5] Bonet, C., Malézieux, B., Rakotomamonjy, A., Drumetz, L., Moreau, T., Kowalski, M. and Courty, N. Sliced-Wasserstein on symmetric positive definite matrices for M/EEG signals. In International Conference on Machine Learning, pp. 2777-2805. PMLR, 2023.
[6] Bonet, C., Drumetz, L. and Courty, N. Sliced-Wasserstein distances and flows on Cartan-Hadamard manifolds. arXiv preprint arXiv:2403.06560, 2024.
[7] Bonet, C., Berg, P., Courty, N., Septier, F., Drumetz, L. and Pham, M. T. Spherical Sliced-Wasserstein. International Conference on Learning Representations, 2023.
[8] Rustamov, R. M. and Majumdar, S. Intrinsic sliced Wasserstein distances for comparing collections of probability distributions on manifolds and graphs. In International Conference on Machine Learning. PMLR, 2023.
[9] Quellmalz, M., Beinert, R. and Steidl, G. Sliced optimal transport on the sphere. Inverse Problems, 39(10):105005, 2023.
[10] Quellmalz, M., Buecher, L. and Steidl, G. Parallelly sliced optimal transport on spheres and on the rotation group. arXiv preprint arXiv:2401.16896, 2024.
[11] Tran, H., Bai, Y., Kothapalli, A., Shahbazi, A., Liu, X., Martin, R. D. and Kolouri, S. Stereographic spherical sliced Wasserstein distances. arXiv preprint arXiv:2402.02345, 2024.

## Reviewer oZqa

We thank the reviewer for their reading and comments on our paper.

**For instance, sometimes additional assumptions on the penalties (such as $D_{\phi_1}=D_{\phi_2}$ is either KL or TV), and it is unclear if these assumptions are necessary or potentially can be relaxed to allow for arbitrary $\phi$-divergences.**

In Theorem 3.3, we can relax the condition $D_{\varphi} =$ KL or TV by the following Lipschitz condition on $(\varphi^\circ \circ f, \varphi^\circ \circ g)$: for $(x,y) \in \mathbb{R}^d \times \mathbb{R}^d$, $|\varphi^\circ \circ f(x) - \varphi^\circ \circ f(y)| \leq L_1 \| x - y \|$ and $|\varphi^\circ \circ g(x) - \varphi^\circ \circ g(y)| \leq L_2 \| x - y \|$, where the constants can depend on the radius $R$ and the measure masses. For KL, see eqs. (41) and (45); for TV, we apply Theorem 3 of Nadjahi et al. (2020) [1], whose proof relies on that Lipschitz condition as well. The assumptions in Theorem 3.4 can be relaxed in the same way as for Theorem 3.3, since Theorem 3.4 is a corollary of Theorem 3.3.

Regarding Proposition 3.2(ii): extending this result to more general divergences remains an open, highly non-trivial question. Liero et al. (2018) [2] proved that UOT satisfies the triangle inequality using a conic formulation, which cannot be derived for USOT due to slicing. For TV, we managed to show that the triangle inequality holds using an alternative strategy, based on an equivalence with the bounded Lipschitz dual norm (which is specific to TV, see Lemma A.9).

**I would, however, ideally like to see a more detailed discussion/characterization of the error introduced by approximating $\sigma$ via $\sigma_K=K^{-1}\sum_{k=1}^K \delta_{\theta_k}$. An expected value is estimated for sliced OT and SUOT, so using MC approximation schemes make sense. For USOT, however, the integral approximation is within an optimization problem, and how this approximation impacts the generalization needs to be clarified.**

<!-- This is an interesting yet very challenging question: since this boils down to a generalization problem, a natural strategy would be to study the complexity of the "hypothesis class" of our problem. However, this is highly difficult given the specific structure of the unbalanced OT problem and its potential functions (see Theorem 3.6). We leave this question for future work and will add in our paper the following numerical simulation, which empirically explores this question: https://ibb.co/7YRJqCm. In this plot, we compute the approximation error due to the number of slices, by comparing the "groundtruth" (USOT with 10000 slices) and USOT based on a lower number of projections. -->

We acknowledge that the paper is missing a more detailed discussion of this particular point. As the reviewer noticed, minimizing a functional defined as an integral over the sphere requires a specific version of the Frank-Wolfe algorithm, called 'Stochastic Frank-Wolfe' [3], which computes the linearization over a finite sub-sampling of directions on the sphere. Reference [3] provides a convergence analysis as well as variance reduction strategies that we did not consider in this work. We will definitely complete the justification of our algorithm in light of this work.

<!-- **[Kimia: not sure how to reply to this!]** *Fair point* *Add a projection-complexity experiment?* -->
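To make the algorithmic pattern explicit, here is a generic toy instance of the stochastic Frank-Wolfe scheme in the spirit of [3]: at each iteration the linearization is built from a fresh finite subsample of directions, followed by a convex-combination update. This is only an illustration of the pattern invoked above, not our USOT solver (whose linear oracle involves the dual potentials of Theorem 3.6); the toy objective, feasible set and all values below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n_iter = 5, 64, 200

def sample_directions(k, dim):
    """Draw k directions uniformly on the sphere S^{dim-1}."""
    theta = rng.standard_normal((k, dim))
    return theta / np.linalg.norm(theta, axis=1, keepdims=True)

def stochastic_gradient(x, thetas):
    """Monte Carlo gradient of the toy objective F(x) = E_theta[(theta @ x)^2]."""
    return 2.0 * thetas.T @ (thetas @ x) / len(thetas)

# Feasible set: the probability simplex, whose linear minimization oracle is a vertex.
x = np.zeros(d)
x[0] = 1.0
for t in range(n_iter):
    g = stochastic_gradient(x, sample_directions(K, d))  # linearization on K sampled slices
    s = np.zeros(d)
    s[np.argmin(g)] = 1.0                                # LMO: best vertex of the simplex
    gamma = 2.0 / (t + 2.0)                              # standard Frank-Wolfe step size
    x = (1.0 - gamma) * x + gamma * s                    # convex-combination update
print(x)
```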
**Moreover, finding relevant slices in high dimensions is complicated (see, e.g., page 4 Kolouri 2019). Is this anything you considered?**

We focused on the uniform distribution over $S^{d-1}$ as the slicing distribution. However, the dual theory works for any empirical distribution, so we could use other slicing distributions. For instance, we could use quasi-Monte Carlo samples [4] to improve the approximation of the integral. Another interesting direction would be to use control variates as in [5, 6, 7]. However, we did not consider these options and leave them as future work.

**For the experimental setup, some details are unspecified. Which OT solver is used for the Document classification? If you use Sinkhorn-based methods, which regularization parameter did you choose, and how did you select it?**

The OT solver is the one from the POT library. The Sinkhorn-based method was used to approximate the UOT problem, and the regularization parameter was chosen through a grid search. More details on this experiment are provided in Appendix C.1.2.
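For completeness, below is a hedged sketch of the kind of POT-based entropic UOT call referred to above; the exact cost, regularization and marginal-relaxation settings used in our experiments are the ones documented in Appendix C.1.2, and the numbers below are placeholders.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
xs = rng.normal(size=(200, 2))
xt = rng.normal(loc=1.0, size=(300, 2))
a, b = ot.unif(len(xs)), ot.unif(len(xt))  # uniform weights on each point cloud
M = ot.dist(xs, xt)                        # cost matrix (squared Euclidean by default)

reg = 0.05   # entropic regularization (placeholder; chosen by grid search in the experiments)
reg_m = 1.0  # marginal relaxation strength (placeholder value)
plan = ot.unbalanced.sinkhorn_unbalanced(a, b, M, reg, reg_m)
print(plan.shape, plan.sum())              # total transported mass may differ from 1
```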
**How many slices are needed to obtain satisfactory accuracy? How does this depend on the dimensions of the sample space?**

We provided an ablation study of the number of slices for the document classification task in the appendix (Figure 7). For the reported results, we generally used 500 projections.

**Besides how USOT and SUOT handle outliers differently, is it possible to say something general about when either is preferred?**

Generally, we are interested in handling the outliers of the original distributions. Thus, we would argue that USOT should be preferred. Moreover, we observed that USOT consistently outperformed SUOT, for instance on the document classification task, which reinforces this intuition.

**Are there any fundamental aspects that limit the theoretical results to be extended to the general setting when any metric is used, and arbitrary divergences are used as penalties?**

We refer to our responses above on how to relax the assumptions in Proposition 3.2 and Theorems 3.3 and 3.4. The rest of our theoretical results assume that the cost function and $\varphi$ satisfy mild conditions which are very common in the unbalanced OT literature (e.g., Proposition A.1).

**For Figure 1, should $\rho_1=0.01$ for SUOT. If so, why?**

To deal efficiently with the outliers in the first marginal, the parameter $\rho_1$ needs to be tuned. For $\rho_1=1$, SUOT would not discard outliers on the marginals of the projections. Note that the scales of USOT and SUOT are different, and thus $\rho_1=1$ was small enough for USOT.

**In Theorem A.4, I believe it should be $\alpha_\lambda * \psi_\lambda$ and not $\alpha * \phi_\lambda$.**

Thank you for spotting this: we will fix the typo.

[1] Nadjahi, K., Durmus, A., Chizat, L., Kolouri, S., Shahrampour, S. and Simsekli, U. Statistical and topological properties of sliced probability divergences. Advances in Neural Information Processing Systems, 33:20802-20812, 2020.
[2] Liero, M., Mielke, A. and Savaré, G. Optimal entropy-transport problems and a new Hellinger–Kantorovich distance between positive measures. Inventiones mathematicae, 211(3):969-1117, 2018.
[3] Reddi, S. J., Sra, S., Póczos, B. and Smola, A. Stochastic Frank-Wolfe methods for nonconvex optimization. In 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2016.
[4] Nguyen, K., Bariletto, N. and Ho, N. Quasi-Monte Carlo for 3D sliced Wasserstein. arXiv preprint arXiv:2309.11713, 2023.
[5] Nguyen, K. and Ho, N. Control variate sliced Wasserstein estimators. arXiv preprint arXiv:2305.00402, 2023.
[6] Leluc, R., Portier, F., Segers, J. and Zhuman, A. Speeding up Monte Carlo integration: Control neighbors for optimal convergence. arXiv preprint arXiv:2305.06151, 2023.
[7] Leluc, R., Dieuleveut, A., Portier, F., Segers, J. and Zhuman, A. Sliced-Wasserstein estimation with spherical harmonics as control variates. arXiv preprint arXiv:2402.01493, 2024.
