# NeurIPS Rebuttal

###### tags: `Neurips 22 Rebuttal - Transfer Fairness`

## Response to Reviewer 15g2's new comments

Thank you for the valuable points. We address them as follows and will add them to the main paper.

> Concerns about the variance of group accuracy.

* We agree that we should introduce the variance of group accuracy at the beginning and emphasize its importance. We will revise the paper when we have more pages.
* The variance of group accuracy helps to avoid trivial fairness. We therefore suggest first comparing the variance of group accuracy and then comparing equalized odds. In Table 1, CFair+FixMatch has a very high variance of group accuracy compared with ours, so its equalized odds is not comparable with ours since it reflects trivial fairness. We will add this point to the main paper.
* Our practical algorithm is based on our theoretical analysis of fairness and accuracy under distribution shifts (one important contribution of this paper). The bounds on equalized odds (Theorem 4.1) and group accuracy variance (Appendix C) both suggest balancing the consistency loss across groups as well as minimizing the balanced consistency loss. To achieve this goal, we propose a dynamic re-weighting process. **We have compared with a static re-weighting process** in Table 3, which shows the better fairness of our method. A DRO-based method might be another way to balance the consistency loss, but it would need considerable additional effort to fit into our framework. We would like to emphasize that our algorithm is principle-guided, and our algorithmic contribution is not only the dynamic re-weighting process. Even though there might be other solutions for balancing, we are the first to propose balancing and minimizing the consistency loss to transfer accuracy and fairness. If we consider DRO-based methods and re-weighting methods outside our framework, we are not aware of any methods that can transfer fairness under domain shifts.

> Questions about the transformation.

* Yes, Contrast, Color, and Solarize are all excluded in Table 1. We will add those details in the main paper. Thanks again for the great suggestion of adding Table 9.

Thank you for your time and effort in discussing with us. We hope our additional response has addressed your concerns. If so, could you please consider raising the score? We are happy to answer any follow-up questions.

Authors

## General Response

$\newcommand{\Ra}{\textcolor{purple}{FM6i}}$ $\newcommand{\Rb}{\textcolor{red}{bbdE}}$ $\newcommand{\Rc}{\textcolor{blue}{15g2}}$

We thank all the reviewers for their valuable feedback and insightful questions! We are particularly encouraged that they consider our research problem important (FM6i, bbdE, 15g2), our method novel (FM6i, 15g2), and the paper well-written (FM6i, 15g2). We address individual questions in separate responses. Here, we address one common question and outline the updates to the revised submission based on the reviews.

> **Q**: Why is the variance of group accuracy an important fairness metric?

**Answer to Q**: In this paper, besides equalized odds, we also use the variance of group accuracy as a fairness metric. The variance of group accuracy is crucial for the following reasons.

* First of all, a small group accuracy variance indicates that the model performs similarly on different groups (where a group is defined as the collection of examples that share the same class and sensitive attribute). Similar accuracy indicates that the model treats all the groups the same, which means the model is fair.
* Secondly, by looking at equalized odds and group accuracy variance together, we can avoid trivial fairness. Fairness that is not grounded in accuracy is meaningless: in the extreme case where the model has a constant output, the equalized odds gap is zero, but this is a trivial fairness. By measuring group accuracy variance, we can easily recognize trivial fairness.
* Therefore, in our paper, we evaluate equalized odds and group accuracy variance together. Note that, besides equalized odds, we also bound the group accuracy variance (at the end of Appendix C). Both bounds suggest balancing while minimizing the consistency loss, which is the design principle of our algorithm. (A sketch of how the two metrics can be computed is given below.)
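
To make the two metrics concrete, here is a minimal sketch, assuming binary labels, hard predictions, and the common "largest TPR/FPR gap" formulation of $\Delta_{odds}$; the function names and exact formulation are our own illustrative choices, not the paper's evaluation code.

```python
# Illustrative computation of the equalized-odds gap and the group accuracy
# variance, assuming binary labels y, hard predictions yhat, sensitive attribute a.
import numpy as np

def equalized_odds_gap(y, yhat, a):
    """Largest gap in P(yhat=1 | y, a) across sensitive-attribute values."""
    gaps = []
    for y_val in (0, 1):                      # condition on the true label (TPR / FPR)
        rates = []
        for a_val in np.unique(a):
            mask = (y == y_val) & (a == a_val)
            if mask.sum() > 0:
                rates.append((yhat[mask] == 1).mean())
        gaps.append(max(rates) - min(rates))
    return max(gaps)

def group_accuracy_variance(y, yhat, a):
    """Variance of accuracy over groups; a group is one (class, attribute) pair."""
    accs = []
    for y_val in np.unique(y):
        for a_val in np.unique(a):
            mask = (y == y_val) & (a == a_val)
            if mask.sum() > 0:
                accs.append((yhat[mask] == y[mask]).mean())
    return np.var(accs)

# A constant predictor has zero equalized-odds gap but large group accuracy
# variance -- exactly the "trivial fairness" case described above.
y    = np.array([0, 0, 1, 1, 0, 1])
a    = np.array([0, 1, 0, 1, 0, 1])
yhat = np.zeros_like(y)                       # always predict class 0
print(equalized_odds_gap(y, yhat, a))         # 0.0
print(group_accuracy_variance(y, yhat, a))    # 0.25 (the y=1 groups get 0 accuracy)
```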

**Paper Updates**: Thanks to the reviewers' suggestions, we have added additional experimental results as follows. (We will move important results to the main paper when we have more pages.)

* **[Figure 8 in Appendix E.2]** We plot the Pareto frontiers of our method and the baselines. Our method attains better Pareto frontiers than the baselines, suggesting that it is both more accurate and more fair. (FM6i, 15g2)
* **[Table 9 in Appendix E.3]** We investigate the effect of using different transformations in our method by evaluating 14 transformations. (15g2)
* **[Figure 9 in Appendix E.4]** We compare our method with another baseline, Laftr+FixMatch, on the NewAdult experiment. Results show that our method outperforms it, decreasing unfairness in almost all the US states. (15g2)

We greatly appreciate the time and effort of all reviewers. We hope our responses and the paper updates address all the questions and concerns. Please let us know if there are further questions.

Authors

## To Reviewer FM6i

Thank you for the valuable feedback. We are particularly encouraged that you find our work novel, effective, and well-written. Below, we address your questions in detail.

---

> **W1**: There is a gap between theory (Theorem 4.1) and proposed algorithms. Theorem 4.1 implies that minimizing the worst-group consistency loss is beneficial to transfer fairness. However, the proposed regularization employs a balanced consistency loss in practice. It is still unclear, empirically or theoretically, why the worst-group consistency loss cannot work well in practice. Additionally, the dynamic weight seems to be important in the proposed regularization, what's the rationale to design such dynamic weight?

**Answer to W1**: From Theorem 4.1, we can see that the unfairness is upper bounded by the worst-group consistency loss, and the error is upper bounded by the all-groups consistency loss. Therefore, to train an accurate and fair model, we need to minimize the all-groups consistency loss and the worst-group consistency loss at the same time. This requirement naturally leads to an algorithm that balances the consistency loss across groups while minimizing the balanced consistency loss. By doing this, we can find a model that has a small consistency loss for every group, so as to reduce both the error and the unfairness.

The rationale for designing dynamic weights is to achieve our goal of minimizing the balanced group consistency loss. One straightforward solution is to measure the consistency loss in each group and add the losses up with the same weight/coefficient. However, besides the loss value, there is another factor we need to consider. Since the consistency loss is measured with pseudolabels and we only consider confident examples when measuring it, the number of examples used to measure the consistency loss can be very different across groups, and we need to take this into account. For example, suppose two groups have the same consistency loss, but for one group the loss is measured on many examples (i.e., the model is good at this group) while for the other group it is measured on just a few examples (i.e., the model is not good at this group). In this case, we should pay more attention to the latter group by giving more weight to its consistency loss. To achieve this, we weight each group inversely with its number of confident pseudolabels, as in the sketch below.
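
To make the re-weighting rationale concrete, here is a minimal PyTorch-style sketch of a confidence-filtered, group-balanced consistency term with inverse-count weights. The names (`fair_consistency_loss`, `conf_threshold`) and the exact weighting and normalization are illustrative assumptions, not the paper's implementation.

```python
# Group-balanced consistency loss with dynamic inverse-count weights (sketch).
import torch
import torch.nn.functional as F

def fair_consistency_loss(logits_weak, logits_strong, group_ids, num_groups,
                          conf_threshold=0.95):
    """Consistency loss on confidently pseudolabeled examples, balanced over groups."""
    probs = torch.softmax(logits_weak.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    confident = conf >= conf_threshold            # keep only confident pseudolabels

    # Per-example consistency loss: the strong view should match the pseudolabel.
    per_example = F.cross_entropy(logits_strong, pseudo, reduction="none")

    group_losses, group_weights = [], []
    for g in range(num_groups):
        mask = confident & (group_ids == g)
        n_g = mask.sum()
        if n_g == 0:
            continue
        group_losses.append(per_example[mask].mean())
        group_weights.append(1.0 / n_g.float())   # fewer confident examples => larger weight

    if not group_losses:
        return logits_strong.new_zeros(())
    weights = torch.stack(group_weights)
    weights = weights / weights.sum()             # normalize the dynamic weights
    return (weights * torch.stack(group_losses)).sum()
```

The weak-view logits are detached, so gradients flow only through the strong-view predictions, as in standard FixMatch-style consistency regularization.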

Thank you for these two questions; we will revise our paper to make the logic and rationale clearer.

---

> **W2**: Marginal gain for the proposed algorithm. In figure 3 and table 1, the accuracy-fairness tradeoff performance seems to be marginal compared with baseline Laftr+FixMatch. Current results only provide one single accuracy and EO performance, it is hard to identify the superiority of the proposed method. A similar issue exists in Ablation study part. It would be better to plot the Pareto frontier of different methods with variable hyperparameters.

**Answer to W2**: We respectfully disagree with the marginal-gain statement. Compared with Laftr+FixMatch, our method is much fairer. In Figure 3, compared with Laftr+FixMatch, our method reduces unfairness in the target domain by about 30% under DShift and 55% under Hshift. In Table 1, compared with Laftr+FixMatch, **our method reduces unfairness (variance of group accuracy) in the target domain by about 75%**. Please see the general response for why the variance of group accuracy is crucial.

Thanks for the great suggestion of plotting the Pareto frontier. We've added the Pareto frontiers of Laftr, Laftr+FixMatch, and our method on the UTKFace-FairFace experiment in Appendix E.2. **Our method achieves the best Pareto frontier**, suggesting that it outperforms the others with a better trade-off between accuracy and fairness. We will add this figure to the main paper.

---

We again thank you for reviewing our paper and providing suggestions. We hope our answers have addressed all your questions and concerns. Please let us know if there are more questions.

Authors

## To Reviewer bbdE

Thank you for your valuable feedback. Below, we address the questions and concerns in detail. We hope our answers can address your concerns.

---

> **W1**: The authors select one measure of fairness equal opportunity odds, that is very relevant and widely used but without any social/redistributive justice justification.

**Answer to W1**: Thanks for pointing this out. The selection of a fairness metric is a common problem in this area and usually requires expert knowledge of the particular application. Without such expert knowledge, it is reasonable to use a general fairness metric that is applicable to many real applications. As the reviewer mentioned, equalized odds is one of the most widely used metrics for classification tasks in existing papers. It encourages the model to achieve similar classification performance across groups, which has a social justice justification in many classification applications. That is why we use it in our paper. Additionally, our theoretical analysis can be extended to other fairness metrics that are based on differences in group accuracy.
For example, we also bound the variance of group accuracy (see Appendix C) with the variance of the group consistency loss. This bound also suggests balancing the consistency loss as well as minimizing the balanced consistency loss, which is what our algorithm does. We will spell out the motivation for using equalized odds and the variance of group accuracy in our paper.

---

> **W2**: The paper is in general hard to understand. Some of the mathematical formulations are not the traditional ones and they use overly complicated notations that difficult the reading of the paper.

**Answer to W2**: We are sorry that the reviewer finds our notation confusing. Since we are dealing with the fairness-under-distribution-shift problem, where there are different domains, classes, groups, and distributions, the notation is inevitably more complicated than in papers that only consider fairness or distribution shifts. Following the notation used in [1] and [2], we have tried our best to simplify it. In fact, two other reviewers think our paper is clearly written. But we agree that we can further improve the clarity of the paper. We will add a notation table to help readers follow our notation more easily.

> **W3**: The paper is also not self-contained and over relies on the appendix and external work that is not properly introduced.

**Answer to W3**: We would like to respectfully argue that the main paper is self-contained. The technique we use in this paper is indeed based on recent progress in self-training for tackling distribution shifts [1, 2]; we extend their theory by taking fairness into consideration. Due to the page limit, we defer the proofs, some discussions, and experimental details to the appendix. However, we made sure the main paper is self-contained. Lines 42-49 introduce the main idea of [1] and [2], including the expansion assumption and why encouraging consistency can tackle domain shift. Lines 50-57 introduce the differences between our work and [1, 2], and explain how we bound the unfairness. Lines 58-66 introduce our practical algorithm and its major novelty. Sections 2, 3, and 4 contain all the notations, definitions, assumptions, and theoretical results. Section 5 introduces the framework of our algorithm and its novel part, fair consistency regularization. We do not give a detailed introduction to Laftr and FixMatch, since they are two well-known works. Instead, we introduce their intuitions in the main paper and defer their loss functions to Appendix D4 due to the page limit. We will provide more details of these prior works if the page limit permits. Therefore, we believe our paper is self-contained. We would appreciate it if the reviewer could point out the missing pieces; it would be very helpful for improving our presentation.

---

> **W4**: Assumption #1 This assumption makes the problem solvable but in real cases, the assumption does not hold, as one of the most common and challenging types of shift is concept shift, where the data generation process actually changes. This happens more often with tabular data, and distinguishing covariate shift from concept shift is not feasible in many situations. For the folktables data, one could arguably say that predicting US income by training in CA and evaluating has low fairness variations. If the problem has a higher distribution shift e.g. by changing the prediction task to "ACSTravelTime: predict whether an individual has a commute to work that is longer than 20 minutes" training in Hawaii and predicting in the rest of the states, assumption #1 will not hold. How would the proposed method work in this case? Would it flag that is not properly working? Will achieve better results than a baseline.

**Answer to W4**: Assumption 1 is very general and holds in many real cases, even in the ACSTravelTime problem. In Assumption 1, we assume that the underlying generative model is fixed, i.e., $P_S(X|Y^{1:K}=y^{1:K})=P_T(X|Y^{1:K}=y^{1:K})$, while the marginal distribution of the factors varies. Here, besides the label (e.g., whether the commute time is longer than 20 minutes), the underlying factors also include other factors such as the location, the economic environment, the culture, and so on. All of these factors together determine the observed data points. If one considers an incomplete set of latent factors (e.g., only the commute time, in an extreme case), one can say that the data generation process changes across cities. However, as long as we consider a complete set of latent factors, the data generation process can be made the same in the two domains. Therefore, Assumption 1 still holds for the ACSTravelTime problem. Additionally, the same data-generation-process assumption is widely used in other papers that study distribution shifts, such as [3].

Assumption 1 does not necessarily exclude concept shift. Let us use $Y^1$ to denote the label and $Y^2,\dots,Y^K$ to denote the other factors (we call them nuisance factors); for simplicity, we do not consider the sensitive attribute as a factor here. Since $P(X|Y^1)=\sum_{Y^2,\dots,Y^K} P(X|Y^1,\dots,Y^K)\,P(Y^2,\dots,Y^K|Y^1)$, under Assumption 1, if the shift is caused by a marginal distribution shift of some nuisance factors (e.g., location), then $P_S(X|Y^1,\dots,Y^K)=P_T(X|Y^1,\dots,Y^K)$ and $P_S(Y^2,\dots,Y^K|Y^1)\neq P_T(Y^2,\dots,Y^K|Y^1)$, resulting in $P_S(X|Y^1)\neq P_T(X|Y^1)$. That is why the data from two cities can look so different even though they have the same label and share the same data generation process. $P_S(X|Y^1)\neq P_T(X|Y^1)$ can further be categorized into subpopulation shift and domain shift, as introduced in Section 3. Thus, Assumption 1 is very general. In fact, $P_S(Y^1|X)\neq P_T(Y^1|X)$, which is usually called concept shift, can also happen under Assumption 1. We would like to clarify that, since we also care about accuracy in both domains and it is not reasonable to use one model for two domains when $P_S(Y^1|X)\neq P_T(Y^1|X)$, we additionally assume there is no concept shift. We will make this clear in our paper.

How does our method work? We transfer fairness by encouraging the model to be fair under any nuisance factor values via the consistency loss. By doing so, when the marginal distribution of the nuisance factors changes, we can still maintain fairness. We apply transformations to $X$ to simulate changes of the nuisance factor values. For images, we know many nuisance factors (e.g., lighting, angle) and we have effective transformation functions. However, this becomes hard for tabular data: the major challenge for transferring fairness on tabular data is obtaining transformation functions that can simulate, for example, a change of location or economic environment. (A minimal training-step sketch of this transformation-based consistency idea is given below.)
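
For completeness, here is an illustrative training-step sketch showing where the transformation-based consistency term sits next to the supervised fair loss on the source domain. Everything here (the `train_step` signature, `weak_aug`/`strong_aug`, how group ids are obtained for unlabeled target examples, and `lambda_cons`) is a placeholder assumption, not the paper's code.

```python
def train_step(model, optimizer, source_batch, target_batch,
               supervised_fair_loss, fair_consistency_loss,
               weak_aug, strong_aug, num_groups, lambda_cons=1.0):
    """One update: supervised fair loss (source) + group-balanced consistency (target)."""
    x_s, y_s, a_s = source_batch     # labeled source examples with sensitive attribute
    x_t, group_t = target_batch      # unlabeled target examples + group ids (construction left abstract)

    # Supervised (fair) objective on the source domain, e.g. a Laftr-style loss.
    loss_sup = supervised_fair_loss(model(x_s), y_s, a_s)

    # Two views of the same target batch; the transformations are assumed to
    # perturb nuisance factors only, keeping class and sensitive attribute fixed.
    logits_weak = model(weak_aug(x_t))
    logits_strong = model(strong_aug(x_t))
    loss_cons = fair_consistency_loss(logits_weak, logits_strong, group_t, num_groups)

    loss = loss_sup + lambda_cons * loss_cons
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The choice of `weak_aug`/`strong_aug` is where the intra-group expansion requirement enters: the transformations should perturb nuisance factors without changing the class or the sensitive attribute.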

---

> **Q1**: Other authors have clearly limited what relies upon to be possible or not to predict model performance degradation under distribution shift [1] Do these limitations still apply when evaluating for transferring fairness? This limitation is eluded by stating assumption #1

**Answer to Q1**: As discussed above, Assumption 1 is very general, and under it, transferring fairness is possible. The true limitation lies in the difficulty of finding transformation functions that simulate changes of the nuisance factor values. We have discussed this in Appendix F.

---

Thanks again for reviewing our paper and raising valuable questions. We hope our answers have addressed all the questions and concerns. If so, we would greatly appreciate it if you could consider raising the score. We are happy to answer any follow-up questions.

Authors

---

References:

[1] Wei, Colin, et al. "Theoretical analysis of self-training with deep networks on unlabeled data." ICLR 2021.
[2] Cai, Tianle, et al. "A theory of label propagation for subpopulation shift." ICML 2021.
[3] Wiles, Olivia, et al. "A fine-grained analysis on distribution shift." ICLR 2022.

## To Reviewer 15g2

Thank you for reviewing our paper and providing valuable feedback. We are glad that you find our paper novel and well-written. We address your questions below in detail.

---

> **W1**: The proposed algorithm seems to only work well on synthetic data (Figure 3). When transferred to real datasets, the experiments results are rather weak.
>
> **W1.1**: from UTKFace to FairFace (Table 1), the proposed approach (either with Laftr or CFair) does not achieve the best fairness in terms of equalized odds. For example, Laftr+FixMatch / CFair+FixMatch both achieve good accuracy and lower equalized odds than the proposed approaches. The group accuracy variance does decrease for the proposed approach, but why is that important compared to the fairness metrics? In addition, all the numbers have high standard deviations, are there any significance test done to show the superiority of some of the methods?

**Answer to W1.1**: As explained in the general response, the variance of group accuracy is very important. By looking at $\Delta_{odds}$ and $V_{acc}$ together, we can **avoid trivial fairness** (i.e., the model tending to predict a constant output, which results in low equalized odds). In the UTKFace-FairFace experiment, although Laftr+FixMatch also achieves low equalized odds, such fairness is trivial and undesirable: many examples from class 1 are classified as class 0, as shown in Figure 5 (SCR uses Laftr+FixMatch, and FCR is ours). By measuring group accuracy variance, we can easily recognize trivial fairness. A small group accuracy variance indicates that the model performs similarly for examples from different classes with different sensitive attributes. Compared with Laftr+FixMatch, **our method achieves similar equalized odds but reduces the group accuracy variance by about 75%**. As shown in Figure 5, our method (FCR) achieves similar accuracy for the four groups, so its fairness is non-trivial.

We have also evaluated the Pareto frontiers of our method and the baselines. Figure 8 in Appendix E.2 shows that our method achieves the best Pareto frontier, suggesting its superiority in the trade-off between accuracy and fairness. Compared with the baselines, our method has a smaller standard deviation. The high deviation of some of the baselines, such as Laftr+DANN and Laftr+MMD, is indeed what we observed, and it also appears in the synthetic experiment.
We suspect that this is because domain adaptation methods are less effective and stable for transferring both accuracy and fairness. Thanks for the suggestion; we will add a significance test in our next version.

> **W1.2**: on the tabular dataset, the only results shown are in Figure 4. Other than LAFTR, no other baselines are shown, how do we know if the proposed methods work better than other baselines? Even just compared to LAFTR, from the figures (a) and (b) it is very hard to tell if the proposal made the results more fair (as most data points are almost in the same range, and there is no one-to-one correspondence). For Figure 4 (c), there is no significance test either, so it is unclear if the results are significant.

**Answer to W1.2**: We have added the results of another baseline (Laftr+FixMatch) to Appendix E.4. Comparing Figure 4 (*a*) and (*b*) is indeed hard since we evaluate the model on all the US states, so we plot Figure 4 (*c*) to show the improvement. From Figure 4 (*c*) and Figure 9 in Appendix E.4, we can see that Laftr+FixMatch increases the unfairness in more than half of the states, while our method decreases the unfairness in almost all the states, suggesting the superiority of our method in transferring fairness. We acknowledged in our paper that our method is not as powerful in this experiment as it is for images. That is because self-training methods for tackling distribution shifts rely heavily on the effectiveness of the transformation functions (or data augmentations). For tabular data, transformation functions are very limited and under-explored. This is the limitation of our method, as discussed in Appendix F. Finding good transformation functions for tabular data and using them to transfer accuracy and fairness are two orthogonal problems; we focus on the second one. So we believe our method will work better if future work can improve transformation functions for tabular data.

---

> **W2**: The transformation/augmentation is an important piece in the overall framework, but it is not discussed in detail at all in this paper. The authors only briefly mentioned what transformations are excluded in Section 6.3. As the authors reuse existing augmentations designed for robustness (not necessarily for fairness), they should perform a more detailed study on how each of those transformations affect fairness, and if the effect is positive or negative.

**Answer to W2**: Thanks for the great suggestion. The transformation indeed plays an important role. Theoretically, as long as the transformation satisfies the intra-group expansion assumption (whose basic requirement is that the transformation does not change the class or the sensitive attribute), we can bound the error and the unfairness as shown in Theorem 4.1. So, in our experiments, we select transformations based on this requirement. We would like to clarify that, by doing consistency regularization with those transformations, one can propagate labels from source to target so as to transfer accuracy. By using our proposed fair consistency regularization with a model that is fair on the source, we can make different groups achieve similar accuracy gains in the target domain, so as to transfer fairness. In Section 6.3, we investigate the role of the transformation and show that with weak transformations the ability to transfer accuracy is limited, but our method can still make the transfer process fair. We have added the experimental results of our method with 14 different transformations in Table 9 (Appendix E.3).
Different transformations do have different effects on fairness. We find that *Solarize*, *Color*, and *TranslateX* increase the unfairness in the source domain the most, while *Contrast*, *Color*, and *Solarize* yield the highest unfairness in the target domain. Note that this does not mean these augmentations always lead to unfairness, only that they are not suitable for our method. Our theory and algorithm are built upon the intra-group expansion assumption, and transformations like *Contrast*, *Color*, and *Solarize* may change the sensitive attribute "race" and break this assumption. Thus, in our experiments (Table 1) we use all the transformations except these.

---

> **Q1=W1**: How does the result in Table 1 show the superiority of the proposed methods on target fairness? From the numbers it is unclear if the proposed method works better than Laftr+FixMatch or CFair+FixMatch.

**The answer to Q1 is the same as that to W1.1.**

---

> **Q2**: Why do augmentations designed for robustness definitely improve fairness? Would some of the augmentations (other than obvious ones like color jittering) hurt fairness? The authors should perform a more detailed study on this.

**The answer to Q2 is the same as that to W2.**

---

Thanks again for reviewing our paper and asking good questions. We hope our answers have addressed all your questions and concerns. If so, we would greatly appreciate it if you could consider raising the score. We are happy to answer any follow-up questions.

**Example Table**

| Model  | test acc (%)         | gap                 |
|--------|----------------------|---------------------|
| Base   | 82.85 $\pm$ 0.42     | 17.14 $\pm$ 0.42    |
| Flip   | 88.07 $\pm$ 0.39     | 11.92 $\pm$ 0.39    |
| Rotate | 88.61 $\pm$ 0.16     | 11.14 $\pm$ 0.28    |
| Crop   | **91.38 $\pm$ 0.26** | **8.37 $\pm$ 0.26** |
