# Gaze Estimation using Transformer

The GazeTR model we use in this project is not presented and evaluated in the review paper **Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark** [1], but in a separate paper, **Gaze Estimation using Transformer** [2]. The review paper, which was also the main paper selected for this reproduction project, is used to provide context and background for the gaze estimation task.

## Group 72

- Anagha Magadi Rajeev (5729661) (A.MagadiRajeev@student.tudelft.nl) - Reproduced, hyperparameter check
- Sebastiaan Wijnands (4668561) (S.C.P.Wijnands@tudelft.nl) - Reproduced
- Varun Singh (5441935) (V.Singh-4@student.tudelft.nl) - Reproduced

A detailed contribution overview is given at the end of the blog.

## Introduction

Aptly named, gaze estimation concerns itself with determining the vector along which a person is looking. Accurate determination of this vector provides key information about the intentions of the subject, and the applications of this knowledge are broad. In the past, numerous attempts have been made to obtain accurate estimates of the gaze vector, where methods fall into one of three categories: 3D eye model recovery-based methods, 2D eye feature regression-based methods, and appearance-based methods. The latter does not require highly specialized equipment: no more than a web camera is needed to capture images. Appearance-based gaze estimation directly uses these images to learn a function that maps the facial appearance in the image to the gaze vector. This mapping function is naturally expected to be complex and non-linear, due to the multitude of factors that affect how a face appears in a 2D image (such as lighting and the position of the person in the image). The authors challenge the popularity of CNN-based models in this domain by using transformers instead, in an effort to obtain superior results. More information about gaze estimation can be found in the review paper [1]; however, keep in mind that there are some inconsistencies between the implementation and the techniques described by the authors, which is part of the motivation for our project and this blog.

The aim of this reproducibility project is to test the results and claims made by the authors regarding their proposed GazeTR model [2] for estimating the human gaze vector. The existing GazeTR implementation is used and applied to the Gaze360 dataset [3], following the preprocessing techniques mentioned in their GitHub repository, to verify whether we can replicate their original angular error numbers. Next, we train their pretrained model on the Gaze360 dataset and evaluate it on the Gaze360 test set. Our expectation is that the resulting performance will be much better than that of the pretrained model alone, but we aim to verify this as well. Along the way we also perform some hyperparameter tuning to find a more optimal setting of the parameters. Cases where we had to deviate from what the authors did due to practical constraints are highlighted and justified to the greatest extent possible.

In terms of structure, the core content of the blog starts with an overview of the original model (the GazeTR architecture). Alongside a rationale for the use of transformers in this context, background on transformers (both the pure and hybrid variants) and details on their implementation are provided.
The authors' specific GazeTR model details and final results conclude this overview. Next, the dataset used for this reproducibility project is discussed: Gaze360. After that, the approach to attain comparable results is described. The description of the approach rests on three pillars: the preprocessing of the Gaze360 dataset, the new model architecture (hyperparameter specification), and the training details (epochs, loss). With both the GazeTR model and the Gaze360 dataset specified, together with details on the team's own reproduced implementation of this model, the final results can be compared and discussed in terms of accuracy. The blog ends with concluding remarks and recommendations for continued work.

## GazeTR Model

### Motivation for using Transformers

The GazeTR paper is novel for the gaze estimation problem in that it is the first paper to employ transformers to solve it. Transformers have been shown to capture global relations better and to outperform state-of-the-art convolutional networks on certain computer vision tasks such as image classification. Encouraged by the success of transformers on other computer vision tasks, the authors of the GazeTR paper try to answer the question of whether a transformer is suitable for gaze estimation. For this they use two transformers. The first one, called the pure transformer, splits an image into several patches and uses a transformer encoder to predict gaze from these patches. The second one, called the hybrid transformer, first uses a CNN to extract local feature maps from the image and then employs a transformer encoder to estimate gaze from these feature maps. They do this because they consider that the patch division of the pure transformer might corrupt the image structure and yield poor results for the gaze regression task. To address this, the authors use the convolutional layers of a ResNet-18 in the hybrid transformer to learn local feature maps from face images and then use a transformer encoder to capture global relations from those maps.

![](https://i.imgur.com/WZmqYU9.png)
##### Fig 1: The GazeTR model uses two transformers. The first one, the pure transformer, splits images into several patches and uses a transformer encoder to predict gaze from these patches. The second one, the hybrid transformer, first uses a CNN to extract local feature maps from the image and then employs a transformer encoder to estimate gaze from these feature maps.

### Transformer

The transformer's main module is the self-attention mechanism. Given a feature $X$, this feature is projected into queries $Q$, keys $K$ and values $V$ with multi-layer perceptron (MLP) layers.

![](https://i.imgur.com/hI5fvVh.png)

The transformer has three other components: multi-head self-attention (MSA), the MLP, and layer normalization (LN). MSA extends the self-attention module into multiple subspaces by linearly projecting the queries, keys and values $N$ times with different linear projections, where $N$ is the number of heads. The output values are concatenated and linearly projected to form the final output. A two-layer MLP is applied between MSA layers, and LN is used for stable training and faster convergence.
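
To make these building blocks concrete, here is a minimal PyTorch sketch of multi-head self-attention and of an encoder block combining MSA, a two-layer MLP and layer normalization with residual connections. This is our own illustrative code (a pre-norm variant with assumed dimensions and layer names), not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Minimal multi-head self-attention, illustrating the MSA block described above."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0, "feature dim must be divisible by the number of heads"
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # Project the input feature X into queries, keys and values with one linear layer.
        self.qkv = nn.Linear(dim, dim * 3)
        # Final linear projection after concatenating the heads.
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        B, T, C = x.shape
        qkv = self.qkv(x).reshape(B, T, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)               # each: (B, heads, T, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale       # scaled dot-product scores
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, C)   # concatenate the heads
        return self.proj(out)

class EncoderBlock(nn.Module):
    """One encoder block: LN + MSA and LN + two-layer MLP, both with residual connections."""
    def __init__(self, dim: int, num_heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = MultiHeadSelfAttention(dim, num_heads)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x
```
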
### Pure Transformer

The input is a face image, which is divided into patches; each patch is then flattened into a feature vector. An extra token is added as a learnable embedding which, during training, aggregates the features of the other patches through self-attention and outputs the gaze representation.

![](https://i.imgur.com/S5lMzo7.png)

Information about the position of each patch is also stored, and the result is fed into the transformer as a feature matrix. The transformer outputs a new feature matrix from which the first feature vector (the gaze representation) is selected, and an MLP then regresses the gaze from this representation.

![](https://i.imgur.com/pEhGbmP.png)

### Hybrid Transformer

The hybrid transformer consists of a CNN and a transformer. The CNN extracts local features from the image; after convolution, each feature contains the information of a local region. This feature matrix is then fed into a transformer to capture global relations. The process of estimating the gaze representation is almost the same as in the pure transformer, except that here a CNN processes the face images to obtain the feature maps. The obtained gaze representation is then regressed into a human gaze as described for the pure transformer.

### GazeTR Model Details and Results

The authors used 224x224x3 images as input data for training, and their GazeTR model outputs a 2D vector containing the yaw and the pitch of the gaze. They used the L1 loss as their loss function. The ETH-XGaze dataset [4] was used for training the model, which was then evaluated on four datasets: MPIIFaceGaze, EyeDiap, Gaze360, and RT-Gene. They followed the preprocessing described in the review paper for dataset processing. The metric used to evaluate performance was the angular error, which should be as small as possible.

![](https://i.imgur.com/SHCHEWs.png)
##### Table 1: Results of the GazeTR model compared with the state of the art. GazeTR-Hybrid achieves state-of-the-art results.

From Table 1 it can be seen that the GazeTR-Pure model does not achieve competitive results, while the GazeTR-Hybrid model outperforms the state of the art in all comparisons. Although the GazeTR-Pure model, which consists only of the transformer, does not reach state-of-the-art results, it does come close and indicates the potential of transformers in gaze estimation. In the Results and Discussion section we compare these numbers with our own.
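
To make the hybrid pipeline concrete, the following is a minimal PyTorch sketch of a GazeTR-Hybrid-style forward pass: the convolutional layers of a ResNet-18 produce a 7x7 feature map, each spatial location becomes a token, a learnable gaze token and positional embeddings are added, a transformer encoder processes the tokens, and an MLP regresses yaw and pitch from the gaze token. The feature dimension, encoder depth, and head count are our assumptions for illustration; this is not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class GazeTRHybridSketch(nn.Module):
    """Illustrative GazeTR-Hybrid-style model: CNN feature maps -> transformer encoder -> 2D gaze (yaw, pitch)."""
    def __init__(self, dim: int = 512, depth: int = 6, num_heads: int = 8):
        super().__init__()
        # Convolutional layers of a ResNet-18 (classifier removed) produce local feature maps.
        backbone = resnet18()
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # (B, 512, 7, 7) for 224x224 input
        # Learnable gaze token plus positional embeddings for the 7*7 feature tokens + 1 gaze token.
        self.gaze_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 7 * 7 + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                                   dim_feedforward=2 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 2))  # regress (yaw, pitch)

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        # face: (B, 3, 224, 224)
        feat = self.cnn(face)                            # (B, 512, 7, 7) local feature map
        tokens = feat.flatten(2).transpose(1, 2)         # (B, 49, 512): one token per spatial location
        gaze = self.gaze_token.expand(tokens.size(0), -1, -1)
        tokens = torch.cat([gaze, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)
        return self.head(tokens[:, 0])                   # gaze representation -> (B, 2) yaw and pitch

# Training uses the L1 loss between the predicted and ground-truth (yaw, pitch):
# loss = nn.L1Loss()(model(images), labels)
```
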
## Gaze360 Dataset

The Gaze360 dataset includes 3D gaze annotations, a wide range of gaze and head poses, varied indoor and outdoor capture environments, and a diverse group of subjects. It is surpassed in the number of subjects only by the GazeCapture dataset, and it is the first to provide these qualities for short continuous videos at 8 Hz.

![](https://i.imgur.com/kExXweX.png)
##### Fig 2: Gaze360 dataset samples, showing the diversity in environment, illumination, age, sex, ethnicity, head pose and gaze direction. Top: full-body crops; bottom: closer-up head crops. Yellow arrows show the measured ground-truth gaze.

The dataset consists of 238 subjects recorded in five indoor and two outdoor locations over nine recording sessions. In total, the dataset contains 129K training, 17K validation, and 26K test images with gaze annotations. The subjects' ages, ethnicities, and genders are diverse, with 58% female and 42% male subjects. The dataset covers the entire horizontal range of 360 degrees and allows for gaze estimation up to the limit of eye visibility. The vertical range is limited by the achievable elevation of the marker. Sampling is less dense in the rear region due to occlusion of the target board by the subjects. To validate the accuracy of the gaze annotations, a control experiment was conducted: the mean difference between both gaze labels was 2.9 degrees over three recordings of two subjects, which is within the error of appearance-based eye tracking at a distance, validating the acquisition procedure as a means of collecting an annotated 3D gaze dataset.

## Approach

### Preprocessing of the dataset

We used the preprocessing code from GazeHub for the Gaze360 dataset [5]. We used the Gaze360 dataset in its original form, which is already partitioned into training, testing, and validation sets. Nevertheless, it is important to mention that certain images in the dataset only capture the back of the subject, making them unsuitable for appearance-based methods. To address this, we applied a basic filtering rule to clean the dataset, eliminating any images lacking face detection results, as specified in the accompanying face detection annotations. The preprocessing code takes as input the head pose and eye gaze vectors of the person in an image and returns the cropped eye image, the transformed image, and the set of parameters used in the transformation. The output of the final preprocessing step is 224x224 crops of both the head and the face, organised into train, test, val, and unused directories. Log files are created containing the image paths for the train, test, val, and unused sets. These log files and the face images are used for the evaluation of the pretrained model, the training, and the fine-tuning.

### New model architecture

GazeTR uses transformers, specifically hybrid transformers, for the gaze estimation task. The hybrid transformer combines the local feature maps learned by a ResNet-18 with a transformer encoder that captures global relations. A pre-trained model was provided by the original authors of the paper. The pre-training dataset was ETH-XGaze, consisting of 1.1 million high-resolution images of 110 subjects captured in an indoor environment.

**Hyperparameters used** (a training setup sketch with these values follows this section):
1. Batch size: 512
2. Epochs: 50
3. Learning rate: 0.0005
4. Weight decay: 0.5
5. Adam optimizer: Beta1 = 0.9, Beta2 = 0.999
6. Number of attention heads (N) = 4

**Reproduction:** To replicate their results, we directly used the pre-trained model to evaluate our dataset. Since no details were provided on the method of training and evaluation, our assumption was that the pre-trained model was used directly, with the above hyperparameters.

**Training:** The above hyperparameters were used to train the model from scratch on the Gaze360 training set without any pre-training. This is referred to as Custom Model - 1. Apart from this, we performed experiments to understand:

**1. Hyperparameter tuning:** To understand how the model's performance varies with the number of attention heads in the encoder, we set N = 8. We kept the rest of the hyperparameters as described above and trained the model. This is referred to as Custom Model - 2, and the results of this study are detailed in the next section.

**2. Impact of pre-training:** With N = 8, we trained the model in two ways. The first was trained without using the pre-trained model as a baseline and is described above as Custom Model - 2. The Finetuned Model applied transfer learning, using the pre-trained ETH-XGaze model as the starting point for training.
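
As a rough illustration of the training setup described above, the sketch below combines a Dataset that reads a GazeHub-style log file of face-crop paths with (yaw, pitch) labels and a training loop using the listed hyperparameters. The log-file column layout and the helper names (`Gaze360FaceDataset`, `train`) are assumptions for illustration, not the authors' exact code.

```python
import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image

class Gaze360FaceDataset(Dataset):
    """Reads a GazeHub-style label/log file listing face-crop paths and 2D (yaw, pitch) gaze labels.
    The column layout assumed here (path, yaw, pitch) is for illustration only."""
    def __init__(self, label_file: str, image_root: str):
        with open(label_file) as f:
            lines = f.read().splitlines()[1:]          # assume a one-line header
        self.samples = [line.split() for line in lines]
        self.image_root = image_root
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),             # face crops are 224x224 after preprocessing
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        cols = self.samples[idx]
        image = Image.open(os.path.join(self.image_root, cols[0])).convert("RGB")
        yaw, pitch = float(cols[1]), float(cols[2])    # assumed label columns
        return self.transform(image), torch.tensor([yaw, pitch], dtype=torch.float32)

def train(model, label_file, image_root, device="cuda"):
    # Hyperparameters listed above: batch size 512, 50 epochs, lr 5e-4,
    # weight decay 0.5, Adam betas (0.9, 0.999), L1 loss.
    loader = DataLoader(Gaze360FaceDataset(label_file, image_root),
                        batch_size=512, shuffle=True, num_workers=4)
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                                 betas=(0.9, 0.999), weight_decay=0.5)
    criterion = nn.L1Loss()
    model.to(device).train()
    for epoch in range(50):
        for images, gaze in loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), gaze.to(device))
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```
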
### Evaluation

Evaluation was done using the angular error as the evaluation metric, with a smaller error indicating better performance. Most of the learning parameters were inherited either from the pre-training settings or from the details given in the paper, with some parameters adjusted per evaluation dataset. Our setup ran on a Google Colab instance via GCP; the instance was an n1-highmem-2 with one NVIDIA T4 GPU. We used the PyTorch framework, since the original authors used it for their implementation as well. The exact specifications and the steps needed to run our setup are explained in much more detail in the GitHub repository accompanying this reproduction [6].
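
For reference, the angular error can be computed by converting the predicted and ground-truth (yaw, pitch) pairs into 3D unit gaze vectors and taking the angle between them. The sketch below is our own illustration of this standard metric; the exact yaw/pitch-to-vector convention may differ from the one used in the GazeTR code.

```python
import numpy as np

def gaze_to_vector(yaw_pitch: np.ndarray) -> np.ndarray:
    """Convert an (N, 2) array of (yaw, pitch) in radians to (N, 3) unit gaze vectors.
    The axis convention here is one common choice and is an assumption."""
    yaw, pitch = yaw_pitch[:, 0], yaw_pitch[:, 1]
    x = -np.cos(pitch) * np.sin(yaw)
    y = -np.sin(pitch)
    z = -np.cos(pitch) * np.cos(yaw)
    return np.stack([x, y, z], axis=1)

def angular_error_degrees(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Angle in degrees between predicted and ground-truth gaze vectors."""
    p = gaze_to_vector(pred)
    g = gaze_to_vector(gt)
    cos = np.sum(p * g, axis=1) / (np.linalg.norm(p, axis=1) * np.linalg.norm(g, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# The mean angular error over the test set is reported as the final metric:
# mean_error = angular_error_degrees(predictions, labels).mean()
```
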
## Results and Discussion

The angular error results obtained with the GazeTR authors' pretrained model are presented first, and the results they report in the original paper are compared with those of our (assumed) replication. Second, we train the model from scratch (without the pre-trained model) and observe the results. Third, we tune the hyperparameters and use the pre-trained model to train on the Gaze360 dataset, measuring the performance on its test set; here we expect an improvement in performance, specifically a lower angular error. The models we trained used the hyperparameters that gave optimal results, as described in the section above.

| No | Method | Gaze360 angular error | Comments |
| --- | --- | --- | --- |
| 1 | GazeTR-Pure | 13.58° | Original angular error reported in the GazeTR paper |
| 2 | GazeTR-Hybrid | 10.62° | Original angular error reported in the GazeTR paper |
| 3 | GazeTR-Hybrid (reproduction) | 26.51° | Angular error obtained from our reproduction of their (assumed) setup |
| 4 | Custom Model - 1 | 10.23° | Model trained from scratch directly on the Gaze360 dataset without pre-training (N = 4) |
| 5 | Custom Model - 2 | 10.15° | Model trained from scratch directly on the Gaze360 dataset without pre-training (N = 8) |
| 6 | Finetuned Model | 9.05° | Pretrained model used as the baseline and further trained on the Gaze360 dataset (N = 8) |
##### Table 2: Results of the GazeTR model compared with our custom-trained models, together with the result of directly using the authors' provided pretrained model.

The first two rows of Table 2 show the angular error reported by the GazeTR authors on the Gaze360 dataset. Their model was trained only on the ETH-XGaze dataset and then directly evaluated on Gaze360 (and on other datasets, which are not of interest for this reproduction). Interestingly, in our reproduction of the GazeTR-Hybrid model on the Gaze360 dataset we obtain an angular error of 26.51°, which is much higher than the 10.62° quoted by the authors. Ideally, the results from our reproduction and from the authors should have been almost the same. The setup of the GazeTR repository, the preprocessing of the dataset, and the implementation were all followed from the GazeTR GitHub repository without deviation. However, the repository itself is not well documented, maintained, or organised, and we frequently had to consult the authors' Phi-ai Lab website [5] for the dataset preprocessing, so it is possible that a crucial step of the setup or the preprocessing was missed. Another possibility is that we did not perform the same training steps or the same kind of evaluation as the authors, since no such details are mentioned in the paper, which leaves a lot of room for ambiguity.

We also tuned the hyperparameter N (the number of attention heads) and evaluated the impact of pre-training through the custom and finetuned models. Custom Model - 1 had 4 attention heads, was trained from scratch on the Gaze360 dataset, and was evaluated on the test set. The angular error obtained was 10.23°, which is marginally lower than the authors' published number of 10.62°. Custom Model - 2 had 8 attention heads, was trained from scratch on the Gaze360 dataset, and was evaluated on the test set. The angular error obtained was 10.15°, again lower than the authors' published 10.62°. These results show that our training and evaluation implementation works and appears to be set up correctly, which makes the inconsistency in our reproduction's angular error, which does not come close to the authors' number, even more peculiar. The numbers for Custom Model - 2 seem to be a more faithful reproduction of the authors' original results, but the lack of details means we cannot verify this. Finally, the finetuned model, which had 8 attention heads, used the pre-trained model as a baseline, and was trained on the Gaze360 dataset and evaluated on its test set, achieved the best angular error of 9.05°.

## Conclusion

The goal of obtaining model predictions from a reimplementation of the GazeTR model proposed by the authors has been achieved. The hybrid transformer outperforms pure transformers and current state-of-the-art methods on popular benchmarks when pre-trained, demonstrating the potential of transformers for larger performance improvements in gaze estimation. One would expect the results obtained from the reimplementation to be nearly identical to the values quoted by the authors. However, the results we obtained are in stark contrast to this hypothesis: the angular error is significantly higher, more than double. Naturally, this raises questions about the quality of the paper, which lacks essential details regarding the training and evaluation methods used for each of the datasets. On the flip side, potential errors made by our team during the reproduction should not be entirely dismissed either. Regardless, the discrepancy between the two results requires further investigation to determine the exact model performance and to reveal the potential of transformers in this domain once and for all.

## Challenges

The GazeTR paper omits a lot of important detail required to reproduce its results as accurately as possible. For example, the authors mention in Section 4 that they preprocessed the datasets as described in the review paper, but they do not provide any further detail or explanation, instead just citing the review paper. This style of writing skips over a lot of detail and makes it difficult for us and other researchers to reproduce their results. The GazeTR GitHub repository is unorganised, has incomplete documentation, and was last updated two years ago. Because of this, we initially had many problems setting up the repository on the Google Cloud Colab platform and it took some time, but we were able to set everything up as a team.

## Task Contribution Overview

| Member | Specific contribution |
| --- | --- |
| Anagha Magadi Rajeev | Set up the cloud environment on GCP. Preprocessed the dataset. Created the setup for training and evaluating the model. Hyperparameter tuning. Blog: Gaze360 dataset, Approach and Discussion. |
| Sebastiaan Wijnands | Literature review and survey. Assisted in the setup of the cloud environment. Assisted in the preprocessing of the dataset. Blog: Introduction and Conclusion. |
| Varun Singh | Assisted in the setup of the cloud environment. Initial setup and exploration of the GazeTR GitHub repository. Preprocessing of the dataset. Assisted in the hyperparameter tuning. Blog: GazeTR model, Results and Discussion. |

## References

[1] Cheng, Y., Wang, H., Bao, Y., & Lu, F. (2021). Appearance-based gaze estimation with deep learning: A review and benchmark. arXiv preprint arXiv:2104.12668.

[2] Cheng, Y., & Lu, F. (2022, August). Gaze estimation using transformer. In 2022 26th International Conference on Pattern Recognition (ICPR) (pp. 3341-3347). IEEE.

[3] Kellnhofer, P., Recasens, A., Stent, S., Matusik, W., & Torralba, A. (2019). Gaze360: Physically unconstrained gaze estimation in the wild. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6912-6921).

[4] Zhang, X., Park, S., Beeler, T., Bradley, D., Tang, S., & Hilliges, O. (2020). ETH-XGaze: A large scale dataset for gaze estimation under extreme head pose and gaze variation. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V (pp. 365-381). Springer International Publishing.

[5] For 3D Gaze Estimation - GazeHub@Phi-ai Lab. (n.d.). https://phi-ai.buaa.edu.cn/Gazehub/3D-dataset/#gaze360

[6] Varun Singh. (n.d.). GitHub - varunsingh3000/GazeTR: Reproduction project for the CS4240 Deep Learning 2022/23 course at TU Delft. https://github.com/varunsingh3000/GazeTR
