# CS 184 Project 1 Writeup

Tianchen Liu 3034681144, Riley Peterlinz 3036754368

## Overview

For this project we first rasterized triangles, then implemented supersampling to prevent jaggies. Next we implemented transforms for `svg` files and barycentric coordinates for color gradients and texture rendering. Additionally, we implemented pixel sampling and level sampling to antialias texture rasterization.

We had some difficulty implementing supersampling due to integer rounding, which we solved by rounding floats one at a time instead of only rounding the final result. We also had some difficulty implementing the texture coordinates, specifically getting the scaling between $(x, y)$ and $(u, v)$ correct.

## Task 1

> Walk through how you rasterize triangles in your own words.

We check each pixel in the bounding box of the triangle and test whether it lies within the triangle using the method taught in lecture:

```cpp!
auto L1 = L(x, y, x0, y0, x1, y1);
auto L2 = L(x, y, x1, y1, x2, y2);
auto L3 = L(x, y, x2, y2, x0, y0);

return (L1 >= 0) && (L2 >= 0) && (L3 >= 0);
```

where the `L` function is:

```cpp!
float L(float x, float y, float X1, float Y1, float X2, float Y2) {
  return -(x - X1) * (Y2 - Y1) + (y - Y1) * (X2 - X1);
}
```

which is exactly the `L` edge function taught in lecture. Note that we use `>=` checks so that points on the boundary of the triangle are accounted for as well.

Here's the code that iterates through the bounding box of the triangle:

```cpp!
for (auto x = min_x; x <= max_x; x++) {
  for (auto y = min_y; y <= max_y; y++) {
    if (PointInTriangle(x + 0.5, y + 0.5,
                        x_vec.x, y_vec.x,
                        x_vec.y, y_vec.y,
                        x_vec.z, y_vec.z)) {
      // Rasterize point
    }
  }
}
```

We pass in `(x + 0.5, y + 0.5)` because, according to the spec:

> Please assume that screen sample positions are located at half-integer coordinates in screen space.
> That is, the top-left sample point is at coordinate (0.5, 0.5), and the bottom-right sample point is at coordinate (target_w - 0.5, target_h - 0.5).

The method used to rasterize a point differs depending on whether you want to supersample or not, which we elaborate on in task 2.

> Explain how your algorithm is no worse than one that checks each sample within the bounding box of the triangle.

It's no worse because checking each sample within the bounding box of the triangle is exactly what we did. We start from the top-left corner `(min{x}, min{y})`, where `{x}` denotes the three x values of the triangle's vertices (similarly for `{y}`), and iterate through each pixel until the bottom-right corner `(max{x}, max{y})`.

> Show a png screenshot of basic/test4.svg with the default viewing parameters and with the pixel inspector centered on an interesting part of the scene.

![](https://i.imgur.com/8YbyXOB.png)

Centered on the endpoint of the red triangle:

![](https://i.imgur.com/ZBmvn8V.png)

## Task 2

> Why is supersampling useful?

We want to get rid of "jaggies" and other rasterization artifacts that come with the naive method built in task 1. It also smooths out lines for an overall cleaner image. I found this somewhat counterintuitive, since supersampling microscopically "muddies up" the pixels by averaging in more samples.

> Walk through your supersampling algorithm and data structures.

Supersampling was done by rendering triangles, points, and lines on a canvas `vector<Color> sample_buffer` that is `sample_rate` times larger than the window we are downsampling to, the `framebuffer`, whose size is `height * width`. Downsampling occurs by averaging the pixel values in a `sqrt(sample_rate)` x `sqrt(sample_rate)` square of `sample_buffer` into a single pixel of `framebuffer`.

We only want to anti-alias the triangles and *not* points or lines, since supersampling captures more resolution for the triangle, smoothing out edges by catching pixels missed at lower sample rates.
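The downsampling just described can be sketched as a standalone function. This is a minimal sketch, not the project code: a grayscale `float` stands in for `Color`, and the names are ours.

```cpp
#include <cmath>
#include <vector>

// Average each k x k block of the supersampled buffer (k = sqrt(sample_rate))
// into one pixel of a width x height framebuffer.
std::vector<float> resolve(const std::vector<float>& sample_buffer,
                           int width, int height, int sample_rate) {
  int k = (int)std::sqrt((double)sample_rate);
  std::vector<float> framebuffer(width * height, 0.0f);
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      float sum = 0.0f;
      for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
          sum += sample_buffer[(y * k + j) * (width * k) + (x * k + i)];
      framebuffer[y * width + x] = sum / (float)sample_rate;
    }
  }
  return framebuffer;
}
```

For `sample_rate = 4`, a 2 x 2 block holding `{0, 1, 1, 0}` resolves to a single pixel of value 0.5.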
So, we treat triangles differently from points and lines.

###### Triangles

The hard part of this is making indexing and scaling consistent. We did this by scaling the triangle's points like so:

```cpp!
Vector3D x_vec = Vector3D(x0, x1, x2) * sqrt(sample_rate);
Vector3D y_vec = Vector3D(y0, y1, y2) * sqrt(sample_rate);
```

This allows us to use virtually the same rasterization as in task 1, except we need to be careful about indexing the rasterized point:

```cpp!
int sx = (int) floor(x);
int sy = (int) floor(y);

sample_buffer[sy * (width * sqrt(sample_rate)) + sx] = color;
```

Notice the width is now scaled by `sqrt(sample_rate)`, since the whole canvas has been stretched by that factor in both height and width.

###### Points and lines

The problem with making the canvas larger is that objects we don't want to anti-alias, points and lines, will fade if we downsample from a large canvas while keeping their size in `sample_buffer` at 1. So, we supersample each of their pixels as well. This is done in the `RasterizerImp::fill_pixel` function:

```cpp!
for (int i = 0; i < sqrt(sample_rate); i++) {
  for (int j = 0; j < sqrt(sample_rate); j++) {
    int sx = x * sqrt(sample_rate) + i;
    int sy = y * sqrt(sample_rate) + j;
    sample_buffer[sy * (width * sqrt(sample_rate)) + sx] = c;
  }
}
```

This fills a `sqrt(sample_rate)` x `sqrt(sample_rate)` square that downsamples to a single pixel of solid color `c`.

> What modifications did you make to the rasterization pipeline in the process?

There were two main changes, made to `RasterizerImp::set_sample_rate` and `RasterizerImp::resolve_to_framebuffer`. We need to allocate more memory for `sample_buffer` when changing the sample rate, so every time `RasterizerImp::set_sample_rate` is called we resize the `sample_buffer` using

```cpp!
this->sample_buffer.resize(width * height * sample_rate, Color::White);
```

We also need to perform the downsampling mentioned above in `RasterizerImp::resolve_to_framebuffer`.
For every `x` and `y` position of the `framebuffer`, we average over `sample_rate` pixels in a double for loop:

```cpp!
Color col = Color();
for (int i = 0; i < sqrt(sample_rate); i++) {
  for (int j = 0; j < sqrt(sample_rate); j++) {
    int sx = x * sqrt(sample_rate) + i;
    int sy = y * sqrt(sample_rate) + j;
    auto pos = sy * (width * sqrt(sample_rate)) + sx;
    col += sample_buffer[pos] * (1 / float(sample_rate));
  }
}
```

### Results

The following shows successively larger sample rates and how they improve the quality of an image by removing jaggies and reducing pixel separation.

sample_rate = 1
![sample_rate = 1](https://i.imgur.com/o6LiKgV.png)

sample_rate = 4
![sample_rate = 4](https://i.imgur.com/W2Ai088.png)

sample_rate = 9
![sample_rate = 9](https://i.imgur.com/TEUa8mw.png)

sample_rate = 16
![sample_rate = 16](https://i.imgur.com/7XsfGLa.png)

In words, increasing the sample rate reduces aliasing because when testing whether a pixel is inside a triangle, we only test its very middle, an infinitesimal point; we don't check the whole area of the pixel, so we can hit some midpoints while missing others. Supersampling removes these gaps by considering more points within each display pixel that might lie inside the triangle we want to rasterize.

## Task 3

> Create an updated version of svg/transforms/robot.svg with cubeman doing something more interesting, like waving or running. Feel free to change his colors or proportions to suit your creativity. Save your svg file as my_robot.svg in your docs/ directory and show a png screenshot of your rendered drawing in your write-up. Explain what you were trying to do with cubeman in words.

![](https://i.imgur.com/jEU1vRB.png)

I first rotate his forearm rectangle by 90 degrees, then translate it slightly toward the bottom and the right, i.e. instead of

```svg!
<g transform="translate(-70 0)">
  <g transform="scale(.6 .2)">
    ...
  </g>
</g>
```

we have

```svg!
<g transform="rotate(90)">
  <g transform="translate(-20 50)">
    <g transform="scale(.6 .2)">
      ...
    </g>
  </g>
</g>
```

We also changed his head and forearms to a skin color.

## Task 4

> Explain barycentric coordinates in your own words and use an image to aid you in your explanation. One idea is to use a svg file that plots a single triangle with one red, one green, and one blue vertex, which should produce a smoothly blended color triangle.

Barycentric coordinates look like the following:

$$V = \alpha V_a + \beta V_b + \gamma V_c$$

where $\alpha, \beta, \gamma$ describe "how near" the point is to the vertices $V_a, V_b, V_c$ respectively. For example, if $\alpha = 1$, the point is $V_a$, and if $\alpha = 0$, the point is on the side $BC$.

![](https://i.imgur.com/UJH5DBk.png)

Here's a triangle with one red, one green, and one blue vertex.

![](https://i.imgur.com/10n18Fd.png)

> Show a png screenshot of svg/basic/test7.svg with default viewing parameters and sample rate 1. If you make any additional images with color gradients, include them.

![](https://i.imgur.com/mDjAOUN.png)

Here's another triangle but with different colors!

![](https://i.imgur.com/lYhtTf6.png)

## Task 5

Texture mapping is done by having two triangles: one in screen space $(\vec{x}, \vec{y})$ and the other in texture space $(\vec{u}, \vec{v})$.
We can consider the vectors

$$\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \quad \vec{y} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \quad \vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \quad \vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$$

We want to transform triangles from screen space to texture space in order to interpolate over them:

$$\vec{x} \rightarrow \vec{u} \text{ and } \vec{y} \rightarrow \vec{v}$$

In other words, we want some matrix $M$ so that, given a point on the screen in homogeneous coordinates, we transform it to a point on the texture map:

$$M\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

We formulated $M$ by considering

$$M \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ 1 & 1 & 1 \end{bmatrix} \Rightarrow M = \begin{bmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 1 & 1 & 1 \end{bmatrix}^{-1}$$

Now that we have $M$, we can simply apply it to every point within the triangle to find its texture mapping.

This sampling of the texture is done in one of two ways: we either take the nearest integer texture pixel to $(u, v)$, where $u, v \in \mathbb{R}$ (`Texture::sample_nearest`), or we take a bilinear interpolation of $(u, v)$ between integer texture sample locations (`Texture::sample_bilinear`).

###### Nearest

Sampling nearest is trivial, since we just round our $(u, v)$ coordinates and retrieve the texel:

```cpp!
int tx = round(uv.x * (mip.width - 1));
int ty = round(uv.y * (mip.height - 1));

Color col = mip.get_texel(tx, ty);
```

###### Bilinear

Bilinear sampling must linearly interpolate in both the $u$ and $v$ directions. Consider a point $(u, v)$. It is generally located in a bounding box of integer texture sample locations $uv_{00}, uv_{10}, uv_{01}, uv_{11}$:

```cpp!
// uv_00 - uv_10
// |     uv    |
// uv_01 - uv_11
```

Note: the indices of each point $uv_{ij}$ correspond to how we round, where $i$ is the horizontal direction and $j$ the vertical; 0 means we take the floor along that axis, while 1 means the ceiling.

We want to first interpolate the color along the horizontals, then along the vertical. First, we have to ask how far $(u, v)$ is into the bounding box. Denoting `s` for the horizontal and `t` for the vertical distance away from $uv_{00}$:

```cpp!
float s = tx - floor(tx);
float t = ty - floor(ty);
```

Then we can use a lerp to interpolate the color along the top horizontal; the lerp takes a weighted average of the colors at two points.

```cpp!
// uv_00 - uv_10
Color uv_0 = uv_00 + s * (uv_10 + (-1 * uv_00));
```

then along the bottom horizontal

```cpp!
// uv_01 - uv_11
Color uv_1 = uv_01 + s * (uv_11 + (-1 * uv_01));
```

Finally, we can get the interpolation along the vertical:

```cpp!
Color color = uv_0 + t * (uv_1 + (-1 * uv_0));
```

#### Results

Nearest with `sample_rate = 1`
![](https://i.imgur.com/4rwc75L.png)

Bilinear with `sample_rate = 1`
![](https://i.imgur.com/jjrhuGS.png)

Nearest with `sample_rate = 16`
![](https://i.imgur.com/1eb8LTU.png)

Bilinear with `sample_rate = 16`
![](https://i.imgur.com/oNZwEvN.png)

For thin lines in textures, we can see that bilinear interpolation does much better than nearest at `sample_rate = 1`. While increasing nearest to `sample_rate = 16` does give results comparably smooth to bilinear at its lowest sample rate, bilinear is a cheaper way of achieving comparable quality. Of course, the main benefit of a higher sample rate is less jagged edges; even bilinear at `sample_rate = 1` has jagged edges.

The biggest differences occur when we are zoomed into a texture, such as the swirling Berkeley logo.
Nearest with `sample_rate = 1`
![](https://i.imgur.com/cSZM6Us.png)

Bilinear with `sample_rate = 1`
![](https://i.imgur.com/SOG8Plm.jpg)

Nearest with `sample_rate = 16`
![](https://i.imgur.com/h0QkYrO.png)

Bilinear with `sample_rate = 16`
![](https://i.imgur.com/sHY3JWX.jpg)

We can see that the sample rate doesn't really affect nearest or bilinear here, since we are so zoomed in that supersampling just takes more of the same texels for each pixel. The biggest difference comes from nearest having a very pixelated texture and bilinear having a characteristic blurred look. The difference is so large when zoomed in because bilinear can interpolate over a more continuous distribution of texels, while nearest keeps its chunky look.

## Task 6

Level sampling is an antialiasing technique in which we use a pyramid of image resolutions called a mipmap. The idea is that level 0 has the most detail, while higher levels are less detailed. So when an object is "far away", we can use higher mipmap levels to reduce aliasing, since there is less variation in the texture going over a large swath of texels. When an object is "close", we can use level 0 because we want the highest detail.

Quantitatively, we can find the level by considering how far apart a pixel is from its neighbors in texture space. This is done by taking the difference between the (u, v) coordinates of a pixel and those of its right and bottom neighbors:

```cpp!
Vector3D p3_uv = M * Vector3D(x + 0.5, y + 0.5, 1);
Vector3D p3_dx_uv = M * Vector3D(x + 1.5, y + 0.5, 1) - p3_uv;
Vector3D p3_dy_uv = M * Vector3D(x + 0.5, y + 1.5, 1) - p3_uv;
```

We can then take the maximum of the texel-space distances to the right and bottom neighbors, and take its log to find the mipmap level `d`:

```cpp!
Vector2D Lx(sp.p_dx_uv.x * (width - 1), sp.p_dx_uv.y * (height - 1));
Vector2D Ly(sp.p_dy_uv.x * (width - 1), sp.p_dy_uv.y * (height - 1));

float L = max(Lx.norm(), Ly.norm());
float d = log2(L);
```

Intuitively, this means the further away texels are from their neighbors, the higher the mipmap level (lower resolution) we want, to avoid aliasing.

We now have three ways of antialiasing: pixel sampling, level sampling, and the number of samples per pixel.

**Pixel Sampling** - Bilinear sampling takes almost no extra memory and only a couple more operations per pixel. It smooths out textures when close up, and when far away it has antialiasing comparable to supersampling, but it does not antialias edges.

**Level Sampling** - Takes a little more time than pixel sampling, since we have to calculate derivatives, but this is a constant-time increase per pixel. It doesn't take much more memory, since higher mipmap levels have lower resolution. It works better than the other methods for dynamically antialiasing parts of an image with different spatial frequencies (e.g. maps with lines).

**Number of Samples Per Pixel** - Much slower than the other two sampling methods, since the number of operations grows quadratically with the height and width of each supersampled pixel. It similarly increases our memory usage, but it's the only one of the three that antialiases edges. It does not smooth out textures when zoomed in.

##### Example

I will be using this image from Apple Maps on CarPlay to showcase the power of level sampling, because I noticed Apple does not antialias their own traffic lines.

![](https://i.imgur.com/27LjHf7.png)

I know that antialiasing needs to be done before rasterizing, and this image has already been rasterized, but let's have some fun!
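First, as a quick sanity check, the level formula above can be exercised standalone. A sketch with plain floats in place of `Vector2D` (names are ours); the inputs are the uv-derivatives already scaled into texel units:

```cpp
#include <algorithm>
#include <cmath>

// Mipmap level from the screen-space uv-derivatives in texel units:
// d = log2(max(||d(uv)/dx||, ||d(uv)/dy||)).
float mip_level(float dudx, float dvdx, float dudy, float dvdy) {
  float Lx = std::sqrt(dudx * dudx + dvdx * dvdx);
  float Ly = std::sqrt(dudy * dudy + dvdy * dvdy);
  return std::log2(std::max(Lx, Ly));
}
```

One screen pixel covering one texel gives level 0, while covering a 4 x 4 patch of texels gives level 2; in practice `d` is also clamped to the valid level range and, for `L_NEAREST`, rounded.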
`L_ZERO` and `P_NEAREST`
![](https://i.imgur.com/kTby6KQ.png)

`L_ZERO` and `P_LINEAR`
![](https://i.imgur.com/GRqGhA0.png)

`L_NEAREST` and `P_NEAREST`
![](https://i.imgur.com/EHzQEWA.png)

`L_NEAREST` and `P_LINEAR`
![](https://i.imgur.com/CFeWVPM.png)

I zoomed out so that level sampling can take full effect. We can see that `L_NEAREST` makes a drastic difference for the lane lines' aliasing, compared to `P_LINEAR`, which simply "smooths out" the aliasing so it is less noticeable. Level sampling actually seems to retain the vertical orientation of the lane lines, making its aliasing more pleasing in this case. Still, I think the combination of `L_NEAREST` and `P_LINEAR` does the best here, since that way the blue circle with the white arrow is also smoothed out.
