# CS 184 Project 3-1 Writeup

Tianchen Liu 3034681144, Riley Peterlinz 3036754368

Link: https://cal-cs184-student.github.io/project-webpages-sp23-CardiacMangoes/proj3-1/index.html

## Overview

In Part 1, we generate rays that shoot from the camera into the scene, then compute intersections between those rays and the primitives, writing methods for ray-triangle and ray-sphere intersection. In Part 2, we implement BVH acceleration to speed up ray-primitive intersection by preprocessing the primitives into a hierarchy of bounding boxes. In Part 3, we turn to illumination and implement direct lighting in two ways: uniform hemisphere sampling and light sampling; as we will see, light sampling produces much less noisy images. In Part 4, we implement multiple bounces for indirect lighting. Finally, in Part 5 we use adaptive sampling to sample different parts of the image at different rates depending on the variance of their sampled distributions, leading to faster rendering times at high sample counts.

## Part 1

> Walk through the ray generation and primitive intersection parts of the rendering pipeline.

For each pixel on the screen, the camera generates a ray that "shoots through the pixel from the camera". We imagine a virtual camera sensor in the world; each location on the screen translates to a coordinate on that sensor. We then generate a ray whose origin is the camera position and whose direction is the normalized vector from that origin to the corresponding point on the sensor. Finally, we assign the ray's `min_t` and `max_t` from the near and far clipping planes.

![](https://i.imgur.com/1fylUvB.png)

The code to do this is as follows:

```cpp!
// vec is the direction vector toward the point on the virtual sensor
auto vec = Vector3D(tan(PI/180*hFov/2)*(x-0.5)*2,
                    tan(PI/180*vFov/2)*(y-0.5)*2,
                    -1);
vec.normalize();
auto ray = Ray(pos, c2w*vec);
ray.max_t = fClip;
ray.min_t = nClip;
return ray;
```

Now that we can shoot a ray from the camera through any location on the virtual sensor, for every pixel in image space we sample `ns_aa` rays through that pixel (or rather, through where that pixel would be on the virtual sensor) and average the results:

```cpp!
int num_samples = ns_aa;          // total samples to evaluate
Vector2D origin = Vector2D(x, y); // bottom left corner of the pixel
auto sampler = UniformGridSampler2D();
Vector3D ret;

for (int i = 1; i <= num_samples; i++) {
    auto sample_point = origin + sampler.get_sample();
    auto ray = camera->generate_ray(sample_point.x / sampleBuffer.w,
                                    sample_point.y / sampleBuffer.h);
    ret += est_radiance_global_illumination(ray);
}
sampleBuffer.update_pixel(ret * (1.0/num_samples), x, y);
sampleCountBuffer[x + y * sampleBuffer.w] = num_samples;
```
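As a sanity check on this mapping, here is a minimal standalone sketch (plain C++ with hypothetical `hFov`/`vFov` values, independent of the project's `Camera` class): the image center should map straight down the camera's look axis, and an image corner should land on the sensor corner at $z = -1$.

```cpp!
// Standalone sanity check of the sensor mapping (hypothetical hFov/vFov values).
#include <cassert>
#include <cmath>

int main() {
    const double PI = 3.14159265358979323846;
    const double hFov = 50.0, vFov = 35.0; // degrees; arbitrary example values

    // Image center (x, y) = (0.5, 0.5) maps to sensor offset (0, 0),
    // i.e. camera-space direction (0, 0, -1): straight down the view axis.
    double dx = std::tan(PI / 180 * hFov / 2) * (0.5 - 0.5) * 2;
    double dy = std::tan(PI / 180 * vFov / 2) * (0.5 - 0.5) * 2;
    assert(dx == 0.0 && dy == 0.0);

    // Image corner (x, y) = (1, 1) maps to the sensor corner
    // (tan(hFov/2), tan(vFov/2)) on the plane z = -1.
    double cx = std::tan(PI / 180 * hFov / 2) * (1.0 - 0.5) * 2;
    double cy = std::tan(PI / 180 * vFov / 2) * (1.0 - 0.5) * 2;
    assert(std::abs(cx - std::tan(PI / 180 * hFov / 2)) < 1e-12);
    assert(std::abs(cy - std::tan(PI / 180 * vFov / 2)) < 1e-12);
    return 0;
}
```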
> Explain the triangle intersection algorithm you implemented in your own words.

We first implement the Möller-Trumbore algorithm taught in lecture:

```cpp!
tuple<double, double, double> MollerTrumbore(const Ray &r, Vector3D p1, Vector3D p2, Vector3D p3) {
    auto E1 = p1 - p3;
    auto E2 = p2 - p3;
    auto S  = r.o - p3;
    auto S1 = cross(r.d, E2);
    auto S2 = cross(S, E1);
    auto coeff = 1.0 / dot(S1, E1);
    auto t  = coeff * dot(S2, E2);
    auto b1 = coeff * dot(S1, S);
    auto b2 = coeff * dot(S2, r.d);
    return {t, b1, b2};
}
```

This function takes a ray and the three vertices of a triangle, and returns the `t` of the ray-triangle intersection along with the barycentric coordinates `b1` and `b2`.
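For reference, the linear system this code solves comes from equating the point along the ray with the barycentric form of the triangle, $o + t\vec{d} = b_1 p_1 + b_2 p_2 + (1 - b_1 - b_2) p_3$, and applying Cramer's rule:

$$
\begin{bmatrix} t \\ b_1 \\ b_2 \end{bmatrix}
= \frac{1}{S_1 \cdot E_1}
\begin{bmatrix} S_2 \cdot E_2 \\ S_1 \cdot S \\ S_2 \cdot \vec{d} \end{bmatrix}
$$

where $E_1 = p_1 - p_3$, $E_2 = p_2 - p_3$, $S = o - p_3$, $S_1 = \vec{d} \times E_2$, and $S_2 = S \times E_1$, for ray origin $o$ and direction $\vec{d}$. Each variable maps one-to-one onto the code above.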
We first check that the returned `t` is within the ray's `min_t` and `max_t`:

```cpp!
if (!(0 <= t && r.min_t <= t && t <= r.max_t)) {
    // Ray does not hit the triangle's plane within the valid range
    return false;
}
```

Next we check that the barycentric coordinates are valid (the hit point is inside the triangle):

```cpp!
if (!(0 <= b1 && b1 <= 1 &&
      0 <= b2 && b2 <= 1 &&
      b1 + b2 <= 1)) {
    return false;
}
```

If both checks pass, we set the ray's `max_t` to `t` and fill in the intersection record before returning true:

```cpp!
r.max_t = t;

isect->t = t;
// Interpolate the surface normal with the barycentric coordinates
isect->n = b1 * n1 + b2 * n2 + (1 - b1 - b2) * n3;
isect->primitive = this;
isect->bsdf = get_bsdf();
return true;
```

> Show images with normal shading for a few small .dae files.

Cow:
![](https://i.imgur.com/ewFP2Lt.png)

Beetle:
![](https://i.imgur.com/rV0vOsj.png)

Teapot:
![](https://i.imgur.com/msARC13.png)

## Part 2

> Walk through your BVH construction algorithm. Explain the heuristic you chose for picking the splitting point.

We use the average-centroid heuristic to pick the splitting point:

- We first compute the average centroid of the bounding boxes of all primitives,
- then we find the longest axis of the overall bounding box,
- then we split the primitives based on which side of the average centroid each primitive's centroid falls on along that axis.

Here's our code; see the comments for a walkthrough:

```cpp!
BBox bbox;
int cnt = 0;
Vector3D average_centroid;

// Compute the overall bounding box and the average centroid.
for (auto p = start; p != end; p++) {
    cnt++;
    BBox bb = (*p)->get_bbox();
    bbox.expand(bb);
    average_centroid += bb.centroid();
}

// Base case: if this node holds at most max_leaf_size primitives,
// return a leaf BVHNode (null l and r pointers) covering them.
if (end - start <= max_leaf_size) {
    auto *node = new BVHNode(bbox);
    node->start = start;
    node->end = end;
    return node;
}

// Find the longest axis of the current bounding box.
average_centroid /= cnt;
double delta_x = bbox.extent.x;
double delta_y = bbox.extent.y;
double delta_z = bbox.extent.z;
bool x_longest = (delta_x >= delta_y && delta_x >= delta_z);
bool y_longest = (delta_y >= delta_x && delta_y >= delta_z);
bool z_longest = (delta_z >= delta_x && delta_z >= delta_y);

// Primitives on the "<=" side of the average centroid along the longest
// axis go in the left vector; the rest go in the right vector.
vector<Primitive *> left;
vector<Primitive *> right;
for (auto p = start; p != end; p++) {
    auto centroid = (*p)->get_bbox().centroid();
    if ((x_longest && centroid.x <= average_centroid.x) ||
        (y_longest && centroid.y <= average_centroid.y) ||
        (z_longest && centroid.z <= average_centroid.z)) {
        left.push_back(*p);
    } else {
        right.push_back(*p);
    }
}

// Reorder the primitives so that all left primitives come before
// right primitives.
for (int i = 0; i < left.size(); i++) {
    *(start + i) = left[i];
}
for (int i = 0; i < right.size(); i++) {
    *(start + left.size() + i) = right[i];
}

// Build the node; the left child is constructed from the left
// primitives and the right child from the right primitives.
auto *node = new BVHNode(bbox);
node->start = start;
node->end = end;
node->l = construct_bvh(start, start + left.size(), max_leaf_size);
node->r = construct_bvh(start + left.size(), end, max_leaf_size);
return node;
```

> Show images with normal shading for a few large .dae files that you can only render with BVH acceleration.

![](https://i.imgur.com/k4T5Mb9.png)
![](https://i.imgur.com/9Q8ufKB.png)
![](https://i.imgur.com/WnYcLJg.png)
![](https://i.imgur.com/bR3baaL.png)

> Compare rendering times on a few scenes with moderately complex geometries with and without BVH acceleration. Present your results in a one-paragraph analysis.

The time it takes to collect primitives is roughly the same for both methods (around 0.0011 seconds). Building the BVH from the primitives is, of course, slower than the naive setup (0.0027 seconds vs. 0.0001 seconds), but this difference is negligible in the grand scheme of things. Rendering is far slower for the naive method: on average 63.8475 seconds without BVH acceleration versus 0.0704 seconds with it. The average rays/second is inverse to this ratio, since the number of rays stays the same. The speedup comes from drastically reducing the number of intersection tests per ray: without BVH acceleration the renderer performs 1367.904394 intersection tests per ray on average, while with BVH acceleration it performs only 2.95974625.
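To see where those savings come from, here is a simplified sketch of BVH traversal (assuming a `BBox::intersect(ray, t0, t1)` helper and the `BVHNode` fields used above; illustrative, not our exact project code). A ray that misses a node's bounding box skips every primitive underneath that node:

```cpp!
// Illustrative BVH traversal sketch; assumes BBox::intersect(r, t0, t1)
// and the BVHNode fields (bb, l, r, start, end) used above.
bool intersect_node(const Ray &r, Intersection *isect, BVHNode *node) {
    double t0 = r.min_t, t1 = r.max_t;
    // Cheap box test: on a miss, the entire subtree is skipped.
    if (!node->bb.intersect(r, t0, t1))
        return false;

    if (node->isLeaf()) {
        bool hit = false;
        for (auto p = node->start; p != node->end; p++)
            hit = (*p)->intersect(r, isect) || hit; // each hit shrinks r.max_t
        return hit;
    }
    bool hit_l = intersect_node(r, isect, node->l);
    bool hit_r = intersect_node(r, isect, node->r);
    return hit_l || hit_r;
}
```

Because most rays fail the box test for all but a few nodes, a typical ray reaches only a handful of leaves, which is why the average drops from roughly 1368 primitive tests per ray to about 3.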
## Part 3

> Walk through both implementations of the direct lighting function. Show some images rendered with both implementations of the direct lighting function.

Hemisphere sampling: We first find the intersection between the ray and the primitive. Then we sample `num_samples` directions uniformly at random on the hemisphere around the normal at the intersection. For each sampled direction we generate a ray that starts from the intersection point and shoots in that direction. If this ray hits an emissive object (a light source), we add its contribution using the reflection equation (https://cs184.eecs.berkeley.edu/sp22/lecture/13-23/global-illumination-and-path-tra); `sample.z` is the `cos(theta)` term, since the sample lives in the local coordinate frame of the normal. Finally, we divide the accumulated sum by the number of samples and by the uniform hemisphere pdf:

```cpp!
Vector3D result;
for (int i = 1; i <= num_samples; i++) {
    auto sample = hemisphereSampler->get_sample();
    auto ray_after_bounce = Ray(hit_p, o2w * sample, r.max_t);
    ray_after_bounce.min_t = EPS_F;
    Intersection isect_after_bounce;
    bool intersected = bvh->intersect(ray_after_bounce, &isect_after_bounce);
    if (intersected) {
        result += sample.z * isect_after_bounce.bsdf->get_emission()
                * isect.bsdf->f(w_out, sample);
    }
}
// Divide by the number of samples and by the uniform-hemisphere pdf 1/(2*PI)
return result * (2.0 * PI) / num_samples;
```

Importance sampling: we iterate over the lights in the scene and, for each light, cast shadow rays from the hit point toward the light; if a shadow ray reaches the light without any collision, we take that light's sample. This helps because we only sample directions that can actually reach a light, so the result has far less noise: we no longer waste samples on empty space that contributes nothing. For area lights we take `ns_area_light` samples; for point lights we sample only once, since there is a single direction from the hit point to a point light. Each sample is scaled by the BSDF and by the cosine of the angle between the surface normal and the direction to the light, and divided by the pdf of the sample, which accounts for how likely that direction was to be drawn.

```cpp!
for (auto light : scene->lights) {
    Vector3D wi;        // direction from p to the light source
    double distToLight; // distance from p to the light source
    double pdf;         // probability density evaluated at the wi direction
    Ray ray_after_bounce;
    float costheta;     // cosine of the angle between the surface normal and wi
    Vector3D L;
    for (int s = 0; s < ns_area_light; s++) {
        Vector3D sample = light->sample_L(hit_p, &wi, &distToLight, &pdf);
        wi.normalize();
        auto wi_object = w2o * wi; // the BSDF expects object-space directions
        costheta = dot(wi, isect.n);
        // Shadow ray from hit_p toward the light
        ray_after_bounce = Ray(hit_p, wi, 1);
        ray_after_bounce.max_t = (double) distToLight - EPS_F;
        ray_after_bounce.min_t = EPS_F;
        Intersection isect_after_bounce;
        // If nothing blocks the ray before the light, add its radiance
        if (!(bvh->intersect(ray_after_bounce, &isect_after_bounce)) && costheta > 0) {
            L += sample * isect.bsdf->f(w_out, wi_object) * costheta / pdf;
        }
        if (light->is_delta_light()) {
            break;
        }
    }
    if (light->is_delta_light()) {
        L_out += L;
    } else {
        L_out += L / ns_area_light;
    }
}
return L_out;
```
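Both implementations are the same Monte Carlo estimate of the reflection equation; they differ only in the distribution the sample directions $w_j$ are drawn from:

$$
L_{out}(p, w_{out}) \approx \frac{1}{N} \sum_{j=1}^{N} \frac{f_r(p, w_j \to w_{out}) \, L_i(p, w_j) \, \cos\theta_j}{p(w_j)}
$$

For uniform hemisphere sampling $p(w_j) = \frac{1}{2\pi}$, while for light sampling $p(w_j)$ is the pdf returned by `sample_L`. Drawing $w_j$ toward the lights concentrates samples where $L_i$ is nonzero, which is why the images below are much cleaner at equal sample counts.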
Importance:
![](https://i.imgur.com/8liEXV9.png)
Hemisphere:
![](https://i.imgur.com/q314Mje.png)

Importance:
![](https://i.imgur.com/upweG0x.png)
Hemisphere:
![](https://i.imgur.com/zyLFCPU.png)

Importance:
![](https://i.imgur.com/qOSgql7.png)
Hemisphere: N/A, because a point light source cannot be rendered with hemisphere sampling.

> Focus on one particular scene with at least one area light and compare the noise levels in soft shadows when rendering with 1, 4, 16, and 64 light rays (the -l flag) and with 1 sample per pixel (the -s flag) using light sampling, not uniform hemisphere sampling.

1 light ray:
![](https://i.imgur.com/5o9ZQuL.png)
4 light rays:
![](https://i.imgur.com/rT9CDXl.png)
16 light rays:
![](https://i.imgur.com/6QTV3Tm.png)
64 light rays:
![](https://i.imgur.com/cbjiCX1.png)

The noise level decreases very noticeably, and the soft shadows get much more realistic, as the number of light rays goes up.

> Compare the results between uniform hemisphere sampling and lighting sampling in a one-paragraph analysis.

First of all, uniform hemisphere sampling cannot render a point light source at all. For area light sources, hemisphere sampling generally produces a much noisier image that also reads somewhat darker because of the noise. The shadows from light sampling are smoother for the same reason. In general, light sampling creates a much more realistic image.

## Part 4

For indirect lighting we let a ray of light bounce around the scene repeatedly. The zeroth bounce is any emissive material. The first bounce is "direct lighting" from the light source onto the objects in the scene. Any further bounces are "indirect lighting", from light that reflects off a material and still has enough energy to illuminate other materials.

We sum the zero-bounce emission with direct and indirect lighting like so:

```cpp!
Vector3D PathTracer::est_radiance_global_illumination(const Ray &r) {
    Intersection isect;
    Vector3D L_out;
    if (!bvh->intersect(r, &isect))
        return L_out;
    L_out = zero_bounce_radiance(r, isect);
    L_out += at_least_one_bounce_radiance(r, isect);
    return L_out;
}
```

For indirect lighting, we recursively sample from the prior bounce's location and direction. To build a proper estimator for the indirect lighting `L`, we evaluate the BSDF's reflectance `f` along with a cosine term using the direction `wi` returned by `sample_f`, converting that direction to world coordinates for the recursive ray. We also divide by the `pdf` of the `sample_f` draw and by `cpdf`, the continuation probability of Russian roulette, which randomly terminates the bounces.

```cpp!
Vector3D PathTracer::at_least_one_bounce_radiance(const Ray &r, const Intersection &isect) {
    Matrix3x3 o2w;
    make_coord_space(o2w, isect.n);
    Matrix3x3 w2o = o2w.T();

    Vector3D hit_p = r.o + r.d * isect.t;
    Vector3D w_out = w2o * (-r.d);
    Vector3D L_out;

    L_out = one_bounce_radiance(r, isect);

    double cpdf = 0.8; // Russian roulette continuation probability
    Vector3D wi;
    double pdf;
    if (coin_flip(cpdf) && r.depth > 1) {
        Vector3D f = isect.bsdf->sample_f(w_out, &wi, &pdf);
        float costheta = wi.z; // wi is in the local frame, so z = cos(theta)
        Ray ray_after_bounce = Ray(hit_p, o2w * wi);
        ray_after_bounce.depth = r.depth - 1;
        ray_after_bounce.min_t = EPS_F;
        Intersection isect_after_bounce;
        if (bvh->intersect(ray_after_bounce, &isect_after_bounce) && costheta > 0) {
            Vector3D L = at_least_one_bounce_radiance(ray_after_bounce, isect_after_bounce);
            L_out += L * f * costheta / pdf / cpdf;
        }
    }
    return L_out;
}
```
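A quick note on why dividing by `cpdf` is the right normalization: Russian roulette keeps the estimator unbiased because the recursive branch runs with probability $p_{rr}$ (here `cpdf` = 0.8) and its contribution is scaled up by $1/p_{rr}$, so in expectation

$$
\mathbb{E}\left[X_{rr}\right] = p_{rr} \cdot \frac{\mathbb{E}[X]}{p_{rr}} + (1 - p_{rr}) \cdot 0 = \mathbb{E}[X]
$$

where $X$ is the recursive radiance estimate. Random termination adds variance but no bias, unlike simply truncating every path at a fixed depth.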
### Results

Here are two results rendered at 1024 samples:

![](https://i.imgur.com/b77JqZN.png)
![](https://i.imgur.com/zYrWpoU.png)

The other results use CBbunny.dae, which appears in the next sections.

### Light Bounce Progression

Here we show the first three light bounces individually for `CBbunny.dae`.

0th light bounce:
![](https://i.imgur.com/wYrRcmM.png)
1st light bounce (direct lighting):
![](https://i.imgur.com/n3YlTX9.png)
2nd light bounce (indirect lighting):
![](https://i.imgur.com/lciLXnK.png)

Indirect lighting carries much less light, since it comes only from diffuse inter-reflection rather than direct illumination.

### Increasing Depth

Now we show how increasing the bounce depth allows for more and more realistic lighting. We altered Russian roulette to continue with 100% probability for these renders.

Depth = 0:
![](https://i.imgur.com/IqI53Lk.png)
Depth = 1:
![](https://i.imgur.com/8ipNudh.png)
Depth = 2:
![](https://i.imgur.com/gsOu4Qj.png)
Depth = 3:
![](https://i.imgur.com/1n2RZpW.png)
Depth = 100:
![](https://i.imgur.com/3lzYbeb.png)

Notice that the result plateaus as depth increases. The materials in the scene are all Lambertian, so each bounce absorbs much of the light's energy, and by the 100th bounce there is little light left to illuminate the scene. We also raised the Russian roulette continuation probability progressively so that higher bounce counts are more likely to be reached, e.g. `cpdf = 0.99` for 100 bounces.

### Sample-per-Pixel Rates

Here we vary the number of samples per pixel to show how the noise falls. We use 4 light rays.

1 sample:
![](https://i.imgur.com/2ETyJ0D.png)
2 samples:
![](https://i.imgur.com/iCytEai.png)
4 samples:
![](https://i.imgur.com/3IDlk1M.png)
8 samples:
![](https://i.imgur.com/kaHhfeX.png)
16 samples:
![](https://i.imgur.com/xNcyuYa.png)
32 samples:
![](https://i.imgur.com/iXnL5Gt.png)
64 samples:
![](https://i.imgur.com/VNvfccV.png)
1024 samples:
![](https://i.imgur.com/kAZnZ9f.png)

As expected, increasing the sample count yields cleaner images.

## Part 5

With indirect lighting, our renders now look more realistic, but they are still noisy at lower sample rates. We want high sample rates for locations that need them (like shadows) and low sample rates for locations that converge quickly. This is what adaptive sampling does.

We stop sampling once the pixel's value has converged within the 95% confidence interval around the sample mean $\mu$. Formally, we want the value $v$ of the pixel to lie within

$$\mu - I < v < \mu + I$$

where

$$I = 1.96 \cdot \frac{\sigma}{\sqrt{n}},$$

$\sigma$ is the standard deviation of the samples, and $n$ is the number of samples. We terminate the sampling when

$$I \leq maxTolerance \cdot \mu,$$

where $maxTolerance$ is a chosen constant, usually around 0.05.

We implement this in `raytrace_pixel` inside the sampling loop, checking only every `samplesPerBatch` samples so the check itself doesn't slow down the render. `s1` is the running sum of the illuminations and `s2` is the running sum of the squared illuminations, which give the sample variance as $\sigma^2 = \frac{1}{n-1}\left(s_2 - \frac{s_1^2}{n}\right)$.

```cpp!
for (int i = 1; i <= num_samples; i++, num_sampled++) {
    if (i % samplesPerBatch == 0 && i > 1) {
        float mu  = s1 / (float)i;
        float var = (s2 - ((s1 * s1) / i)) / ((float)i - 1);
        float I   = 1.96 * sqrt(var / (float)i);
        if (I <= maxTolerance * mu) {
            break;
        }
    }
    auto sample_point = origin + gridSampler->get_sample();
    auto ray = camera->generate_ray(sample_point.x / sampleBuffer.w,
                                    sample_point.y / sampleBuffer.h);
    ray.depth = max_ray_depth;
    Vector3D radiance_est = est_radiance_global_illumination(ray);
    float illum_est = radiance_est.illum();
    s1 += illum_est;
    s2 += illum_est * illum_est;
    radiance += radiance_est;
}
```

We also need to normalize the final radiance by the number of samples actually taken, not the total number of samples possible:

```cpp!
sampleBuffer.update_pixel(radiance * (1.0 / num_sampled), x, y);
sampleCountBuffer[x + y * sampleBuffer.w] = num_sampled;
```

### Results

These renders used 2048 samples, 1 light ray, a max depth of 5, and a maxTolerance of 0.05.

Bunny sample rate:
![](https://i.imgur.com/7MwUHd9.png)
Bunny rendered result:
![](https://i.imgur.com/YlGeL6e.png)
Spheres sample rate:
![](https://i.imgur.com/piW9Jrs.png)
Spheres rendered result:
![](https://i.imgur.com/Pl8FoNf.png)

Notice that in both scenes, the most samples go where indirect lighting dominates: the shadows and the ceiling.

## Collaboration

Tianchen coded parts 1 through 3.3 and Riley coded parts 3.4 through 5. We worked together on debugging individual parts. This was the most interesting project so far; it does so many complex things that the results are often surprising.

Link: https://cal-cs184-student.github.io/project-webpages-sp23-CardiacMangoes/proj3-1/index.html