# Tutorial / Implementation: Ray Tracing in One Weekend the Book Series

> Reference:
> * Ray Tracing in One Weekend
>   https://raytracing.github.io/

*This is my study and development note for the tutorial series above. I will include parallel programming and GPGPU (OpenCL) if possible. The motive is to find an interesting project to practice parallel programming, concurrent programming, and GPGPU programming.*

# Ray Tracing in One Weekend

*Apparently, I am too dumb to be able to finish this book in one weekend.*

## 1. Overview

> GitHub Repository
> * Ray Tracing in One Weekend Book Series
>   https://github.com/RayTracing/raytracing.github.io/

## 2. Output an Image

### PPM Format

```cpp
std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";
...
std::cout << ir << ' ' << ig << ' ' << ib << '\n'; // for each pixel
```

Pixels are written from the left of the first row to its right, then row by row down the image. The channel order of each pixel is `rgb`.

### Compile & Run the Program

Use the following makefile on a Linux system.

```
main: main.o
	@g++ -o main *.o -g -m64 -Wall -std=c++20 -march=native

main.o: main.cpp
	@g++ -c -g -m64 main.cpp -Wall -std=c++20 -march=native
```

Redirect the output to a `.ppm` file.

```sh
./main > image.ppm
```

To view the image, the following website is used.
https://www.cs.rhodes.edu/welshc/COMP141_F16/ppmReader.html

![image](https://hackmd.io/_uploads/HJhyiGD1bx.png)

### Adding a Progress Indicator

Notice that we are using the standard output stream (`std::cout`) to output the image itself, so we use the logging output stream (`std::clog`) to show the progress. In the book, the output on `std::clog` is combined with `std::flush` so the progress line updates in place. However, I could not make that work correctly, so I don't flush the output and simply print every update on a new line.

## 3. The `vec3` Class

**TODO: GitHub Repo**

### Implementation Considerations

**Naming**

For overloaded operators, I prefer using `lhs` and `rhs` for the parameters. Refer to the following code snippet.

```cpp
inline vec3 operator+(const vec3 &lhs, const vec3 &rhs) {
    return vec3(lhs.e[0] + rhs.e[0], lhs.e[1] + rhs.e[1], lhs.e[2] + rhs.e[2]);
}
```

**Using `float` Instead of `double`**

Notice that in C++, a floating-point literal without a suffix is a `double`; if we want a `float`, we need to write `255.999f` with the `f` suffix.

```cpp
void write_color(std::ostream &out, const color &pixel_color) {
    auto r = pixel_color.x();
    auto g = pixel_color.y();
    auto b = pixel_color.z();

    int rbyte = static_cast<int>(255.999f * r);
    int gbyte = static_cast<int>(255.999f * g);
    int bbyte = static_cast<int>(255.999f * b);

    out << rbyte << ' ' << gbyte << ' ' << bbyte << '\n';
}
```

## 4. Rays, a Simple Camera and Background

:::info
:information_source: **Computer Graphics Basics**
* Introduction to Computer Graphics (電腦圖學概論): Rendering - Photorealistic Rendering
  https://hackmd.io/@nobodyprogramming/Hk15dvZwu
* Computer Graphics I: Rendering – Rasterization Vs. Ray Tracing
  https://alu2019.home.blog/2020/11/05/computer-graphics-i-rendering-rasterization-vs-ra/
:::

### The `ray` Class

In reality, the light we see comes from a light source and reflects off objects along the way until it reaches our eyes. Ray tracing simulates the same thing in reverse, following a ray from the eye back toward the light source. In this section, `P(t) = A + tb` is used to describe a ray, where `P` is a 3D position along a line in 3D, `A` is the ray origin, and `b` is the ray direction. The ray parameter `t` is a real number that moves `P` along the 3D line.

```cpp
#pragma once

#include "vec3.h"

using point3 = vec3;

class ray {
public:
    ray() = default;
    ray(const point3 &origin, const vec3 &direction) : origin_(origin), direction_(direction) {}

    const point3 &origin() const { return origin_; }
    const vec3 &direction() const { return direction_; }

    point3 at(float t) const { return origin_ + t * direction_; }

private:
    point3 origin_;
    vec3 direction_;
};
```
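A quick usage sketch of the class with hypothetical values; this assumes the `vec3` utilities from the previous section, including a stream output operator:

```cpp
#include <iostream>
#include "ray.h"

int main() {
    // A ray starting at the origin, pointing down the negative z-axis.
    ray r(point3(0.0f, 0.0f, 0.0f), vec3(0.0f, 0.0f, -1.0f));

    // P(t) = A + t*b evaluated at t = 2 gives the point (0, 0, -2).
    point3 p = r.at(2.0f);
    std::cout << p << '\n'; // assumes vec3 provides operator<<
    return 0;
}
```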
### Sending Rays Into the Scene

> At its core, a ray tracer sends rays through pixels and computes the color seen in the direction of those rays. The involved steps are
>
> 1. Calculate the ray from the "eye" through the pixel,
> 2. Determine which objects the ray intersects, and
> 3. Compute a color for the closest intersection point.

:::info
:information_source: **Viewport**
* Viewport wiki
  https://en.wikipedia.org/wiki/Viewport
:::

> The viewport is a virtual rectangle in the 3D world that contains the grid of image pixel locations. If pixels are spaced the same distance horizontally as they are vertically, the viewport that bounds them will have the same aspect ratio as the rendered image.

The distance between two adjacent pixels is called the pixel spacing, and **square pixels** are the standard.

:::success
:bulb: **Ideal Ratio**
In the book, the author uses 16:9 as the ideal ratio. The actual ratio might not match the ideal one exactly, since `image_width` and `image_height` can only be integers, so the computed height is rounded to one.
:::

![image](https://hackmd.io/_uploads/HJmnncyebl.png)

A right-handed coordinate system is used, with the x-axis to the right, the y-axis up, and the negative z-axis pointing in the viewing direction. However, this conflicts with image coordinates, which run from top-left to bottom-right, so the y-axis is inverted.

> Our pixel grid will be inset from the viewport edges by half the pixel-to-pixel distance. This way, our viewport area is evenly divided into width × height identical regions. Here's what our viewport and pixel grid look like:

![image](https://hackmd.io/_uploads/SkvS_qyebx.png)

> Reference: Book Figure 4

:::warning
Our pixel grid is inset from the viewport edges by half the pixel spacing.

![image](https://hackmd.io/_uploads/B1Sos51xWx.png =100x100)
:::

```cpp
color ray_color_gradient(const ray &r) {
    vec3 unit_direction = unit_vector(r.direction());
    float a = 0.5f * (unit_direction.y() + 1.0f);
    return (1.0f - a) * color(1.0f, 1.0f, 1.0f) + a * color(0.5f, 0.7f, 1.0f);
}
```

Notice that after scaling the ray direction to unit length, `y` satisfies `-1.0 < y < 1.0`, but we need `0.0 <= a <= 1.0`. Thus, `a = (y + 1.0) / 2`.

To create a gradient, the following formula is used: `blendedValue = (1 - a) * startValue + a * endValue`, with `0.0 <= a <= 1.0`. Here `startValue` is one color and `endValue` is the other.

![image](https://hackmd.io/_uploads/H1TcaFlgWx.png)

## 5. Adding a Sphere

This part is mostly on the math side of the program. Please refer to [the original article](https://raytracing.github.io/books/RayTracingInOneWeekend.html#addingasphere) for details.

![image](https://hackmd.io/_uploads/rJWQRLfx-g.png)

Notice that the formula above has zero to two real solutions, depending on the discriminant `b^2 - 4ac`, which can be positive (2 roots, 2 intersections), negative (no intersection), or zero (1 root, 1 intersection).
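A sketch of the corresponding intersection test, following the book's first version but in the `float` variant used throughout this note; it only checks the sign of the discriminant:

```cpp
// Returns true if ray r hits the sphere with the given center and radius.
bool hit_sphere(const point3 &center, float radius, const ray &r) {
    vec3 oc = center - r.origin();
    float a = dot(r.direction(), r.direction());
    float b = -2.0f * dot(r.direction(), oc);
    float c = dot(oc, oc) - radius * radius;
    float discriminant = b * b - 4.0f * a * c;
    return discriminant >= 0.0f;
}
```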
![image](https://hackmd.io/_uploads/HJW57vfxWe.png)

Notice that the solution `t` can be negative, so this code doesn't distinguish between objects in front of the camera and objects behind the camera.

## 6. Surface Normals and Multiple Objects

### Shading with Surface Normals

A surface normal (vector) is a vector that is perpendicular to the surface at the point of intersection. It basically shows the orientation of the object's surface at the point where a view ray intersects the object under observation.

> The surface angle can be represented as a line protruding in a perpendicular direction from the surface, and this direction (which is a vector) relative to the surface is called a "surface normal", or simply, a normal.
> https://docs.unity3d.com/6000.2/Documentation/Manual/StandardShaderMaterialParameterNormalMapSurfaceNormals.html

![image](https://hackmd.io/_uploads/B1UUF2E-bx.png =500x)

Before, we only checked whether a view ray intersects the object, but now we need to calculate `t`, which identifies the hit point. The object is currently located directly in front of the camera, so the closest hit point (smallest `t`) is the one we want.

> A common trick used for visualizing normals (because it's easy and somewhat intuitive to assume `n` is a unit length vector — so each component is between −1 and 1) is to map each component to the interval from 0 to 1, and then map (x,y,z) to (red,green,blue).

Notice that the above `n` is the `P - C` vector in the diagram.

```cpp
color ray_color_surface_normal(const ray &r) {
    float t = hit_sphere_surface_normal(point3(0, 0, -1), 0.5f, r);
    if (t >= 0.0f) {
        vec3 N = unit_vector(r.at(t) - vec3(0, 0, -1)); // N = P - C
        return 0.5f * color(N.x() + 1, N.y() + 1, N.z() + 1);
    }

    // Background
    vec3 unit_direction = unit_vector(r.direction());
    float a = 0.5f * (unit_direction.y() + 1.0f);
    return (1.0f - a) * color(1.0f, 1.0f, 1.0f) + a * color(0.5f, 0.7f, 1.0f);
}
```

![image](https://hackmd.io/_uploads/HyixLCLbbx.png)

### Simplifying the Ray-Sphere Intersection Code

In the previous code, we calculated `b` as `float b = -2.0f * dot(r.direction(), oc);`. Notice the `-2.0f` factor in it. If we let `h = b / -2`, we can simplify the formula into:

![image](https://hackmd.io/_uploads/B1PjiA8b-e.png)

```cpp
float hit_sphere_surface_normal_simplify(const point3 &center, float radius, const ray &r) {
    vec3 oc = center - r.origin();
    float a = r.direction().length_squared();
    float h = dot(r.direction(), oc);
    float c = oc.length_squared() - (radius * radius);
    float discriminant = (h * h) - (a * c);

    if (discriminant < 0.0f) {
        return -1.0f;
    } else {
        return ((h - std::sqrt(discriminant)) / a);
    }
}
```

### An Abstraction for Hittable Objects

We introduce an abstract class `hittable` for anything a ray might hit, plus a `hit_record` that stores the details of an intersection with a view ray (see the sketch at the end of this subsection).

```cpp
// Find the "nearest" root that lies in the acceptable range.
float root = (h - sqrtd) / a;
if (root <= ray_tmin || ray_tmax <= root) {
    root = (h + sqrtd) / a;
    if (root <= ray_tmin || ray_tmax <= root) {
        return false;
    }
}
```

The snippet above looks complicated, but the idea is simple: if `(h - sqrtd) / a` does not lie in the acceptable range, check `(h + sqrtd) / a`; if that also fails, no root lies in the range. Notice that `(h - sqrtd) / a` is the smaller and therefore nearest possible root, which is why it is checked first.
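A minimal sketch of the two types discussed above, following the book's interface but in the `float` variant used in this note (`front_face`, the material pointer, and the `interval` parameter are added in later sections):

```cpp
#pragma once

#include "ray.h"

class hit_record {
public:
    point3 p;    // hit point
    vec3 normal; // surface normal at the hit point
    float t;     // ray parameter at the hit point
};

class hittable {
public:
    virtual ~hittable() = default;

    // Returns true if r hits the object with t in (ray_tmin, ray_tmax),
    // filling rec with the intersection details.
    virtual bool hit(const ray &r, float ray_tmin, float ray_tmax, hit_record &rec) const = 0;
};
```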
### Front Faces Versus Back Faces

We need to know which side of the surface the ray is coming from. For example, a glass ball needs to render rays coming from the inside and from the outside differently.

![image](https://hackmd.io/_uploads/HkkpuMuZ-g.png)

> If we decide to have the normals always point out, then we will need to determine which side the ray is on when we color it. We can figure this out by comparing the ray with the normal. **If the ray and the normal face in the same direction, the ray is inside the object, if the ray and the normal face in the opposite direction, then the ray is outside the object.** This can be determined by **taking the dot product of the two vectors, where if their dot is positive, the ray is inside the sphere**.

```cpp
/**
 * @brief Sets the hit record normal vector.
 * @note NOTE: the parameter outward_normal is assumed to have unit length.
 */
void set_face_normal(const ray &r, const vec3 &outward_normal) {
    front_face = dot(r.direction(), outward_normal) < 0.0f;
    normal = front_face ? outward_normal : -outward_normal;
}
```

### A List of Hittable Objects

```cpp
bool hit(const ray &r, float ray_tmin, float ray_tmax, hit_record &rec) const override {
    hit_record temp_rec;
    bool hit_anything = false;
    float closest_so_far = ray_tmax;

    for (const auto &object : objects) {
        if (object->hit(r, ray_tmin, closest_so_far, temp_rec)) {
            hit_anything = true;
            closest_so_far = temp_rec.t;
            rec = temp_rec;
        }
    }
    return hit_anything;
}
```

The `hittable_list` class inherits from the `hittable` class and overrides the `hit()` function as above. Notice that this function checks every object in the `objects` vector to see whether anything is hit, and records the `hit_record` of the closest hit.

### Some New C++ Features & Common Constants and Utility Functions

*These sections are basically code restructuring.*

* **`rtweekend.h`**

```cpp
/**
 * @note Usually I would name this header file main.h
 */
#pragma once

#include <cmath>
#include <iostream>
#include <limits>
#include <memory>

using std::make_shared;
using std::shared_ptr;

const float infinity = std::numeric_limits<float>::infinity();
const float pi = 3.1415926535897932385f;

inline float degrees_to_radian(float degrees) { return degrees * pi / 180.0f; }

// Common Headers
#include "color.h"
#include "interval.h"
#include "ray.h"
#include "vec3.h"
```

In the snippet above, the common headers must be placed at the end of the file: `interval.h` uses `const float infinity`, so the constant has to be declared before `interval.h` is included (each of those headers also includes `rtweekend.h`).

```cpp
/**
 * @brief Can be called with a `hittable_list` or a single `hittable` object.
 */
color ray_color(const ray &r, const hittable &world) {
    hit_record rec;
    if (world.hit(r, 0, infinity, rec)) {
        return 0.5f * (rec.normal + color(1.0f, 1.0f, 1.0f));
    }

    vec3 unit_direction = unit_vector(r.direction());
    float a = 0.5f * (unit_direction.y() + 1.0f);
    return (1.0f - a) * color(1.0f, 1.0f, 1.0f) + a * color(0.5f, 0.7f, 1.0f);
}
```

For now (12/02/2025), I will follow the author's structure; however, I would not personally use this code structure, and this implementation will be changed in the final version.

:::info
:information_source: **Implementation Detail**

```cpp
class interval {
public:
    float min, max;

    interval() : min(+infinity), max(-infinity) {} // Default interval is empty
    interval(float min, float max) : min(min), max(max) {}

    float size() const { return max - min; }
    bool contains(float x) const { return min <= x && x <= max; }
    bool surrounds(float x) const { return min < x && x < max; }

    static const interval empty, universe; // must be static; see the note below
};

const interval interval::empty = interval(+infinity, -infinity);
const interval interval::universe = interval(-infinity, +infinity);
```

Notice that `static const interval empty, universe` declares members of the class's own type inside the class itself. This is only allowed because they are `static`: a non-static member of the class's own type would give the object unbounded size (an infinite recursion of members).
:::
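A small usage sketch with hypothetical values to highlight the difference between `contains()` and `surrounds()`, assuming the `rtweekend.h` shown above:

```cpp
#include "rtweekend.h"

int main() {
    interval positive(0.001f, infinity);

    bool a = positive.surrounds(0.001f); // false: surrounds() excludes the endpoints
    bool b = positive.contains(0.001f);  // true:  contains() includes them
    bool c = positive.surrounds(42.0f);  // true

    std::cout << a << ' ' << b << ' ' << c << '\n'; // prints 0 1 1
    return 0;
}
```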
## 7. Moving Camera Code Into Its Own Class

**Camera Class**

* Constructs and dispatches rays into the world.
* Uses the results of these rays to construct the rendered image.

Basically, this moves the camera-related code in `main.cpp` into `camera.h`.

## 8. Antialiasing

:::success
:bulb: **Prerequisite**
* What is Anti-aliasing?
  https://www.geeksforgeeks.org/computer-graphics/antialiasing/
* Anti-aliasing (反鋸齒) Wiki
  https://zh.wikipedia.org/zh-tw/%E5%8F%8D%E9%8B%B8%E9%BD%92
:::

Currently we shoot only a single ray through the center of each pixel, which is known as point sampling. The author provides a good example to visualize the problem.

![image](https://hackmd.io/_uploads/SkttpTCZWx.png)

As in the illustration above, if we take only four samples of an 8x8 checker grid, the result is strongly biased: even though the samples are spread almost evenly across the grid, each sample can still only return black or white. The human eye, looking at the same region from afar, instead blends the nearby colors into grey.

> We'll adopt the simplest model: sampling the square region centered at the pixel that extends halfway to each of the four neighboring pixels.

### Generating Pixels with Multiple Samples

Since we are going to add multiple samples together and average them, we need to make sure the resulting value stays within the bound [0, 1].

```cpp
static const interval intensity(0.000f, 0.999f);
int rbyte = static_cast<int>(256.0f * intensity.clamp(r));
int gbyte = static_cast<int>(256.0f * intensity.clamp(g));
int bbyte = static_cast<int>(256.0f * intensity.clamp(b));
```

![image](https://hackmd.io/_uploads/H1RjP31zZl.png)

```cpp
/**
 * @brief Constructs a camera ray originating from the camera center and directed
 *        at a randomly sampled point around the pixel location (i, j).
 */
ray get_ray(int i, int j) const {
    vec3 offset = sample_square();
    point3 pixel_sample = pixel00_loc + ((i + offset.x()) * pixel_delta_u) + ((j + offset.y()) * pixel_delta_v);
    point3 ray_origin = camera_center;
    vec3 ray_direction = pixel_sample - ray_origin;
    return ray(ray_origin, ray_direction);
}
```

Unsurprisingly, this antialiasing implementation makes the program run significantly slower, since every pixel needs multiple samples.

```cpp
for (int j = 0; j < image_height; j++) {
    std::clog << "\rScanlines remaining: " << (image_height - j) << std::endl;
    for (int i = 0; i < image_width; i++) {
        color pixel_color(0.0f, 0.0f, 0.0f);
        for (int s = 0; s < samples_per_pixel; s++) {
            ray r = get_ray(i, j);
            pixel_color += ray_color(r, world);
        }
        write_color(std::cout, pixel_samples_scale * pixel_color);
    }
}
```

## 9. Diffuse Materials

:::success
:bulb: **Prerequisite**
* Diffuse reflection (漫反射) Wiki
  https://zh.wikipedia.org/wiki/%E6%BC%AB%E5%8F%8D%E5%B0%84
:::

### A Simple Diffuse Material

> Let's start with the most intuitive: a surface that randomly bounces a ray equally in all directions. For this material, a ray that hits the surface has an equal probability of bouncing in any direction away from the surface.

We need to generate random vectors on the hemisphere with the **rejection method** (see the sketch after this list).

1. Generate a random vector with `x`, `y`, `z` all in `[-1, +1]`, rejecting it until it falls inside the unit sphere.
2. Normalize this vector to extend it to the sphere surface.
3. Invert the normalized vector if it falls onto the wrong hemisphere. This can be achieved by checking the dot product of the random vector and the surface normal: if the dot product is negative, invert the vector.

![image](https://hackmd.io/_uploads/BkgAdc_-f-e.png)
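A sketch of the rejection method above, adapted from the book to the `float` variant used in this note (assumes the `random_float(min, max)` helper that `random_in_unit_disk()` also uses later):

```cpp
#include <cmath>

// Steps 1 + 2: rejection-sample a point inside the unit sphere, then normalize it.
inline vec3 random_unit_vector() {
    while (true) {
        vec3 p = vec3(random_float(-1.0f, 1.0f), random_float(-1.0f, 1.0f), random_float(-1.0f, 1.0f));
        float lensq = p.length_squared();
        // The lower bound rejects near-zero vectors that would blow up when normalized.
        if (1e-20f < lensq && lensq <= 1.0f) {
            return p / std::sqrt(lensq);
        }
    }
}

// Step 3: flip the vector if it lands on the wrong hemisphere.
inline vec3 random_on_hemisphere(const vec3 &normal) {
    vec3 on_unit_sphere = random_unit_vector();
    return dot(on_unit_sphere, normal) > 0.0f ? on_unit_sphere : -on_unit_sphere;
}
```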
```cpp
color ray_color(const ray &r, const hittable &world) const {
    hit_record rec;
    if (world.hit(r, interval(0, infinity), rec)) {
        vec3 direction = random_on_hemisphere(rec.normal);
        return 0.5f * ray_color(ray(rec.p, direction), world);
    }

    vec3 unit_direction = unit_vector(r.direction());
    float a = 0.5f * (unit_direction.y() + 1.0f);
    return (1.0f - a) * color(1.0f, 1.0f, 1.0f) + a * color(0.5f, 0.7f, 1.0f);
}
```

As the code above shows, when the view ray hits the object, a new ray with a random direction is generated at the hit point on the surface.

### Limiting the Number of Child Rays

As the previous section showed, `ray_color` is called recursively, and the recursion only stops when a ray fails to hit anything. A long bounce chain might therefore cause a stack overflow, so the recursion is bounded with a maximum depth.

![image](https://hackmd.io/_uploads/rk1dzywGWl.png)

:::warning
There are some artifacts in the final image, which I think are caused by overflow or by the precision of `float`, since `double` is used in the original book.

*TODO: I will dig into it later, since I see this project as practice for GPGPU programming.*
:::

### Fixing Shadow Acne

Floating-point precision error can place the calculated origin of a bounced ray just above or below the surface. If the ray origin is just below the surface, the ray hits the same surface again, a hit that should not exist. Therefore, we need a tolerance that ignores hits very close to the intersection point.

![image](https://hackmd.io/_uploads/SydR1g_MZg.png)

```cpp
hit_record rec;
if (world.hit(r, interval(0.001f, infinity), rec)) { // recursion stop point
    vec3 direction = random_on_hemisphere(rec.normal);
    return 0.5f * ray_color(ray(rec.p, direction), (depth - 1), world);
}
```

In the above code, the tolerance is set to `0.001f`. Notice that `float` might introduce more precision error than `double`.

### True Lambertian Reflection

:::info
:information_source: **Prerequisite**
* Lambertian reflectance Wiki
  https://en.wikipedia.org/wiki/Lambertian_reflectance
* Lambertian reflection, also called ideal diffuse scattering (in Chinese)
  https://blog.csdn.net/u010922186/article/details/40680913
:::

Refer to the [original book](https://raytracing.github.io/books/RayTracingInOneWeekend.html#diffusematerials/truelambertianreflection) for details.

![image](https://hackmd.io/_uploads/Sk7orWufbx.png)

### Using Gamma Correction for Accurate Color Intensity

Currently, the spheres reflect only 50% of the incoming energy by default; we can set this to a different value.

:::success
:bulb: **Gamma, Linear, sRGB and the Unity Color Space: Do You Really Understand Them? (in Chinese)**
https://zhuanlan.zhihu.com/p/66558476
:::

We transform the image from linear space to gamma space. This implementation uses gamma 2, which means raising each channel to the exponent `1 / gamma`; since `gamma` is `2`, we can simply apply `std::sqrt`.

**50% reflector with gamma correction.**

![image](https://hackmd.io/_uploads/HksRr9uzWe.png)

## 10. Metal

### An Abstract Class for Materials

:::info
:information_source: **Implementation Consideration**
To organize all the possible materials into one related data structure, we can have a base class `material` for all the different materials to extend; a sketch follows.
:::
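A minimal sketch of that base class, following the book's interface (the `scatter()` signature matches the calls used later in this note):

```cpp
#pragma once

#include "hittable.h"

class material {
public:
    virtual ~material() = default;

    // Returns true if the incoming ray r_in is scattered; on success, fills in
    // the color attenuation and the scattered ray.
    virtual bool scatter(const ray &r_in, const hit_record &rec,
                         color &attenuation, ray &scattered) const {
        return false; // the default material absorbs everything
    }
};
```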
### A Data Structure to Describe Ray-Object Intersections

In `hittable.h`, we refer to the `material` type from `material.h`, and in the `scatter()` function the `hit_record` type is used. This means the references are circular. We can simply add a forward declaration `class material;` in front of `class hit_record`, without any implementation, to tell the compiler that the `material` class will be defined later. This works because `hit_record` uses `material` only through a pointer, and a pointer's size is fixed for whatever type it points to.

### Modeling Light Scatter and Reflectance

:::info
:information_source: **Prerequisite**
* albedo ([反照率](https://zh.wikipedia.org/zh-tw/%E5%8F%8D%E7%85%A7%E7%8E%87), the reflectance coefficient)
* attenuation ([衰減](https://zh.wikipedia.org/zh-tw/%E8%A1%B0%E5%87%8F))
:::

> Lambertian (diffuse) reflectance can either always scatter and attenuate light according to its reflectance `R`, or it can sometimes scatter (with probability `1−R`) with no attenuation (where a ray that isn't scattered is just absorbed into the material). It could also be a mixture of both those strategies. We will choose to always scatter.

### Mirrored Light Reflection

![image](https://hackmd.io/_uploads/Bk4Ga6hzbx.png)

Here `n`, the surface normal, is a unit vector. To get the vector `b`, we scale `n` by the length of `v`'s projection onto `n`, which is given by the dot product of `v` and `n`. The reflected direction can be computed by the following function.

```cpp
inline vec3 reflect(const vec3 &v, const vec3 &n) {
    return v - (2 * dot(v, n) * n);
}
```

The implementation basically moves the code from `camera.h` into the `material`-derived classes and uses the `rec.mat` pointer to the material to create the needed scattered `ray`.

```cpp
if (world.hit(r, interval(0.001f, infinity), rec)) { // recursion stop point
    ray scattered;
    color attenuation;
    if (rec.mat->scatter(r, rec, attenuation, scattered)) {
        return attenuation * ray_color(scattered, (depth - 1), world);
    }
    return color(0.0f, 0.0f, 0.0f);
}
```

![image](https://hackmd.io/_uploads/rk6bQRhGWl.png)

### Fuzzy Reflection

![image](https://hackmd.io/_uploads/Sy1FGBAGWx.png)

> Also note that in order for the fuzz sphere to make sense, it needs to be **consistently scaled compared to the reflection vector**, which can vary in length arbitrarily. **To address this, we need to normalize the reflected ray**.

> The bigger the fuzz sphere, the fuzzier the reflections will be. This suggests adding a fuzziness parameter that is just the radius of the sphere (so zero is no perturbation). **The catch is that for big spheres or grazing rays, we may scatter below the surface. We can just have the surface absorb those**.

![image](https://hackmd.io/_uploads/H1kgSBCfZl.png)

## 11. Dielectrics

Water, glass, and diamond are dielectrics; when a light ray hits them, it splits into a reflected ray and a refracted ray.

:::success
> We'll handle that by randomly choosing between reflection and refraction, only generating one scattered ray per interaction.
:::

:::info
:information_source: **Refractive Index (折射率)**
Describes how much light bends when entering a material from a vacuum.

> When a transparent material is embedded in a different transparent material, you can describe the refraction with a relative refraction index: **the refractive index of the object's material divided by the refractive index of the surrounding material.** For example, if you want to render a glass ball under water, then the glass ball would have an effective refractive index of 1.125. This is given by the refractive index of glass (1.5) divided by the refractive index of water (1.333).

Ref: https://raytracing.github.io/books/RayTracingInOneWeekend.html#dielectrics
:::

### Snell's Law

:::info
:information_source: **Snell's Law**
https://zh.wikipedia.org/zh-tw/%E6%96%AF%E6%B6%85%E5%B0%94%E5%AE%9A%E5%BE%8B
:::

![image](https://hackmd.io/_uploads/r16BzBz7We.png =x250)
![image](https://hackmd.io/_uploads/ByhmEBzmbl.png =x150)
![image](https://hackmd.io/_uploads/ryljBrzQWe.png =x100)

We assume that `a` and `b` are unit vectors.

![image](https://hackmd.io/_uploads/ryzFYHfQ-e.png =x50)

```cpp
vec3 unit_direction = unit_vector(r_in.direction());
vec3 refracted = refract(unit_direction, rec.normal, ri);
```

Notice that in the `refract` function, `r_in.direction()` and `rec.normal` should both be unit vectors, as `a` and `b` are in the formula above. A sketch of `refract` itself follows.
![image](https://hackmd.io/_uploads/ByNTonz7-g.png)

### Total Internal Reflection

According to Snell's law, if the ray travels in the medium with the higher refractive index and hits the surface at a steep enough angle, the refracted angle would require **sin(θ) > 1**, which is impossible, so the ray cannot refract and is reflected instead.

![image](https://hackmd.io/_uploads/ByD6SxSQWl.png)
Reference: https://www.allaboutcircuits.com/video-tutorials/characteristics-of-sinusoidal-signals/

![image](https://hackmd.io/_uploads/H1j7_gSmZl.png =x150)
![image](https://hackmd.io/_uploads/ByHF6eSm-e.png =x50)

```cpp
bool scatter(const ray &r_in, const hit_record &rec, color &attenuation, ray &scattered) const override {
    attenuation = color(1.0f, 1.0f, 1.0f);
    float ri = rec.front_face ? (1.0f / refraction_index) : refraction_index;

    vec3 unit_direction = unit_vector(r_in.direction());

    float cos_theta = std::fmin(dot(-unit_direction, rec.normal), 1.0f);
    float sin_theta = std::sqrt(1.0f - (cos_theta * cos_theta));

    vec3 direction;
    if (ri * sin_theta > 1.0f) {
        direction = reflect(unit_direction, rec.normal);
    } else {
        direction = refract(unit_direction, rec.normal, ri);
    }

    scattered = ray(rec.p, direction);
    return true;
}
```

:::success
> Notice that the attenuation is always 1 because a glass surface absorbs nothing.
:::

:::info
:information_source: **Refraction Index**
Reference:
* Total internal reflection (全反射) Wiki
  https://zh.wikipedia.org/zh-tw/%E5%85%A8%E5%8F%8D%E5%B0%84

> Well, it turns out that given a sphere of material with an index of refraction greater than air, there's no incident angle that will yield total internal reflection — neither at the ray-sphere entrance point nor at the ray exit. This is due to the geometry of spheres, as a grazing incoming ray will always be bent to a smaller angle, and then bent back to the original angle on exit.
:::

**Implementation Consideration**

> We'll **model a world filled with water** (index of refraction approximately 1.33), and change the sphere material to air (index of refraction 1.00) — an **air bubble**! To do this, change the left sphere material's index of refraction to `index of refraction of air / index of refraction of water`.

![image](https://hackmd.io/_uploads/ByONocrQbg.png)

### Modeling a Hollow Glass Sphere

> The `refraction_index` parameter to the dielectric material can be interpreted as the ratio of the refractive index of the object (air) divided by the refractive index of the enclosing medium (glass).

![image](https://hackmd.io/_uploads/rklTk4Imbx.png)
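A sketch of the scene setup that produces the hollow glass sphere above, following the book's listing (assuming the `float` variant and a `sphere` constructor that takes a material):

```cpp
auto material_left   = make_shared<dielectric>(1.50f);         // outer glass shell
auto material_bubble = make_shared<dielectric>(1.00f / 1.50f); // air enclosed in glass

// Two concentric spheres: the smaller, inner surface models the air pocket.
world.add(make_shared<sphere>(point3(-1.0f, 0.0f, -1.0f), 0.5f, material_left));
world.add(make_shared<sphere>(point3(-1.0f, 0.0f, -1.0f), 0.4f, material_bubble));
```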
## 12. Positionable Camera

:::success
:bulb: **Prerequisite**
* FoV (視野) Wiki
  https://zh.wikipedia.org/wiki/%E8%A6%96%E9%87%8E
> This is the visual angle from edge to edge of the rendered image.
:::

### Camera Viewing Geometry

![image](https://hackmd.io/_uploads/H1KIR_LQbl.png)

```cpp
float focal_length = 1.0f;
float theta = degrees_to_radian(vfov);
float h = std::tan(theta / 2);
float viewport_height = 2 * h * focal_length;
```

![image](https://hackmd.io/_uploads/SyXnlYL7-g.png)

### Positioning and Orienting the Camera

![image](https://hackmd.io/_uploads/SkR9aKIXWe.png)

The camera frame has its own basis vectors `u`, `v`, `w`, but to orient it we also need a view-up vector `vup` that defines **up** for the world. In the code, the world's **up** can simply be `(0, 1, 0)`, which gives us two known vectors: `vup` and `w` (the opposite of the lookfrom-to-lookat vector). From these two vectors we can calculate and deduce `u` and `v`.

```cpp
// calculate the u, v, w unit basis vectors for the camera coordinate frame
w = unit_vector(lookfrom - lookat); // opposite of the vector from lookfrom to lookat
u = unit_vector(cross(vup, w));
v = cross(w, u);
```

![image](https://hackmd.io/_uploads/ByYAdqLQ-x.png)

## 13. Defocus Blur

:::info
:information_source: **Prerequisite**
* Depth of field (景深) Wiki
  https://zh.wikipedia.org/zh-tw/%E6%99%AF%E6%B7%B1
:::

:::success
https://raytracing.github.io/books/RayTracingInOneWeekend.html#defocusblur
:::

### A Thin Lens Approximation

:::success
> * The focus plane is orthogonal to the camera view direction.
> * The focus distance is the distance between the camera center and the focus plane.
> * The viewport lies on the focus plane, centered on the camera view direction vector.
> * The grid of pixel locations lies inside the viewport (located in the 3D world).
> * Random image sample locations are chosen from the region around the current pixel location.
> * The camera fires rays from random points on the lens through the current image sample location.
:::

### Generating Sample Rays

![image](https://hackmd.io/_uploads/SyhOZ3FXWg.png)

A defocus disk is used, and random points on it are chosen as the `ray_origin` to create the defocus effect. Additionally, the former `focal_length` is now `focus_dist`.

```cpp
inline vec3 random_in_unit_disk() {
    while (true) {
        vec3 p = vec3(random_float(-1.0f, 1.0f), random_float(-1.0f, 1.0f), 0);
        if (p.length_squared() < 1.0f) {
            return p;
        }
    }
}
```

```cpp
/**
 * @brief Returns a random point in the camera defocus disk.
 */
point3 defocus_disk_sample() const {
    vec3 p = random_in_unit_disk();
    return camera_center + (p[0] * defocus_disk_u) + (p[1] * defocus_disk_v);
}
```

![image](https://hackmd.io/_uploads/Byh6XehXWe.png)

## 14. A Final Render

![圖片1](https://hackmd.io/_uploads/SJJzISAmWx.png)

```
Done.

real    1197m23.513s
user    1197m18.844s
```

# Part I: Parallelize with OpenCL

## TODOs

* Review the code, consider what can be parallelized, and add comments
* Replace the recursive `ray_color()` with a loop implementation (see the sketch below)
* Migrate the implementation into multiple `.cpp` files
* Merge functions
* Check how the structure maps onto OpenCL
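For the `ray_color()` item above, a hypothetical sketch of what the loop rewrite could look like, accumulating the attenuation along the path instead of recursing (recursion is generally unavailable in OpenCL kernels, which is why the loop form matters; the names and structure here are my own assumptions, not the book's code):

```cpp
color ray_color_loop(ray r, int depth, const hittable &world) {
    color throughput(1.0f, 1.0f, 1.0f); // accumulated attenuation along the path

    while (depth-- > 0) {
        hit_record rec;
        if (!world.hit(r, interval(0.001f, infinity), rec)) {
            // Ray escaped: return the background gradient weighted by the path throughput.
            vec3 unit_direction = unit_vector(r.direction());
            float a = 0.5f * (unit_direction.y() + 1.0f);
            color background = (1.0f - a) * color(1.0f, 1.0f, 1.0f) + a * color(0.5f, 0.7f, 1.0f);
            return throughput * background;
        }

        ray scattered;
        color attenuation;
        if (!rec.mat->scatter(r, rec, attenuation, scattered)) {
            return color(0.0f, 0.0f, 0.0f); // absorbed
        }

        throughput = throughput * attenuation; // element-wise color product
        r = scattered;
    }
    return color(0.0f, 0.0f, 0.0f); // bounce budget exhausted
}
```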