OpenGL is by itself a large state machine: a collection of variables that define how OpenGL should currently operate. The state of OpenGL is commonly referred to as the OpenGL context.
When working in OpenGL we will come across several state-changing functions that change the context and several state-using functions that perform some operations based on the current state of OpenGL.
Double buffer:
The front buffer contains the final output image that is shown on the screen, while all the rendering commands draw to the back buffer. As soon as all the rendering commands are finished we swap the back buffer with the front buffer so the image is instantly displayed to the user, removing all the aforementioned artifacts.
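A minimal sketch of how this looks in a typical render loop, assuming GLFW is the windowing library:

while (!glfwWindowShouldClose(window))
{
    // render into the back buffer
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw calls ...

    // present the finished back buffer and poll input events
    glfwSwapBuffers(window);
    glfwPollEvents();
}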
GLEW provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform. In a nutshell, it acts as an interface between us and the OpenGL drivers available on the graphics card. So we don't have to rely on the operating system's support of OpenGL, since the graphics card has better support of the specification.
GLuint is simply a cross-platform substitute for unsigned int, just like GLint is one for int.
A vertex is a point in the geometry. It contains a position, but that's not the only thing about it; it can also contain the following attributes:
Data comes in the form of vertex attributes.
device coordinates - Device X and Y coordinates are mapped to the screen between -1 and 1.
A VBO is a memory buffer stored on the GPU. VBOs are “buffers” of video memory – just a bunch of bytes containing any kind of binary data you want. You can upload 3D points, colors, your music collection, poems to your loved ones – the VBO doesn’t care, because it just copies a chunk of memory without asking what the memory contains. VBOs have no idea what type of data they contain.
Types of buffers:
Usage pattern:
GL_STATIC_DRAW: The vertex data will be uploaded once and drawn many times (e.g. the world).
GL_DYNAMIC_DRAW: The vertex data will be created once, changed from time to time, but drawn many times more than that.
GL_STREAM_DRAW: The vertex data will be uploaded once and drawn once.
A VAO represents the vertex fetch stage of the OpenGL pipeline and is used to supply input to the vertex shader.
VAOs are the link between the VBOs and the shader variables. VAOs describe what type of data is contained within a VBO, and which shader variables the data should be sent to.
Since only calls after binding a VAO stick to it, make sure that you’ve created and bound the VAO at the start of your program. Any vertex buffers and element buffers bound before it will be ignored.
glVertexAttribPointer
Defines an array of generic vertex attribute data.
void glVertexAttribPointer( GLuint index,
GLint size,
GLenum type,
GLboolean normalized,
GLsizei stride,
const GLvoid * pointer);
Parameters:
index - Specifies the index of the generic vertex attribute to be modified, f.ex: position => 0, normals => 1, etc.
size - How many components are associated with that vertex attribute, f.ex: x, y => 2.
type - The data type of each component, f.ex: GL_FLOAT.
normalized - Normalizes the value to between 0.f and 1.f, f.ex: if we pass an int for the color attribute which would be between 0 and 255.
stride - Specifies the byte offset between consecutive generic vertex attributes.
pointer - Specifies the offset of the first component of the first generic vertex attribute.
glVertexAttribPointer also links the VAO with the VBO that is currently bound.
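A minimal sketch of how these pieces fit together (one VAO, one VBO, a single position attribute); the buffer names and the attribute layout are illustrative assumptions:

// One triangle: 3 vertices with x, y components
float vertices[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);                  // subsequent vertex state sticks to this VAO
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// attribute 0: 2 floats per vertex, tightly packed, starting at offset 0
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);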
The main purpose of a vertex shader is to transform points (x, y, and z coordinates) into different points. A vertex is just a point in a shape.
The vertex shader is a small program running on your graphics card that processes every one of these input vertices individually. This is where the perspective transformation takes place, which projects vertices with a 3D world position onto your 2D screen!
The inputs to the fragment shader are somewhat unlike inputs to other shader stages, in that OpenGL interpolates their values across the primitive that’s being rendered.
The pixels in the texture will be addressed using texture coordinates during drawing operations. These coordinates range from 0.0 to 1.0 where (0,0) is conventionally the bottom-left corner and (1,1) is the top-right corner of the texture image.
Using glUniform1i we can actually assign a location value to the texture sampler so we can set multiple textures at once in a fragment shader. This location of a texture is more commonly known as a texture unit.
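A brief sketch of binding two textures to different texture units; the sampler uniform names and texture handles are illustrative:

glActiveTexture(GL_TEXTURE0);                 // texture unit 0
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glActiveTexture(GL_TEXTURE1);                 // texture unit 1
glBindTexture(GL_TEXTURE_2D, specularTexture);

// tell each sampler uniform which texture unit to read from
glUniform1i(glGetUniformLocation(shaderProgram, "diffuseMap"),  0);
glUniform1i(glGetUniformLocation(shaderProgram, "specularMap"), 1);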
The first thing you’ll have to consider is how the texture should be sampled when a coordinate outside the range of 0 to 1 is given. OpenGL offers 4 ways of handling this:
GL_REPEAT: The integer part of the coordinate will be ignored and a repeating pattern is formed.
GL_MIRRORED_REPEAT: The texture will also be repeated, but it will be mirrored when the integer part of the coordinate is odd.
GL_CLAMP_TO_EDGE: The coordinate will simply be clamped between 0 and 1.
GL_CLAMP_TO_BORDER: The coordinates that fall outside the range will be given a specified border color.
Set the texture wrapping method to GL_CLAMP_TO_EDGE whenever you use alpha textures.
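Wrapping is configured per axis (S and T for 2D textures), for example:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);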
Since texture coordinates are resolution independent, they won’t always match a pixel exactly. This happens when a texture image is stretched beyond its original size or when it’s sized down.
GL_NEAREST: Returns the pixel that is closest to the coordinates.
GL_LINEAR: Returns the weighted average of the 4 pixels surrounding the given coordinates.
GL_NEAREST_MIPMAP_NEAREST, GL_LINEAR_MIPMAP_NEAREST, GL_NEAREST_MIPMAP_LINEAR, GL_LINEAR_MIPMAP_LINEAR: Sample from mipmaps instead.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Scaling down
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Scaling up
A collection of texture images where each subsequent texture is twice as small compared to the previous one. Only used for downsampling (minification).
GL_NEAREST_MIPMAP_NEAREST: takes the nearest mipmap to match the pixel size and uses nearest neighbor interpolation for texture sampling.
GL_LINEAR_MIPMAP_NEAREST: takes the nearest mipmap level and samples using linear interpolation.
GL_NEAREST_MIPMAP_LINEAR: linearly interpolates between the two mipmaps that most closely match the size of a pixel and samples via nearest neighbor interpolation.
GL_LINEAR_MIPMAP_LINEAR: linearly interpolates between the two closest mipmaps and samples the texture via linear interpolation.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Imagine if we had a large room with thousands of objects, each with an attached texture. There will be objects far away that have the same high resolution texture attached as the objects close to the viewer. Since the objects are far away and probably only produce a few fragments, OpenGL has difficulties retrieving the right color value for its fragment from the high resolution texture, since it has to pick a texture color for a fragment that spans a large part of the texture. This will produce visible artifacts on small objects, not to mention the waste of memory to use high resolution textures on small objects.
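After uploading the base texture image, the whole mipmap chain can be generated with a single call:

glGenerateMipmap(GL_TEXTURE_2D);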
To make blending work for multiple objects we have to draw the farthest object first and the closest object last.
One way of sorting the transparent objects is to retrieve the distance of an object from the viewer's perspective.
Example:
// A map automatically sorts its values based on its keys,
// so once we've added all positions with their distance as the
// key they're automatically sorted on their distance value
std::map<float, glm::vec3> sorted;
for (unsigned int i = 0; i < windows.size(); i++)
{
float distance = glm::length(camera.Position - windows[i]);
sorted[distance] = windows[i];
}
// we take each of the map's values in reverse order (from farthest to nearest) and
// then draw the corresponding windows in correct order
for(std::map<float,glm::vec3>::reverse_iterator it = sorted.rbegin(); it != sorted.rend(); ++it)
{
model = glm::mat4(1.0f);
model = glm::translate(model, it->second);
shader.setMat4("model", model);
glDrawArrays(GL_TRIANGLES, 0, 6);
}
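For reference, blending itself has to be enabled before drawing the transparent objects; a common setup is:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);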
Before a triangle is processed further, it may be optionally passed through a stage called culling, which determines whether the triangle faces toward or away from the viewer and can decide whether to actually go ahead and draw it based on the result of this computation. If the triangle faces toward the viewer, then it is considered to be front-facing; otherwise, it is said to be back-facing.
glEnable(GL_CULL_FACE);
The glCullFace function has three possible options:
GL_BACK: Culls only the back faces.
GL_FRONT: Culls only the front faces.
GL_FRONT_AND_BACK: Culls both the front and back faces.
Aside from the faces to cull we can also tell OpenGL we'd rather prefer clockwise faces as the front-faces instead of counter-clockwise faces via glFrontFace:
glFrontFace(GL_CCW);
There are three major classifications of extensions: vendor, EXT, and ARB.
Initials representing the specific vendor are usually part of the extension name, for example "AMD" for Advanced Micro Devices or "NV" for NVIDIA.
ARB extensions are an official part of OpenGL because they are approved by the OpenGL governing body, the Architecture Review Board (ARB).
OpenGL expects every vertex coordinate to be in the range of -1 to 1 at the end of the Vertex Shader run. Everything outside of this range is clipped.
The orthographic frustum directly maps all coordinates inside the frustum to normalized device coordinates since the w component of each vector is untouched; if the w component is equal to 1.0, perspective division doesn't change the coordinates.
glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.1f, 100.0f);
Each component of the vertex coordinate is divided by its w component, giving smaller vertex coordinates the further away a vertex is from the viewer.
out = (x/w, y/w, z/w)
View space
To define a camera we need its position in world space, the direction it's looking at, a vector pointing to the right and a vector pointing upwards from the camera.
Given a yaw angle (rotation on the XZ plane) and a pitch angle (rotation towards the Y axis), the camera's direction vector is:
direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
direction.y = sin(glm::radians(pitch));
direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
Simulate zooming by changing the FOV. When the field of view becomes smaller the scene's projected space gets smaller giving the illusion of zooming in.
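A sketch of the idea, assuming a glm projection matrix and a fov variable that is updated from scroll input:

// smaller fov => narrower frustum => apparent zoom-in
fov = glm::clamp(fov, 1.0f, 45.0f);
glm::mat4 projection = glm::perspective(glm::radians(fov), 800.0f / 600.0f, 0.1f, 100.0f);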
Cheap Global Illumination. We only need an ambient factor (ambientStrength) in order to get a small effect from it.
float ambientStrength = 0.1;
vec3 ambient = ambientStrength * lightColor;
vec3 color = ambient * objectColor;
To calculate it:
Also the normal vectors need to be transformed to world space by using the normal matrix: mat3(transpose(inverse(model))). This fixes the issue of applying non-uniform scaling to the model.
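A minimal GLSL sketch of the diffuse term, assuming Normal, FragPos, lightPos and lightColor are available in the fragment shader:

vec3 norm = normalize(Normal);
vec3 lightDir = normalize(lightPos - FragPos);
// the smaller the angle between normal and light direction, the brighter the fragment
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor;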
Specular lighting is based on the light's direction vector and the object's normal vectors, but this time it is also based on the view direction, i.e. from what direction the player is looking at the fragment.
To calculate it:
The closer the angle between them, the greater the impact of the specular light. The resulting effect is that we see a bit of a highlight when we're looking at the light's direction reflected via the object.
Getting the halfway vector is easy, we add the light's direction vector and view vector together and normalize the result:
vec3 lightDir = normalize(lightPos - FragPos);
vec3 viewDir = normalize(viewPos - FragPos);
vec3 halfwayDir = normalize(lightDir + viewDir);
float spec = pow(max(dot(normal, halfwayDir), 0.0), shininess);
vec3 specular = lightColor * spec;
Basically a texture, although in lighted scenes this is usually called a diffuse map (this is what 3D artists generally call them) since the texture image represents all of the object's diffuse colors.
When a light source is modeled to be infinitely far away it is called a directional light since all its light rays have the same direction; it is independent of the location of the light source.
Generally people prefer to specify a directional light as a global direction pointing from the light source. Therefore we have to negate the global light direction vector to switch its direction; it's now a direction vector pointing towards the light source.
vec3 lightDir = normalize(-light.direction);
A point light is a light source with a given position somewhere in a world that illuminates in all directions where the light rays fade out over distance.
Reducing the intensity of light over the distance a light ray travels is generally called attenuation.
Here d represents the distance from the fragment to the light source. To calculate the attenuation value we define 3 (configurable) terms: a constant term Kc, a linear term Kl and a quadratic term Kq:
attenuation = 1.0 / (Kc + Kl * d + Kq * d^2)
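In GLSL this could look like the following sketch, assuming a light struct with position, constant, linear and quadratic fields:

float dist = length(light.position - FragPos);
float attenuation = 1.0 / (light.constant + light.linear * dist +
                           light.quadratic * (dist * dist));
// scale the ambient, diffuse and specular contributions by this factor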
A spotlight in OpenGL is represented by a world-space position, a direction and a cutoff angle that specifies the radius of the spotlight. For each fragment we calculate if the fragment is between the spotlight's cutoff directions (thus in its cone) and if so, we lit the fragment accordingly.
LightDir: the vector pointing from the fragment to the light source.
SpotDir: the direction the spotlight is aiming at.
ϕ: the cutoff angle that specifies the spotlight's radius. Everything outside this angle is not lit by the spotlight.
θ: the angle between the LightDir vector and the SpotDir vector. The θ value should be smaller than the ϕ value to be inside the spotlight.
To create the effect of a smoothly-edged spotlight we want to simulate a spotlight having an inner and an outer cone.
Here ϵ (epsilon) is the cosine difference between the inner (ϕ) and the outer cone (γ): ϵ = ϕ - γ. The resulting value I = (θ - γ) / ϵ, clamped to [0, 1], is then the intensity of the spotlight at the current fragment.
float cosTheta = max(dot(lightRay, spotLightRay), 0.0);
float cosEpsilon = spotLight.innerCutoff - spotLight.outerCutoff; // reversed coz cosine
float attenuation = clamp((cosTheta - spotLight.outerCutoff) / cosEpsilon, 0., 1.);
The depth-buffer is a buffer that, just like the color buffer (that stores all the fragment colors: the visual output), stores information per fragment and (usually) has the same width and height as the color buffer. The depth buffer is automatically created by the windowing system and stores its depth values as 16, 24 or 32 bit floats. In most systems you'll see a depth buffer with a precision of 24 bits.
Depth testing is done in screen space after the fragment shader has run (and after the stencil test).
The screen space coordinates relate directly to the viewport defined by OpenGL's glViewport
function and can be accessed via GLSL's built-in gl_FragCoord
variable in the fragment shader.
gl_FragCoord.z is the value that is compared to the depth buffer's content.
glDepthFunc(GL_LESS);
The stencil buffer is first cleared with zeros and then an open rectangle of 1s is set in the stencil buffer. The fragments of the scene are then only rendered (the others are discarded) wherever the stencil value of that fragment contains a 1.
The routine for outlining your objects is as follows:
Set the stencil function to GL_ALWAYS before drawing the (to be outlined) objects, updating the stencil buffer with 1s wherever the objects' fragments are rendered.
By binding to the GL_FRAMEBUFFER target all the next read and write framebuffer operations will affect the currently bound framebuffer.
For a framebuffer to be complete the following requirements have to be satisfied:
When creating an attachment we have two options to take: textures or renderbuffer objects.
To draw the scene to a single texture we'll have to take the following steps:
For this texture, we're only allocating memory and not actually filling it. Filling the texture will happen as soon as we render to the framebuffer.
It is also possible to attach both a depth buffer and a stencil buffer as a single texture. Each 32 bit value of the texture then consists of 24 bits of depth information and 8 bits of stencil information.
Example:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, 800, 600, 0,
GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL
);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, texture, 0);
If you want to render your whole screen to a texture of a smaller or larger size you need to call glViewport
again (before rendering to your framebuffer) with the new dimensions of your texture, otherwise only a small part of the texture or screen would be drawn onto the texture.
Renderbuffer objects are generally write-only, thus you cannot read from them (like with texture access). It is possible to read from them via glReadPixels, though that returns a specified area of pixels from the currently bound framebuffer, not directly from the attachment itself.
Since renderbuffer objects are generally write-only they are often used as depth and stencil attachments, since most of the time we don't really need to read values from the depth and stencil buffers but still care about depth and stencil testing. We need the depth and stencil values for testing, but don't need to sample these values so a renderbuffer object suits this perfectly. When we're not sampling from these buffers, a renderbuffer object is generally preferred since it's more optimized.
If we imagine we have a cube shape that we attach such a cubemap to, the direction vector to sample the cubemap would be similar to the (interpolated) vertex position of the cube. This way we can sample the cubemap using the cube's actual position vectors as long as the cube is centered on the origin.
GL_TEXTURE_WRAP_R simply sets the wrapping method for the texture's R coordinate, which corresponds to the texture's third dimension (like z for positions).
Simply rendering it without depth testing is not a solution since the skybox will then overwrite all the other objects in the scene. We need to trick the depth buffer into believing that the skybox has the maximum depth value of 1.0
so that it fails the depth test wherever there's a different object in front of it.
In the coordinate systems tutorial we said that perspective division is performed after the vertex shader has run, dividing the gl_Position's xyz coordinates by its w component. We also know from the depth testing tutorial that the z component of the resulting division is equal to that vertex's depth value. Using this information we can set the z component of the output position equal to its w component, which will result in a z component that is always equal to 1.0, because when the perspective division is applied its z component translates to w / w = 1.0.
The resulting normalized device coordinates will then always have a z value equal to 1.0: the maximum depth value. The skybox will as a result only be rendered wherever there are no objects visible (only then it will pass the depth test, everything else is in front of the skybox).
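A sketch of the skybox vertex shader trick, assuming view and projection uniforms (with the translation removed from the view matrix):

#version 330 core
layout (location = 0) in vec3 aPos;
out vec3 TexCoords;
uniform mat4 projection;
uniform mat4 view;

void main()
{
    TexCoords = aPos;
    vec4 pos = projection * view * vec4(aPos, 1.0);
    // z = w, so after perspective division the depth becomes w / w = 1.0
    gl_Position = pos.xyww;
}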
We do have to change the depth function a little by setting it to GL_LEQUAL
instead of the default GL_LESS
. The depth buffer will be filled with values of 1.0
for the skybox, so we need to make sure the skybox passes the depth tests with values less than or equal to the depth buffer instead of less than.
Using a cubemap with an environment, we could give objects reflective or refractive properties.
We calculate a reflection vector R¯ around the object's normal vector N¯ based on the view direction vector I¯. We can calculate this reflection vector using GLSL's built-in reflect function. The resulting vector R¯ is then used as a direction vector to index/sample the cubemap, returning a color value of the environment. The resulting effect is that the object seems to reflect the skybox.
Refraction is the change in direction of light due to the change of the material the light flows through. Refraction is what we commonly see with water-like surfaces where the light doesn't enter straight through, but bends a little. It's like looking at your arm when it's halfway in the water.
Refraction is described by Snell's law that with environment maps looks a bit like this:
We have a view vector I¯, a normal vector N¯ and this time a resulting refraction vector R¯. As you can see, the direction of the view vector is slightly bent. This resulting bent vector R¯ is then used to sample from the cubemap.
| Material | Refractive Index |
| --- | --- |
| Air | 1.00 |
| Water | 1.33 |
| Ice | 1.309 |
| Glass | 1.52 |
| Diamond | 2.42 |
Refraction can easily be implemented by using GLSL's built-in refract
function that expects a normal vector, a view direction and a ratio between both materials' refractive indices.
We use these refractive indices to calculate the ratio between both materials the light passes through. In our case, the light/view ray goes from air to glass (if we assume the container is made of glass) so the ratio becomes 1.00 / 1.52 = 0.658.
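A fragment shader sketch of refraction, under the same assumptions as the reflection example above:

void main()
{
    float ratio = 1.00 / 1.52;                // air -> glass
    vec3 I = normalize(Position - cameraPos);
    vec3 R = refract(I, normalize(Normal), ratio);
    FragColor = vec4(texture(skybox, R).rgb, 1.0);
}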
The maximum amount of data allowed as a vertex attribute is equal to a vec4. Because a mat4 is basically 4 vec4s, we have to reserve 4 vertex attributes for this specific matrix.
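A sketch of reserving those 4 attribute slots for per-instance model matrices; the starting location 3 and the glm types are assumptions:

// instanceVBO holds one glm::mat4 per instance, already filled via glBufferData
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
for (unsigned int i = 0; i < 4; i++)
{
    glEnableVertexAttribArray(3 + i);
    glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (void*)(i * sizeof(glm::vec4)));
    glVertexAttribDivisor(3 + i, 1); // advance once per instance, not per vertex
}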
SSAA - Super Sample Anti-Aliasing
MSAA - Multisample Anti-Aliasing
What multisampling does is not use a single sampling point for determining coverage of the triangle, but use multiple sample points (guess where it got its name from). Instead of a single sample point at the center of each pixel we're going to place 4 subsamples in a general pattern and use those to determine pixel coverage. This does mean that the size of the color buffer is also increased by the number of subsamples we're using per pixel.
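With GLFW (an assumption about the windowing library), requesting and enabling a 4x multisampled default framebuffer can look like this:

glfwWindowHint(GLFW_SAMPLES, 4); // before creating the window
// ... create window and context ...
glEnable(GL_MULTISAMPLE);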
Monitors display brightness following a power relationship with an exponent of roughly 2.2, also known as the gamma of a monitor.
A gamma value of 2.2 is a default gamma value that roughly estimates the average gamma of most displays. The color space as a result of this gamma of 2.2 is called the sRGB color space.
There are two ways to apply gamma correction to your scenes. The first is to use OpenGL's built-in sRGB framebuffer support:
glEnable(GL_FRAMEBUFFER_SRGB);
or
Introduce a post-processing stage in your render loop and apply gamma correction on the post-processed quad as a final step which you'd only have to do once.
void main()
{
// do super fancy lighting
[...]
// apply gamma correction
float gamma = 2.2;
FragColor.rgb = pow(FragColor.rgb, vec3(1.0/gamma));
}
Textures are usually created in sRGB space. By specifying the GL_SRGB or GL_SRGB_ALPHA internal format, OpenGL converts the texture colors to linear space when sampling.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
Important side-note
Textures used for coloring objects like diffuse textures are almost always in sRGB space.
Textures used for retrieving lighting parameters like specular maps and normal maps are almost always in linear space so if you were to configure these as sRGB textures as well, the lighting will break down. Be careful in which textures you specify as sRGB.
In the real physical world, lighting attenuates closely inversely proportional to the squared distance from a light source.
float attenuation = 1.0 / (distance * distance);
However, when using this attenuation equation the attenuation effect was always way too strong, giving lights a small radius that didn't look physically right. Therefore the attenuation equation used in the lighting section is still the best option: attenuation = 1.0 / (Kc + Kl * d + Kq * d^2).
You probably remember from the depth testing tutorial that a value in the depth buffer corresponds to the depth of a fragment clamped to [0,1]
from the camera's point of view. We can sample the closest depth values as seen from the light's perspective. After all, the depth values show the first fragment visible from the light's perspective. We store all these depth values in a texture that we call a depth map or shadow map.
Right image: T is a matrix transformation that transforms P into the light's coordinate space.
Because the shadow map is limited by resolution, multiple fragments can sample the same value from the depth map when they're relatively far away from the light source. The image shows the floor where each tilted panel represents a single texel of the depth map. As you can see, several fragments sample the same depth sample.
Problem:
We can solve this issue with a small little hack called a shadow bias where we simply offset the depth of the surface (or the shadow map) by a small bias amount such that fragments are not incorrectly considered below the surface.
Solution:
Perspective projections are most often used with spotlights and point lights while orthographic projections are used for directional lights.
Percentage-closer filtering (PCF) is a term that hosts many different filtering functions that produce softer shadows, making them appear less blocky or hard.
The idea is to sample more than once from the depth map, each time with slightly different texture coordinates.
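A common 3x3 PCF sketch in the fragment shader, assuming a shadowMap sampler, the projected coordinates projCoords, the fragment's currentDepth and a bias value:

float shadow = 0.0;
vec2 texelSize = 1.0 / textureSize(shadowMap, 0);
for (int x = -1; x <= 1; ++x)
{
    for (int y = -1; y <= 1; ++y)
    {
        // sample the depth map at slightly offset texture coordinates
        float pcfDepth = texture(shadowMap, projCoords.xy + vec2(x, y) * texelSize).r;
        shadow += currentDepth - bias > pcfDepth ? 1.0 : 0.0;
    }
}
shadow /= 9.0; // average the 9 samples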
What every coder should know about gamma: a well written in-depth article by John Novak about gamma correction.
Shadow mapping consists of two passes: first we render the scene's depth from the light's point of view into the shadow map, then we render the scene as usual and use the shadow map to test whether each fragment is in shadow.
Normal vectors in a normal map are expressed in tangent space where normals always point roughly in the positive z direction.
A TBN matrix, where the letters stand for the tangent, bitangent and normal vectors, is constructed per surface so that we can properly align the tangent space's z direction to the surface's normal direction.
We need 3 vectors: the tangent, the bitangent and the normal.
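A minimal GLSL sketch of building the TBN matrix in the vertex shader, assuming aTangent, aBitangent and aNormal vertex attributes and a model matrix:

vec3 T = normalize(vec3(model * vec4(aTangent,   0.0)));
vec3 B = normalize(vec3(model * vec4(aBitangent, 0.0)));
vec3 N = normalize(vec3(model * vec4(aNormal,    0.0)));
mat3 TBN = mat3(T, B, N); // transforms tangent-space vectors to world space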
SEM uses special texture maps called “lit spheres” or “matcaps”. These are essentially spherical reflection maps, but with a more diffuse colour instead of a sharp image. The sphere map contains everything that is in front of the camera, which means that the texture contains the incoming light incident on the surface facing the camera. That’s why it doesn’t work as a perfect environment map: it doesn’t rotate with the view because it’s missing part of the information. All we can emulate is an object rotating by itself in a still light and camera setup.
The only way to execute the compute shader is via the OpenGL glDispatchCompute
or glDispatchComputeIndirect
command.
The number of invocations of the compute shader is completely user defined. It is not tied in any way to the number of vertices or fragments being rendered.
(Images: example 1D and 2D compute space layouts.)
The number of invocations of a compute shader is governed by the user-defined compute space. This space is divided into a number of work groups. Each work group is then broken down into a number of invocations. We think of this in terms of the global compute space (all shader invocations) and the local work group space (the invocations within a particular work group). The compute space can be defined as a one-, two-, or three-dimensional space.
The order of execution of the work groups and thereby the individual shader invocations is unspecified and the system can execute them in any order.
In general, for reasons of efficiency, it is best to only attempt communication within a work group.
There are limits to the total number of work groups and local shader invocations. These can be queried (via glGetInteger*
) using the GL_MAX_COMPUTE_WORK_GROUP_COUNT
, GL_MAX_COMPUTE_WORK_GROUP_SIZE
, and GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS
parameters.
glDispatchCompute( 4, 5, 1 );
4: work groups in the X dimension
5: work groups in the Y dimension
1: work group in the Z dimension (i.e. a two-dimensional compute space)
The number of local invocations within each work group is not specified on the OpenGL side. Instead, it is specified within the compute shader itself with a layout specifier.
layout (local_size_x = 3, local_size_y = 3) in;
gl_WorkGroupSize: The number of invocations per work group in each dimension, the same as what is defined in the layout specifier.
gl_NumWorkGroups: The total number of work groups in each dimension.
gl_WorkGroupID: The index of the current work group for this shader invocation.
gl_LocalInvocationID: The index of the current invocation within the current work group.
gl_GlobalInvocationID: The index of the current invocation within the global compute space.
Referring to the 1D space image (grey square):
gl_GlobalInvocationID = gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID
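A minimal compute shader sketch that uses these built-ins, assuming an rgba32f image bound to image unit 0:

#version 430
layout (local_size_x = 16, local_size_y = 16) in;
layout (rgba32f, binding = 0) uniform image2D imgOutput;

void main()
{
    // one invocation per pixel of the output image
    ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);
    vec2 uv = vec2(pixel) / vec2(imageSize(imgOutput));
    imageStore(imgOutput, pixel, vec4(uv, 0.0, 1.0));
}

For a width x height image and this local size, it would be dispatched with glDispatchCompute(width / 16, height / 16, 1).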
// Checkerboard test: returns true when p falls on a tile whose x and y tile indices have the same parity.
bool inTile(vec2 p, float tileSize) {
vec2 ptile = step(0.5, fract(0.5 * p / tileSize));
return ptile.x == ptile.y;
}
// Reflections flip a vector along a given axis.
// Specifically, the reflection of a point p along the axis n
vec3 reflectPoint(vec3 p, vec3 n) {
return p - 2.0 * dot(n, p) * n / dot(n, n);
}
Finally, GLSL comes with a collection of built-in functions for performing common mathematical operations. Here is an (incomplete) list of built-in functions operating on scalar datatypes, grouped by category:
Unit conversion: radians, degrees
Trigonometry: sin, cos, tan, asin, acos, atan
Calculus: exp, log, exp2, log2
Algebra: pow, sqrt, inversesqrt
Rounding: floor, ceil, fract, mod, step, smoothstep
Magnitude: abs, sign, min, max, clamp
Interpolation: mix
GLSL also supports component-wise comparison operations for vectors. These are implemented as routines that take a pair of vectors and return a bvec
whose entries correspond to the value of the predicate:
lessThan(a,b)
lessThanEqual(a,b)
greaterThan(a,b)
greaterThanEqual(a,b)
equal(a,b)
for example: a < b == lessThan(a, b)
Boolean operations
Boolean vectors also support the following special aggregate operations:
any(b): returns true if any component of b is true, false otherwise
all(b): returns true if all components of b are true, false otherwise
not(b): negates the logical value of the components of b
In addition to the standard arithmetic functions, vectors also support several special geometric operations:
length(p): returns the euclidean length of p
distance(a,b): returns the euclidean distance between a and b
dot(a,b): computes the vector dot product of a and b
cross(a,b): computes the cross product of two 3-vectors
normalize(a): rescales a to unit length
faceforward(n, I, nr): reorients a normal to point away from a surface
reflect(I, N): reflects the vector I along the axis N
refract(I, N, eta): applies a refractive transformation to I according to Snell's law
In WebGL this hyperplane is taken to be the solution set to w=1. To see how this works concretely, let us intersect a line generated by the vector [0.2, 0.3, 0, 0.1]
with this hyperplane. That is, find t so that the point t * [0.2, 0.3, 0, 0.1]
is in the w=1 hyperplane:
0.1 * t = 1
Solving for t
gives t=10
, so this line is identified with the 3D point [2, 3, 0]
.
More generally, in WebGL any vector [x,y,z,w]
, corresponds to the 3D point [x/w,y/w,z/w]
.
screenColumn = 0.5 * screenWidth * (gl_Position.x / gl_Position.w + 1.0)
screenRow = 0.5 * screenHeight * (1.0 - gl_Position.y / gl_Position.w)
A similar equation holds for the depth value of the vertex:
depth = gl_Position.z / gl_Position.w
The main motivation (and also the origin of the name "clip" coordinates) comes from the fact that this rule greatly simplifies the problem of testing if a given point is visible or not. Specifically, all of the drawable geometry is "clipped" against a viewable frustum which is defined by 6 inequalities:
-w < x < w
-w < y < w
-w < z < w
The model-view-projection factorization
Many 3D graphical applications make use of 4 different coordinate systems: model (object) coordinates, world coordinates, view (eye) coordinates and clip coordinates.
The relationship between these coordinate systems is usually specified using 3 different transformations: the model matrix (model to world), the view matrix (world to view) and the projection matrix (view to clip coordinates).