Tuesday, November 26, 2019

cst325 w5


This week, we worked on shaders, texturing, and transparency. We used texture mapping and filtering, applied images as textures to objects, learned the methods and limitations of alpha blending, and applied alpha blending in WebGL.

Texturing is a process that takes a surface and modifies it at each location using an image, function, or other data source. It makes 3D scenes look more realistic without adding geometry.

In rendering, many buffers are used. A buffer is a region of memory that stores per-pixel data; examples include the color buffer and the depth buffer.

The depth test, using the Z-buffer, compares the current fragment's depth against the corresponding value in the depth buffer; if it is less (closer to the camera), the fragment is kept. Other per-fragment tests are the alpha, stencil, and scissor tests. The alpha test discards fragments whose opacity is not above a threshold. The stencil test compares a fragment against a value stored in the stencil buffer. The scissor test checks whether a pixel lies inside a user-defined rectangle, keeping it only if it is inside.
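The tests above can be sketched in plain JavaScript. This is a simplified model with hypothetical function names, not how a real GPU implements them (there they are fixed-function hardware stages):

```javascript
// Depth test: keep the fragment only if it is closer than the stored value.
function depthTest(depthBuffer, index, fragmentDepth) {
  if (fragmentDepth < depthBuffer[index]) {
    depthBuffer[index] = fragmentDepth; // fragment wins, record its depth
    return true;
  }
  return false; // something closer was already drawn at this pixel
}

// Alpha test: discard fragments whose opacity is not above a threshold.
function alphaTest(alpha, threshold) {
  return alpha > threshold;
}

// Scissor test: keep the pixel only if it lies in a user-defined rectangle.
function scissorTest(x, y, rect) {
  return x >= rect.x && x < rect.x + rect.w &&
         y >= rect.y && y < rect.y + rect.h;
}
```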

Alpha blending uses an equation to determine how the new (source) color combines with the color already in the buffer (the destination), and how much each contributes.
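The most common blend equation is the "over" operator, out = src × srcAlpha + dst × (1 − srcAlpha), which corresponds to WebGL's `gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)`. A minimal sketch:

```javascript
// "Over" blending: mix the source color into the destination by srcAlpha.
// src and dst are RGB arrays with components in [0, 1].
function blendOver(src, srcAlpha, dst) {
  return src.map((s, i) => s * srcAlpha + dst[i] * (1 - srcAlpha));
}
```

For example, blending 50%-opaque red over black gives half-intensity red, and a fully opaque source simply overwrites the destination.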

Wednesday, November 20, 2019

cst325 w4

This week, we learned about raytracing and its similarities to and differences from rasterization. We also learned about the CPU's and GPU's roles in rasterization. After that, we learned about applying matrix transforms to geometry, and then how to manipulate render state and issue commands for 3D scenes in WebGL.

Polygon meshes are collections of vertices, edges, and faces that make up an object. Triangles sharing edges are combined to form the object, and each vertex carries attributes such as position, color, and texture coordinates. These meshes are used in the rasterization pipeline because triangles are easy to compute with and work well with hardware like CPUs and GPUs (heterogeneous computing).
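The shared-edge idea can be sketched with the two flat arrays a mesh typically uses: a position array and an index array, so adjacent triangles reuse the same vertex data. This mirrors WebGL's vertex and element buffers, though the helper below is just illustrative:

```javascript
// Four vertices of a unit quad, stored as a flat [x, y, z, x, y, z, ...] array.
const positions = [
  0, 0, 0,   // vertex 0
  1, 0, 0,   // vertex 1
  1, 1, 0,   // vertex 2
  0, 1, 0,   // vertex 3
];

// Two triangles forming the quad; the edge between vertices 0 and 2 is shared,
// so those vertices are stored once and referenced twice.
const indices = [0, 1, 2,  0, 2, 3];

// Look up the position of the i-th vertex reference in the index array.
function vertexOf(i) {
  return positions.slice(indices[i] * 3, indices[i] * 3 + 3);
}
```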

The graphics pipeline is the model that describes the steps the system takes to turn a 3D scene into a 2D image: Application, Geometry, Rasterization, and then to Screen. In raytracing, the outer loop is over pixels and the inner loop is over objects; in rasterization, the nesting is the opposite.
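The difference in loop nesting can be sketched directly; the `intersects` and `covers` tests here are hypothetical placeholders for the real geometric math:

```javascript
// Raytracing: for each pixel, test every object.
function raytrace(pixels, objects, intersects) {
  for (const pixel of pixels)        // outer loop: pixels
    for (const obj of objects)       // inner loop: objects
      if (intersects(pixel, obj)) pixel.hit = obj;
}

// Rasterization: for each object, test every pixel it might cover.
function rasterize(pixels, objects, covers) {
  for (const obj of objects)         // outer loop: objects
    for (const pixel of pixels)      // inner loop: pixels
      if (covers(obj, pixel)) pixel.hit = obj;
}
```

Both produce an image; the nesting order is what changes, and it drives very different performance characteristics and hardware designs.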

On the CPU, the application creates or loads geometry, then tells the API what, when, and how to draw: it loads the triangle mesh data and issues a command to render. On the GPU, geometric transformations are applied to the vertices and primitives are assembled from them. Vertices are transformed from their locally defined space to screen space, then assembled into triangles whose interior values are interpolated from the vertices. In rasterization, the triangles are converted to pixels and then shaded, creating the final image. At each pixel, the fragment closest to the camera is kept, determined by its Z-value.
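The last step of that transform chain, from normalized device coordinates to screen space, can be sketched as a small viewport mapping. This version assumes a top-left screen origin (as in a 2D canvas; WebGL's own viewport puts the origin at the bottom-left):

```javascript
// Map NDC coordinates (x, y each in [-1, 1]) to pixel coordinates for a
// viewport of the given width and height, with the origin at the top-left.
function ndcToScreen(x, y, width, height) {
  return [
    (x * 0.5 + 0.5) * width,
    (1 - (y * 0.5 + 0.5)) * height, // flip y so +1 in NDC is the top row
  ];
}
```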

Tuesday, November 12, 2019

cst 325 w3


This week, we learned how vectors and matrices are connected, how matrices are used to manipulate space, and how to solve problems with matrix operations. We also created and combined matrix transformations.

Unfortunately, I did not have very much time to work on the lab because I went out of town and only had my laptop, which is horrible. In the future, I think I'll only go out of town on holidays, and everyone else is going to have to deal with the fact that I have school, which is a priority; doing construction work all weekend on a condo I have no equity in is not.

A matrix describes the relationship between two coordinate spaces. It is a rectangular grid of numbers arranged into rows and columns, with its dimensions given as rows first, then columns. A matrix with one row is called a row vector, and a matrix with one column is a column vector. The transpose of a matrix swaps its rows and columns; a diagonal matrix is equal to its own transpose. A scalar is a regular number, and scalar multiplication multiplies every element of the matrix by it. When you multiply two matrices A and B, the number of columns in A must equal the number of rows in B; otherwise the product is undefined.
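The multiplication rule can be sketched with matrices stored as arrays of row arrays; the dimension check from above becomes an explicit guard:

```javascript
// Multiply A (rows x inner) by B (inner x cols). The columns of A must
// equal the rows of B, or the product is undefined.
function matMul(A, B) {
  const rows = A.length, inner = A[0].length, cols = B[0].length;
  if (inner !== B.length) throw new Error("dimensions do not match");
  return Array.from({ length: rows }, (_, i) =>
    Array.from({ length: cols }, (_, j) => {
      let sum = 0;
      for (let k = 0; k < inner; k++) sum += A[i][k] * B[k][j];
      return sum;
    })
  );
}
```

For example, a 2×2 matrix times a 2×1 column vector yields a 2×1 result, which is exactly how a transform matrix is applied to a point.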

Rotation turns points about a point (usually the origin) in 2D, or about an axis in 3D, by a given angle. Scale makes objects larger or smaller by a factor of k, and it can be uniform or nonuniform. Uniform scale dilates about the origin and preserves angles and proportions: lengths change by a factor of k, areas by k squared, and volumes by k cubed. Nonuniform scale uses different scale factors along different axes; scaling along one axis leaves the perpendicular axis fixed. When |k| < 1, the object shrinks; when k = 0 along one axis, space collapses onto the perpendicular axis, which is an orthographic projection; when k is negative, the object is also reflected. The basis vectors are affected independently by the scale factors, so one can make an object tall in y and narrow in x.

Reflection flips the object about a line (2D) or a plane (3D). Shear is a transformation that skews the coordinate space, stretching it nonuniformly; angles are not preserved, but areas and volumes are.
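The scaling behavior above can be sketched in 2D: each basis direction gets its own factor, so nonuniform factors distort proportions, and a negative factor also reflects:

```javascript
// Scale a 2D point about the origin, with independent factors per axis.
function scale2D(kx, ky, [x, y]) {
  return [kx * x, ky * y];
}
```

With kx = ky the scale is uniform (proportions preserved, areas change by k squared); with kx ≠ ky it is nonuniform; with a negative factor the point is reflected across the other axis.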

Tuesday, November 5, 2019

CST 325 W2

This week, we learned about raytracing. We wrote the code for a basic raytracer and implemented diffuse shading using the Lambert term. We then created shadows from light occlusion and used vector operations throughout for image generation.

Raytracing uses a camera, an image plane, objects, view rays, and shadow rays. We trace a ray from the camera through each pixel; the ray will hit the sphere, the plane, both, or neither. When it hits both, only the closest intersection is used. If an object lies between the intersection point and the light, the light is occluded and the object casts a shadow. Lambert's law states that the intensity of the reflected light depends on the surface's orientation relative to the light: the greater the angle between the surface normal and the light direction, the smaller its cosine and the less intense the light. If the cosine of the angle is 1, the surface faces the light directly and is fully lit; if it is 0 or negative, the surface receives no light; between 0 and 1, it is partially lit in proportion to the cosine. When the view ray hits the sphere, the sphere can cast a shadow on the plane. Along the shadow ray, if d1 is the distance from the surface point to an intersected object and d2 is the distance from the surface point to the light, then when d1 < d2 there is a ray intersection that occludes the light, producing a shadow; when d1 > d2, the intersection lies beyond the light, so there is no shadow. In other words, if the point can see the light, its shading is based on the angle; if it cannot, it is set to black.
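The Lambert term itself is just a clamped dot product. A minimal sketch, assuming the normal and light direction are already unit vectors:

```javascript
// Dot product of two 3D vectors.
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Lambert diffuse term: max(0, n . l), where n is the unit surface normal
// and l is the unit direction from the surface point toward the light.
function lambert(normal, toLight) {
  return Math.max(0, dot(normal, toLight));
}
```

The clamp to 0 is what makes surfaces facing away from the light unlit rather than negatively lit.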

cst 499 week 8

This week, we finished writing the paper in order to do the best job possible even if it was a little bit late. Now that everything is done,...