Wednesday, November 20, 2019

cst325 w4

This week, we learned about raytracing and its similarities to and differences from rasterization. We also learned about the CPU's and GPU's roles in rasterization. After that, we learned about applying matrix transforms to geometry, and then about how to manipulate render state and issue draw commands for 3D scenes in WebGL.
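
As a rough sketch of that last part (not code from class), setting render state and issuing a draw command in WebGL looks something like this; the canvas id and vertex count are placeholders, and shader/buffer setup is assumed to have happened already:

```typescript
// Minimal sketch: manipulate render state, then issue a draw command.
// Assumes a <canvas id="glCanvas"> exists and that a shader program and
// vertex buffer were already compiled, linked, and bound (not shown).
const canvas = document.getElementById("glCanvas") as HTMLCanvasElement;
const gl = canvas.getContext("webgl")!;

gl.enable(gl.DEPTH_TEST);                            // render state: depth testing on
gl.clearColor(0.1, 0.1, 0.1, 1.0);                   // render state: background color
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // command: clear the buffers

gl.drawArrays(gl.TRIANGLES, 0, 3);                   // command: draw one triangle
```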

Polygon meshes are collections of vertices, edges, and faces that make up an object. Triangles sharing edges are combined to form the object's surface. Each vertex stores attributes such as position, color, and texture coordinates. These meshes are used in the rasterization pipeline because triangles are easy to compute with and map well to hardware like CPUs and GPUs working together (heterogeneous computing).
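
To make those vertex attributes concrete, here is a small hand-written example of one triangle's mesh data in TypeScript; the layout (position, color, texture coordinate per vertex) follows the description above, and the specific numbers are made up:

```typescript
// One triangle as interleaved vertex data: each vertex carries a
// position (x, y, z), a color (r, g, b), and a texture coordinate (u, v).
const vertices = new Float32Array([
  //  x     y    z     r    g    b     u    v
   0.0,  0.5, 0.0,  1.0, 0.0, 0.0,  0.5, 1.0, // top
  -0.5, -0.5, 0.0,  0.0, 1.0, 0.0,  0.0, 0.0, // bottom-left
   0.5, -0.5, 0.0,  0.0, 0.0, 1.0,  1.0, 0.0, // bottom-right
]);

// Index lists are how triangles share edges/vertices instead of duplicating them.
const indices = new Uint16Array([0, 1, 2]);
```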

The graphics pipeline is the model that describes the steps the system takes to turn a 3D scene into a 2D image: Application, Geometry, Rasterization, and then Screen. In raytracing, the loop nesting goes from pixel to object (for each pixel, test every object), and in rasterization it is the opposite (for each object, find the pixels it covers), as sketched below.
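
A rough sketch of that nesting difference; the pixel and shape lists here are just placeholders for illustration:

```typescript
type Pixel = { x: number; y: number };
type Shape = { name: string };

const pixels: Pixel[] = [{ x: 0, y: 0 }, { x: 1, y: 0 }];
const shapes: Shape[] = [{ name: "sphere" }, { name: "triangle" }];

// Raytracing: outer loop over pixels, inner loop over objects.
for (const pixel of pixels) {
  for (const shape of shapes) {
    // cast this pixel's ray at the shape, keep the nearest intersection
  }
}

// Rasterization: outer loop over objects, inner loop over pixels.
for (const shape of shapes) {
  for (const pixel of pixels) {
    // test whether the pixel is covered by the shape's projection, then shade it
  }
}
```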

On the CPU, the application creates or loads geometry and tells the API what, when, and how to draw: it loads the triangle mesh data, then issues a command to render it. On the GPU, geometric transformations are applied to the vertices and primitives are built from them. Vertices are transformed from their locally defined space to screen space, then assembled into triangles whose interior values are interpolated from the vertices. In rasterization, the triangles are converted to pixels (fragments) and shaded, creating the final image. At each pixel, the fragment closest to the camera is kept by comparing Z-values (the depth test).
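
To show how that Z-value comparison works, here is a tiny depth-buffer sketch in TypeScript; the fragment list and the two-pixel buffers are invented for illustration:

```typescript
// Depth-buffer sketch: for each pixel, keep only the fragment with the
// smallest z (closest to the camera).
type Fragment = { pixel: number; z: number; color: string };

const fragments: Fragment[] = [
  { pixel: 0, z: 0.8, color: "red" },
  { pixel: 0, z: 0.3, color: "blue" },  // closer, so it wins pixel 0
  { pixel: 1, z: 0.5, color: "green" },
];

const depthBuffer = new Float32Array(2).fill(Infinity);
const colorBuffer: string[] = new Array(2).fill("background");

for (const f of fragments) {
  if (f.z < depthBuffer[f.pixel]) {  // depth test
    depthBuffer[f.pixel] = f.z;      // record the new closest depth
    colorBuffer[f.pixel] = f.color;  // shade the pixel
  }
}

console.log(colorBuffer); // ["blue", "green"]
```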
