{{FULL_COURSE}} Homework 3 - Rasterizer and Camera


0 Goal

To create a rasterizer for drawing scenes composed of 3D polygons. You will learn how to construct a perspective projection camera to view these 3D polygons.

1 Supplied Code

We will provide a snippet of code to supplement your scene file loader from the previous homework. Aside from this, you will be working with the code you wrote for the rasterizer last week.
Click here to download the code.

2 Help Log (5 points)

Maintain a log of all help you receive and resources you use. Record the date and time, the names of everyone you work with or get help from, and every URL you use, except as noted in the collaboration policy. Also briefly log your question, bug, or the topic you were looking up or discussing. Ideally, you should also record the answer to your question or the solution to your bug. This will help you learn and will serve as a useful reference for future assignments and exams.

3 Conceptual Questions (15 points)

Before you begin the programming portion of this homework assignment, read and answer the following conceptual questions. Write your answers in your readme.txt file along with the rest of your project documentation.

4 Code Requirements (80 points)

You will be adding more code to your implementation of RenderScene(). Note that the features listed below do not have to be completed in the order in which they are written.

4.1 Perspective Camera (25 points)

Create a Camera class that you will use to generate view matrices and perspective projection matrices. Your camera should contain the following member variables which will be assigned the following values by the camera's default constructor:

Additionally, your camera should implement the following functions:
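
To give you a sense of how these pieces fit together, here is a minimal sketch of such a class. It is only illustrative: the member names and default values are placeholders for the ones listed above, it assumes the GLM math library and uses glm::perspective for the projection (your implementation may need to build these matrices by hand from the lecture formulas), and it follows the OpenGL-style convention that the camera looks down its local negative Z axis.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    class Camera {
    public:
        glm::vec3 eye;                  // camera position in world space
        glm::vec3 forward, right, up;   // the camera's local axes in world space
        float fovy, aspect, nearClip, farClip;

        Camera()
            : eye(0.f, 0.f, 10.f),
              forward(0.f, 0.f, -1.f), right(1.f, 0.f, 0.f), up(0.f, 1.f, 0.f),
              fovy(45.f), aspect(1.f), nearClip(0.01f), farClip(100.f) {}

        // World-to-camera transform: translate the eye to the origin, then rotate the world
        // into the camera's frame (the rotation's rows are the camera's axes).
        glm::mat4 getViewMatrix() const {
            glm::mat4 orient(1.f);
            orient[0] = glm::vec4(right.x, up.x, -forward.x, 0.f);
            orient[1] = glm::vec4(right.y, up.y, -forward.y, 0.f);
            orient[2] = glm::vec4(right.z, up.z, -forward.z, 0.f);
            glm::mat4 trans = glm::translate(glm::mat4(1.f), -eye);
            return orient * trans;
        }

        // Camera-to-clip transform; the lecture slides give the matrix itself.
        glm::mat4 getProjMatrix() const {
            return glm::perspective(glm::radians(fovy), aspect, nearClip, farClip);
        }

        // Example movement functions to be called from keyPressEvent (section 4.2).
        void translateAlongForward(float amt) { eye += amt * forward; }
        void translateAlongRight(float amt)   { eye += amt * right; }
        void rotateAboutUp(float degrees) {
            glm::mat4 rot = glm::rotate(glm::mat4(1.f), glm::radians(degrees), up);
            forward = glm::vec3(rot * glm::vec4(forward, 0.f));
            right   = glm::vec3(rot * glm::vec4(right,   0.f));
        }
    };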

4.2 Interactive camera (10 points)

In MainWindow's keyPressEvent function, add switch cases that call the movement functions you implemented for the Camera class. For example, when the user presses the W key, the camera could move along its local Z axis. The particulars of your control scheme are up to you, but please document them in your readme file. Note that your rasterizer may take a few moments to display the scene from the camera's new vantage point, especially if you are rendering a polygon mesh with many faces.

This keyPressEvent function will be called by Qt's QMainWindow class (from which our MainWindow inherits) whenever the user presses a key on their keyboard. If some of your keypresses seem not to be working, try clicking on the grey background of the main window (i.e. NOT the rendered image) to give your main window keyboard focus.
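
Below is a minimal sketch of what such a keyPressEvent might look like. The key bindings, the mCamera member, and the Camera movement functions it calls are placeholders for whatever you implement; only QKeyEvent and the Qt::Key_* constants come from Qt.

    #include <QKeyEvent>

    void MainWindow::keyPressEvent(QKeyEvent *e)
    {
        switch (e->key()) {
        case Qt::Key_W: mCamera.translateAlongForward( 1.f); break;  // move forward
        case Qt::Key_S: mCamera.translateAlongForward(-1.f); break;  // move backward
        case Qt::Key_D: mCamera.translateAlongRight( 1.f);   break;  // strafe right
        case Qt::Key_A: mCamera.translateAlongRight(-1.f);   break;  // strafe left
        default: QMainWindow::keyPressEvent(e); return;               // let Qt handle other keys
        }
        // Re-rasterize the scene from the camera's new vantage point and refresh
        // the displayed image here, however your project triggers a redraw.
    }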

4.3 Texture mapping (15 points)

Each of the JSON files provided with this assignment specifies an image file to be loaded as the texture for the OBJ file noted in the JSON. Using the UV coordinates stored in each Vertex of the polygon mesh loaded from the OBJ file, map the texture colors to the surface of the polygon mesh using barycentric interpolation.

If you wish to test and implement other features before this step, we recommend assigning your geometry a default surface color, such as grey or bubblegum pink, to use in place of its texture colors.
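
As a sketch of the texture lookup, the function below assumes the texture image has been loaded into a QImage and that V = 0 corresponds to the bottom of the image; adjust as needed if your UVs use a different convention.

    #include <QColor>
    #include <QImage>
    #include <algorithm>
    #include <glm/glm.hpp>

    glm::vec3 sampleTexture(const QImage &tex, const glm::vec2 &uv)
    {
        // Map UVs in [0,1] to pixel coordinates, flipping V because row 0 of a QImage is its top row.
        int x = std::min(std::max(int(uv.x * (tex.width() - 1)), 0), tex.width() - 1);
        int y = std::min(std::max(int((1.f - uv.y) * (tex.height() - 1)), 0), tex.height() - 1);
        QColor c = tex.pixelColor(x, y);
        return glm::vec3(float(c.redF()), float(c.greenF()), float(c.blueF()));
    }

With barycentric weights b0, b1, and b2 for a fragment, the UV you pass in would be b0 * uv0 + b1 * uv1 + b2 * uv2 (replaced by the perspective-correct version once you reach section 4.4).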

4.4 Perspective-correct interpolation (20 points)

In order to correctly compute the Z coordinate and UV coordinates of your pixel fragments, you must implement two modified methods of barycentric interpolation. Based on the methods described in the lecture slides, interpolate each fragment's Z with correct perspective distortion, then interpolate each fragment's UVs with correct perspective distortion.
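
One way to express these two interpolations is sketched below, assuming z0, z1, and z2 are the camera-space depths of the triangle's vertices and b0, b1, and b2 are the fragment's screen-space barycentric weights (which sum to 1).

    #include <glm/glm.hpp>

    // Interpolate 1/z linearly in screen space, then invert to recover the fragment's depth.
    float perspectiveCorrectZ(float b0, float b1, float b2,
                              float z0, float z1, float z2)
    {
        return 1.f / (b0 / z0 + b1 / z1 + b2 / z2);
    }

    // Attributes divided by depth interpolate linearly in screen space;
    // multiplying by the fragment's depth undoes that division.
    glm::vec2 perspectiveCorrectUV(float b0, float b1, float b2,
                                   float z0, float z1, float z2,
                                   const glm::vec2 &uv0, const glm::vec2 &uv1, const glm::vec2 &uv2)
    {
        float z = perspectiveCorrectZ(b0, b1, b2, z0, z1, z2);
        return z * (b0 * uv0 / z0 + b1 * uv1 / z1 + b2 * uv2 / z2);
    }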

4.5 Z Buffering (5 points)

Rather than using the Painter's Algorithm as you did in the previous assignment, resolve visibility by comparing Z values on a per-fragment basis rather than a per-triangle basis. We recommend you create a WxH 2D array of floating-point numbers in which to store the depth of the fragment that is currently being used to color a given pixel.
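
A sketch of the per-fragment depth test is shown below; width, height, fragZ, fragColor, and setPixelColor are placeholders for whatever your rasterizer already uses.

    #include <limits>
    #include <vector>

    // One depth value per pixel, initialized to "infinitely far away".
    std::vector<float> zBuffer(width * height, std::numeric_limits<float>::max());

    // ... inside your fragment loop, for pixel (x, y) with interpolated depth fragZ:
    int idx = y * width + x;
    if (fragZ < zBuffer[idx]) {
        zBuffer[idx] = fragZ;           // this fragment is the closest one seen so far
        setPixelColor(x, y, fragColor); // so it gets to color the pixel
    }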

4.6 Lambertian Reflection (5 points)

Using the camera's look vector as the light direction in your scene, use the Lambertian reflection model to attenuate the brightness of each point on the surface of a mesh. We also recommend you add some amount of ambient light to your scene so that faces that are "completely dark" are still slightly lit and therefore visible.
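
A sketch of this shading computation follows; the ambient amount is an illustrative value, and the function assumes the light travels along the camera's look vector as described above.

    #include <algorithm>
    #include <glm/glm.hpp>

    glm::vec3 lambert(const glm::vec3 &baseColor, const glm::vec3 &normal, const glm::vec3 &lookDir)
    {
        const float ambient = 0.2f;  // illustrative ambient term
        // The direction *toward* the light is the negated look vector.
        float diffuse = std::max(0.f, glm::dot(glm::normalize(normal), glm::normalize(-lookDir)));
        return baseColor * std::min(1.f, ambient + diffuse);
    }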

5 Extra Credit (Maximum 30 points)

If you implement any extra credit features, please note them in your readme.txt file. If applicable, also explain how to activate or use your additional feature(s).

5.1 Normal Mapping (25 points)

Using the normal map images provided with this assignment, implement normal displacement on a per-fragment level. You'll have to add your own custom code to the JSON files we provide as well as to MainWindow::on_actionLoad_Scene_triggered() and the Polygon class. Since this portion of the extra credit requires adding code to several files, it is worth more points than it otherwise would be, so plan on it taking a little extra time to implement.
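
As a rough sketch of just the per-fragment math (leaving out the JSON and loading changes), the function below assumes you have a tangent and bitangent for the surface and have already sampled the normal map at the fragment's UV.

    #include <glm/glm.hpp>

    glm::vec3 perturbedNormal(const glm::vec3 &mapColor,   // normal-map texel, components in [0, 1]
                              const glm::vec3 &tangent,
                              const glm::vec3 &bitangent,
                              const glm::vec3 &normal)
    {
        // Remap the texel from [0, 1] to [-1, 1] to get a tangent-space normal.
        glm::vec3 tsNormal = mapColor * 2.f - glm::vec3(1.f);
        // Transform it out of tangent space using the TBN basis, then shade with the result.
        return glm::normalize(tsNormal.x * glm::normalize(tangent) +
                              tsNormal.y * glm::normalize(bitangent) +
                              tsNormal.z * glm::normalize(normal));
    }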

5.2 Blinn-Phong Reflection Model (10 points)

In addition to Lambertian reflection, implement the specular highlights of the Blinn-Phong reflection model as part of your surface lighting.
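
A sketch of the specular term is below; the exponent is an illustrative value, and toLight and toCamera are the directions from the surface point toward the light and the camera, respectively. Add this term, scaled by a specular color of your choice, on top of the Lambertian term from section 4.6.

    #include <algorithm>
    #include <cmath>
    #include <glm/glm.hpp>

    float blinnPhongSpecular(const glm::vec3 &normal, const glm::vec3 &toLight, const glm::vec3 &toCamera)
    {
        const float exponent = 32.f;  // illustrative shininess
        // The halfway vector between the light direction and the view direction.
        glm::vec3 h = glm::normalize(glm::normalize(toLight) + glm::normalize(toCamera));
        return std::pow(std::max(0.f, glm::dot(glm::normalize(normal), h)), exponent);
    }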

5.3 Parallel Processing (35 points)

Using the QThread class, enable your program to process multiple pixel rows at once per triangle.
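
A rough sketch of one way to do this is below. The Rasterizer class and its renderRows(rowStart, rowEnd) helper are placeholders for however your code is organized; the key point is that each thread writes to a disjoint band of rows so the workers never touch the same pixel.

    #include <QThread>

    class RowWorker : public QThread {
    public:
        RowWorker(Rasterizer *r, int rowStart, int rowEnd)
            : mRasterizer(r), mRowStart(rowStart), mRowEnd(rowEnd) {}
    protected:
        void run() override {
            // Each worker rasterizes its own band of rows [mRowStart, mRowEnd).
            mRasterizer->renderRows(mRowStart, mRowEnd);
        }
    private:
        Rasterizer *mRasterizer;
        int mRowStart, mRowEnd;
    };

    // Usage: start one worker per band, then wait for all of them before displaying the image.
    // for (int i = 0; i < numThreads; ++i) { workers.push_back(new RowWorker(...)); workers.back()->start(); }
    // for (RowWorker *w : workers) { w->wait(); delete w; }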

6 Submission

Make sure your homework compiles in the Moore labs. We will deduct points from any homework that does not compile, and possibly give a 0 grade if the compile errors are numerous. After you have tested your program, zip the topmost folder of your project and submit the zip to the class web site.