
CS 184/284A: Computer Graphics and Imaging, Spring 2024

Homework 3: Pathtracer

Dylan Rosario

Overview

        Over the course of this assignment, I implemented a path tracing algorithm to render physically realistic images. The stages required to accomplish this include ray generation and scene intersection, a bounding volume hierarchy (BVH) to accelerate render times, and direct and global illumination methods. Each stage is crucial to producing a final image that appears realistic.

Part 1: Ray Generation and Scene Intersection

        The first task concerns ray generation, and the implementation is relatively straightforward. Given normalized (x, y) image coordinates, those coordinates must be accurately mapped onto the sensor plane. After mapping, a ray can be generated from the camera origin through that point on the sensor plane.
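As a sketch of the mapping described above, assuming the sensor plane sits at z = -1 in camera space and hFov/vFov are given in degrees; the Vec3 type here is a minimal stand-in for the project's Vector3D, not the actual starter-code API:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Minimal stand-in for the project's Vector3D type.
struct Vec3 {
    double x, y, z;
    Vec3 unit() const {
        double n = std::sqrt(x * x + y * y + z * z);
        return {x / n, y / n, z / n};
    }
};

// Map normalized image coordinates (x, y) in [0,1]^2 onto a sensor
// plane at z = -1 in camera space; hFov and vFov are in degrees.
Vec3 camera_ray_direction(double x, double y, double hFov, double vFov) {
    double tx = std::tan(0.5 * hFov * PI / 180.0);
    double ty = std::tan(0.5 * vFov * PI / 180.0);
    // (0, 0) maps to the bottom-left corner, (1, 1) to the top-right.
    Vec3 sensor = {(2.0 * x - 1.0) * tx, (2.0 * y - 1.0) * ty, -1.0};
    // In the full implementation this direction is then rotated into
    // world space by the camera-to-world matrix, and the ray starts at
    // the camera position.
    return sensor.unit();
}
```

The center of the image, (0.5, 0.5), maps straight down the optical axis regardless of the field of view.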

        To implement triangle intersection, I used the Möller–Trumbore algorithm introduced in lecture. It uses the three vertices of the triangle and the ray to compute a parameter t along with barycentric coordinates. The barycentric coordinates give a simple test for whether the point lies inside the triangle, and t defines the point of intersection along the ray. Sphere intersection relies on the same t parameterization; however, a ray can intersect a sphere at up to two points, so the smallest valid t must be selected.
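The Möller–Trumbore computation can be sketched as follows; the vector type and function names are illustrative stand-ins rather than the project's actual signatures:

```cpp
#include <cassert>
#include <cmath>

struct V3 {
    double x, y, z;
    V3 operator-(V3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(V3 o) const { return x * o.x + y * o.y + z * o.z; }
    V3 cross(V3 o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
};

// Möller–Trumbore ray/triangle intersection.  Returns true and writes t
// when the ray o + t*d hits triangle (p0, p1, p2) with t in [t_min, t_max];
// b1 and b2 are the barycentric coordinates of p1 and p2.
bool moller_trumbore(V3 o, V3 d, V3 p0, V3 p1, V3 p2,
                     double t_min, double t_max, double &t) {
    V3 e1 = p1 - p0, e2 = p2 - p0, s = o - p0;
    V3 s1 = d.cross(e2), s2 = s.cross(e1);
    double det = s1.dot(e1);
    if (std::fabs(det) < 1e-12) return false;  // ray parallel to triangle
    double inv = 1.0 / det;
    double b1 = s1.dot(s) * inv;               // barycentric coordinate of p1
    double b2 = s2.dot(d) * inv;               // barycentric coordinate of p2
    if (b1 < 0 || b2 < 0 || b1 + b2 > 1) return false;  // outside triangle
    double t_hit = s2.dot(e2) * inv;
    if (t_hit < t_min || t_hit > t_max) return false;   // outside ray range
    t = t_hit;
    return true;
}
```

The barycentric test (b1 ≥ 0, b2 ≥ 0, b1 + b2 ≤ 1) and the range test on t are exactly the "lies within the triangle" and "acceptable t" checks described above.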

Rendering Times (seconds): Two Spheres: 0.0636, Three Gems: 0.4376, Coil: 14.2982

Part 2: Bounding Volume Hierarchy

        The bounding volume hierarchy (BVH) drastically reduces the time required to render images by organizing the given primitives into a binary tree. The first step is to compute the bounding box of the primitives, which defines the spatial region they occupy. To improve efficiency, I chose the partition axis (x, y, or z) by finding the dimension along which the primitives were most spread out. Next, a heuristic determines how elements are divided between the left and right children; I chose the midpoint of the chosen axis as the splitting point. With the extent and minimum functions, I have the range and the minimum point for an axis, so the midpoint is simply minimum + extent / 2. Every primitive on the left side of the midpoint is placed into the left node, and all others go into the right node. Running this algorithm recursively and stopping when the number of primitives in a node falls below a given limit, the BVH speeds up the rendering process considerably.
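A simplified sketch of this construction, using primitive centroids as stand-ins for full primitives and splitting along the widest axis at its midpoint; names and types here are illustrative, not the starter code's:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <vector>

using P3 = std::array<double, 3>;

struct BVHNode {
    P3 lo, hi;                       // axis-aligned bounding box
    BVHNode *left = nullptr, *right = nullptr;
    std::vector<P3> prims;           // non-empty only at leaves
};

// Recursively build a BVH over primitive centroids, splitting at the
// midpoint of the widest axis until a leaf holds <= max_leaf points.
// (A sketch: nodes are leaked rather than owned, for brevity.)
BVHNode *build(std::vector<P3> pts, size_t max_leaf) {
    BVHNode *node = new BVHNode;
    node->lo = node->hi = pts[0];
    for (const P3 &p : pts)
        for (int a = 0; a < 3; ++a) {
            node->lo[a] = std::min(node->lo[a], p[a]);
            node->hi[a] = std::max(node->hi[a], p[a]);
        }
    if (pts.size() <= max_leaf) { node->prims = pts; return node; }

    // Pick the axis with the largest extent and split at its midpoint:
    // minimum + extent / 2, as described above.
    int axis = 0;
    for (int a = 1; a < 3; ++a)
        if (node->hi[a] - node->lo[a] > node->hi[axis] - node->lo[axis])
            axis = a;
    double mid = node->lo[axis] + 0.5 * (node->hi[axis] - node->lo[axis]);

    std::vector<P3> l, r;
    for (const P3 &p : pts) (p[axis] < mid ? l : r).push_back(p);
    // Guard against a degenerate split putting everything on one side.
    if (l.empty() || r.empty()) { node->prims = pts; return node; }
    node->left = build(l, max_leaf);
    node->right = build(r, max_leaf);
    return node;
}
```

During traversal, a ray that misses a node's bounding box skips that entire subtree, which is where the speedup comes from.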

Rendering Times (seconds): Cow: 19.3411 (no BVH) & 15.3183 (BVH), Teapot: 8.0354 (no BVH) & 5.1314 (BVH), Max Planck: 210.8908 (BVH), Peter: 173.3340 (BVH)

Part 3: Direct Illumination

        The image seen below is the output for CBbunny after my attempt at implementing uniform hemisphere sampling with direct lighting only. While not correct, a faint outline of the bunny is visible, indicating I was on the right track. The bug likely lies in how the light bounces are accumulated, but unfortunately I could not pinpoint the exact issue after hours of debugging.

        The two main functions here compute uniform hemisphere sampling and light importance sampling. Uniform hemisphere sampling uses Monte Carlo integration to estimate the incoming light over the hemisphere above a point, then computes the outgoing radiance from that estimate with the reflection equation. The Monte Carlo estimator uniformly samples the incoming illumination around the point of interest, hit_p, and divides the accumulated contribution by the number of samples.
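A minimal sketch of the estimator, with a constant "sky" radiance standing in for the traced scene since there is nothing to intersect here; the Lambertian BRDF and the 1/(2π) pdf are the standard choices for this estimator:

```cpp
#include <cassert>
#include <cmath>
#include <random>

const double PI = 3.14159265358979323846;

// Monte Carlo estimate of outgoing radiance under uniform hemisphere
// sampling.  The incoming radiance Li is a stand-in constant "sky";
// the real tracer would trace a ray in the sampled direction instead.
double hemisphere_estimate(int n_samples, double albedo, double sky_radiance,
                           std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double pdf = 1.0 / (2.0 * PI);  // uniform over the hemisphere
    const double brdf = albedo / PI;      // Lambertian BRDF
    double sum = 0.0;
    for (int i = 0; i < n_samples; ++i) {
        // Uniform hemisphere sampling: cos(theta) is uniform on [0, 1].
        // The azimuth is irrelevant here because Li is constant.
        double cos_theta = u(rng);
        // Accumulate f * Li * cos(theta) / pdf, then average below.
        sum += brdf * sky_radiance * cos_theta / pdf;
    }
    return sum / n_samples;
}
```

For a constant sky of radiance L and albedo ρ, the estimate should converge to ρ·L, which makes a handy sanity check for the estimator's normalization.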

        Light importance sampling does not consider bounces and is thus simpler. To implement it, I would consider only the directions between each light source and the hit_p in question. If a direct, unoccluded path exists, the incoming light is sampled and an outgoing radiance computed with the reflection equation.
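A sketch of the per-light computation for a single point light; since there is no scene here, the shadow-ray result is passed in as a flag, and all names are illustrative:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

struct Vec { double x, y, z; };
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double len(Vec a) { return std::sqrt(dot(a, a)); }

// Direct lighting from one point light: build the direction from hit_p
// to the light, test visibility (here a precomputed shadow-ray flag),
// then weight the light's intensity by the BRDF and the cosine term.
double estimate_direct(Vec hit_p, Vec normal, Vec light_pos,
                       double light_intensity, double albedo,
                       bool occluded /* shadow-ray result */) {
    Vec wi = sub(light_pos, hit_p);
    double dist = len(wi);
    wi = {wi.x / dist, wi.y / dist, wi.z / dist};
    double cos_theta = dot(wi, normal);
    if (occluded || cos_theta <= 0) return 0.0;  // blocked or behind surface
    double brdf = albedo / PI;                   // Lambertian
    // A point light falls off with 1/r^2 and needs no pdf averaging;
    // an area light would instead sample points on its surface.
    return brdf * light_intensity * cos_theta / (dist * dist);
}
```

The key difference from hemisphere sampling is that every sample is aimed where light can actually come from, so the estimate is far less noisy at the same sample count.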

Part 4: Global Illumination

        Due to bugs in how light is calculated, presumably in the previous section, I could not generate images for the global illumination section. However, I will explain how I planned to implement it. Each ray of light bounces up to N times off other objects. To estimate the global illumination, I would write the single-bounce function and then have it call itself recursively until the max ray depth is reached. Within the recursion, the previous bounce's outgoing ray becomes the incoming ray at the next intersection. Otherwise, the implementation is fairly similar to the direct illumination functions.
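The recursion can be sketched with a toy model in which the per-bounce throughput collapses to a single albedo factor; the real tracer would weight each bounce by f · cos θ / pdf and trace an actual ray to the next intersection:

```cpp
#include <cassert>
#include <cmath>

// Toy model of the at-least-one-bounce recursion: each call adds the
// direct (one-bounce) lighting at the current hit point, then recurses
// along a sampled bounce direction until max_ray_depth is reached.
double bounce_radiance(double direct_light, double albedo, int depth,
                       int max_ray_depth) {
    double L = direct_light;  // one-bounce contribution at this depth
    if (depth + 1 < max_ray_depth) {
        // The outgoing ray of this bounce becomes the incoming ray at
        // the next intersection; its radiance is scaled by the surface
        // reflectance on the way back up the recursion.
        L += albedo * bounce_radiance(direct_light, albedo, depth + 1,
                                      max_ray_depth);
    }
    return L;
}
```

With albedo 0.5 and max depth 3, the result is 1 + 0.5 + 0.25 times the direct term, illustrating how each extra bounce adds a geometrically shrinking contribution.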

        Next, global illumination with Russian roulette is implemented in an effort to reduce the computation required to render images. Normally, the recursion stops at a fixed max ray depth, also known as the number of bounces. With Russian roulette, each ray instead stops being traced at a random depth no greater than the max ray depth, with the surviving paths reweighted so the estimate remains unbiased. This may produce a slightly different image on each run, but overall it has little effect on how the image looks. Adding this is relatively simple: a random draw at each bounce determines whether the path terminates early.
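A toy version of the Russian-roulette estimator, using an assumed continuation probability cont_p and a collapsed per-bounce throughput in place of real ray tracing:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Toy Russian-roulette estimator: at each bounce the path continues
// with probability cont_p; surviving contributions are divided by
// cont_p so the estimate stays unbiased.  "albedo" stands in for the
// f * cos(theta) / pdf throughput of the real tracer.
double rr_radiance(double direct, double albedo, double cont_p,
                   std::mt19937 &rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    double L = 0.0, throughput = 1.0;
    while (true) {
        L += throughput * direct;        // one-bounce term at this depth
        if (coin(rng) >= cont_p) break;  // terminate the path early
        throughput *= albedo / cont_p;   // reweight the survivors
    }
    return L;
}
```

Because of the 1/cont_p reweighting, the average over many paths still converges to the full geometric series direct/(1 - albedo), even though each individual path is finite.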

Part 5: Adaptive Sampling

        Similar to the previous section, I will walk through how I would have coded the adaptive sampling portion of this project. Adaptive sampling addresses a weakness of plain Monte Carlo path tracing: a fixed, very high number of samples is taken for every pixel, even though some sections of a real image are easy to render and others are difficult. By keeping a running sum of the sample values (illuminance) at each pixel along with the number of samples taken there, the renderer can estimate how converged each pixel is and stop sampling once the estimate is stable. Visualizing the resulting per-pixel sampling rates yields an image resembling an infrared version of the images produced earlier in this project.
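One common convergence test, which I believe is the one the assignment spec suggests though I did not implement it, tracks running sums of the illuminance and its square, and stops once a 95% confidence interval around the mean shrinks below a tolerance times the mean:

```cpp
#include <cassert>
#include <cmath>

// Running per-pixel statistics for adaptive sampling.  s1 and s2 are
// the sums of the sample illuminances and their squares; the pixel is
// declared converged once the 95% confidence interval around the mean
// is at most max_tolerance * mean.
struct PixelStats {
    double s1 = 0.0, s2 = 0.0;
    int n = 0;
    void add(double illum) { s1 += illum; s2 += illum * illum; ++n; }
    bool converged(double max_tolerance) const {
        if (n < 2) return false;
        double mean = s1 / n;
        double var = (s2 - s1 * s1 / n) / (n - 1);    // sample variance
        double interval = 1.96 * std::sqrt(var / n);  // 95% half-width
        return interval <= max_tolerance * mean;
    }
};
```

A pixel whose samples all agree converges almost immediately, while a noisy pixel (a soft shadow edge, say) keeps accumulating samples, which is exactly the sampling-rate heat map described above.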
