CS 184/284A: Computer Graphics and Imaging, Spring 2024

Homework 3: Pathtracer

Dylan Rosario

Overview

Over the course of this assignment, I implemented a path tracing algorithm to render realistic images. The steps required to accomplish this include ray generation and scene intersection, a BVH structure designed to accelerate render times, and direct and global illumination methods. Each step is crucial to producing a final image that appears realistic.

Part 1: Ray Generation and Scene Intersection

The first task concerns ray generation, and the implementation is relatively straightforward. Given normalized (x, y) image coordinates as input, those coordinates must be mapped accurately onto the camera's sensor plane. After mapping, a ray can be generated from the camera origin through that point on the sensor plane.
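A minimal sketch of this mapping, assuming the starter code's Vector3D and Ray types, a camera with position pos, camera-to-world rotation c2w, clip distances nClip/fClip, and fields of view hFov/vFov in radians (all names hypothetical, not necessarily the exact starter API):

```cpp
// Sketch of camera ray generation. The sensor plane sits at z = -1 in
// camera space, with corners at (+-tan(hFov/2), +-tan(vFov/2), -1).
Ray Camera::generate_ray(double x, double y) const {
  // Map normalized image coordinates in [0,1]^2 onto the sensor plane.
  double sx = (2.0 * x - 1.0) * tan(0.5 * hFov);
  double sy = (2.0 * y - 1.0) * tan(0.5 * vFov);

  // Rotate the direction into world space and normalize it.
  Vector3D d = c2w * Vector3D(sx, sy, -1.0);
  d.normalize();

  Ray r(pos, d);
  r.min_t = nClip;   // valid intersections lie between the clip planes
  r.max_t = fClip;
  return r;
}
```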
To implement triangle intersection, I used the Möller-Trumbore algorithm introduced in lecture. It uses the three vertices of the triangle together with the ray to calculate a parameter t along with barycentric coordinates. The barycentric coordinates allow a simple test of whether the hit point lies within the triangle, and the parameter t defines the point of intersection along the ray. Sphere intersection follows the same concept with a t parameter; however, a ray can intersect a sphere at up to two points, so the smallest acceptable t must be selected.
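A sketch of the Möller-Trumbore computation, assuming the starter code's Vector3D/Ray types with free dot and cross functions (signature hypothetical):

```cpp
// Möller-Trumbore ray-triangle intersection. Returns true on a valid hit
// and fills t plus the barycentric coordinates b1, b2 (with b0 = 1 - b1 - b2).
bool moller_trumbore(const Ray& r, const Vector3D& p0, const Vector3D& p1,
                     const Vector3D& p2, double& t, double& b1, double& b2) {
  Vector3D e1 = p1 - p0, e2 = p2 - p0, s = r.o - p0;
  Vector3D s1 = cross(r.d, e2), s2 = cross(s, e1);

  double denom = dot(s1, e1);
  if (denom == 0.0) return false;        // ray is parallel to the triangle

  t  = dot(s2, e2) / denom;
  b1 = dot(s1, s)  / denom;
  b2 = dot(s2, r.d) / denom;

  // The hit is valid only if t lies within the ray's range and the
  // barycentric coordinates place the point inside the triangle.
  return t >= r.min_t && t <= r.max_t &&
         b1 >= 0 && b2 >= 0 && b1 + b2 <= 1;
}
```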
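For spheres, the two candidate values of t come from solving the quadratic |o + t*d - c|^2 = R^2. A sketch, with hypothetical member names (o for the center, r2 for the squared radius):

```cpp
// Ray-sphere intersection: fills t1 <= t2 with both roots of the quadratic.
bool Sphere::test(const Ray& r, double& t1, double& t2) const {
  Vector3D oc = r.o - o;
  double a = dot(r.d, r.d);
  double b = 2.0 * dot(oc, r.d);
  double c = dot(oc, oc) - r2;
  double disc = b * b - 4.0 * a * c;
  if (disc < 0) return false;            // ray misses the sphere entirely

  double sq = sqrt(disc);
  t1 = (-b - sq) / (2.0 * a);            // nearer root first
  t2 = (-b + sq) / (2.0 * a);
  return true;
}
```

The caller then keeps the smallest root that falls inside the ray's [min_t, max_t] range, which is the "smallest acceptable t" rule described above.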
Rendering times (seconds): Two Spheres: 0.0636; Three Gems: 0.4376; Coil: 14.2982.

[Figures: renders of the two spheres, three gems, and coil scenes — images/image3.png, images/image6.png, images/image1.png]

Part 2: Bounding Volume Hierarchy

The bounding volume hierarchy (BVH) aims to drastically reduce the time required to render images by organizing the given primitives into a binary tree. The first step is to create the bounding box for the primitives, which defines the spatial region where they lie. To increase efficiency, I chose the best axis to partition over (x, y, or z) by finding the dimension over which the primitives are spread the most. A heuristic then determines how elements are divided into the left and right nodes; I chose the midpoint of the chosen axis as the splitting point. With the extent and minimum functions, I have the range and the minimum point of an axis, so the midpoint is simply the minimum plus half the range. Every primitive on the left side of the midpoint is placed into the left node, and all others go to the right node. Running this algorithm recursively on the child nodes and stopping when the number of primitives falls below a given limit yields a BVH that greatly speeds up rendering.
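A sketch of that recursive build, assuming the starter code's BBox/BVHNode types and iterator-based node ranges (names hypothetical; requires <algorithm>):

```cpp
// Recursive BVH construction: bound the primitives, split at the midpoint
// of the longest axis, and recurse until a node is small enough to be a leaf.
BVHNode* BVHAccel::construct_bvh(std::vector<Primitive*>::iterator start,
                                 std::vector<Primitive*>::iterator end,
                                 size_t max_leaf_size) {
  BBox bbox;
  for (auto p = start; p != end; ++p)
    bbox.expand((*p)->get_bbox());

  BVHNode* node = new BVHNode(bbox);
  if (end - start <= (ptrdiff_t)max_leaf_size) {   // leaf: stop recursing
    node->start = start;
    node->end = end;
    return node;
  }

  // Pick the axis with the greatest extent; midpoint = minimum + range / 2.
  int axis = 0;
  if (bbox.extent.y > bbox.extent[axis]) axis = 1;
  if (bbox.extent.z > bbox.extent[axis]) axis = 2;
  double mid = bbox.min[axis] + 0.5 * bbox.extent[axis];

  // Primitives left of the midpoint go to the left child, the rest right.
  auto split = std::partition(start, end, [&](Primitive* p) {
    return p->get_bbox().centroid()[axis] < mid;
  });
  // Guard against a degenerate split that leaves one side empty.
  if (split == start || split == end) split = start + (end - start) / 2;

  node->l = construct_bvh(start, split, max_leaf_size);
  node->r = construct_bvh(split, end, max_leaf_size);
  return node;
}
```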
Rendering times (seconds): Cow: 19.3411 (no BVH), 15.3183 (BVH); Teapot: 8.0354 (no BVH), 5.1314 (BVH); Max Planck: 210.8908 (BVH); Peter: 173.3340 (BVH).

[Figures: renders of the cow, teapot, Max Planck, and Peter scenes — images/image5.png, images/image4.png, images/image2.png, images/image7.png]

Part 3: Direct Illumination

The image seen below is the output of CBbunny after my attempt at implementing uniform hemisphere sampling with direct lighting only. While not correct, a faint outline of the bunny is visible, indicating I was on the right track. The bug is most likely in how the light bounces are calculated, but unfortunately I could not pinpoint the exact issue after hours of debugging.

The two main functions here are uniform hemisphere sampling and light importance sampling. Uniform hemisphere sampling uses Monte Carlo integration to estimate the light arriving over the hemisphere at the hit point, then computes the outgoing radiance from that incoming light with the reflection equation. The Monte Carlo estimator uniformly samples the incoming illumination around the point of interest, hit_p, and divides the accumulated contribution by the number of samples to form the estimate.

Light importance sampling does not consider bounces and is therefore simpler. I only consider the directions between each light source and the hit_p in question. If there exists an unoccluded path to the light, sampling determines the incoming light and the outgoing radiance is computed with the reflection equation. (Sketches of both estimators follow the figure below.)

[Figure: buggy CBbunny render with direct lighting — images/image8.png]
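First, uniform hemisphere sampling. This sketch assumes the starter code's conventions (all hypothetical): hemisphereSampler->get_sample() returns a direction in a local frame whose z-axis is the surface normal, make_coord_space builds that frame, and Vector3D multiplies elementwise for radiance:

```cpp
// Monte Carlo estimate of direct lighting via uniform hemisphere sampling.
Vector3D PathTracer::estimate_direct_hemisphere(const Ray& r,
                                                const Intersection& isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);        // local frame around the normal
  Matrix3x3 w2o = o2w.T();

  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);
  Vector3D L_out(0, 0, 0);

  int N = num_samples;                   // e.g. scene->lights.size() * ns_area_light
  for (int i = 0; i < N; i++) {
    Vector3D w_in = hemisphereSampler->get_sample();  // local; cos(theta) = w_in.z
    Ray shadow(hit_p, o2w * w_in);
    shadow.min_t = EPS_F;                // offset to avoid self-intersection

    Intersection light_isect;
    if (bvh->intersect(shadow, &light_isect)) {
      // Reflection equation term: emission * BSDF * cosine, divided by the
      // uniform hemisphere pdf of 1 / (2*pi).
      Vector3D L_in = light_isect.bsdf->get_emission();
      L_out += L_in * isect.bsdf->f(w_out, w_in) * w_in.z * (2.0 * PI);
    }
  }
  return L_out / (double)N;              // divide by the number of samples
}
```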
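Second, light importance sampling, assuming each SceneLight provides a sample_L(p, &wi, &distToLight, &pdf) that returns emitted radiance (again, hypothetical names patterned on the starter code):

```cpp
// Direct lighting by sampling directions toward the lights; a contribution
// counts only when the shadow ray reaches the light unoccluded.
Vector3D PathTracer::estimate_direct_importance(const Ray& r,
                                                const Intersection& isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();

  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);
  Vector3D L_out(0, 0, 0);

  for (SceneLight* light : scene->lights) {
    int ns = light->is_delta_light() ? 1 : ns_area_light;
    Vector3D L_acc(0, 0, 0);
    for (int i = 0; i < ns; i++) {
      Vector3D wi;
      double dist, pdf;
      Vector3D L_in = light->sample_L(hit_p, &wi, &dist, &pdf);
      Vector3D w_in = w2o * wi;
      if (w_in.z < 0) continue;          // light lies behind the surface

      // Shadow ray toward the light; any hit before it means occlusion.
      Ray shadow(hit_p, wi);
      shadow.min_t = EPS_F;
      shadow.max_t = dist - EPS_F;
      Intersection tmp;
      if (!bvh->intersect(shadow, &tmp))
        L_acc += L_in * isect.bsdf->f(w_out, w_in) * w_in.z / pdf;
    }
    L_out += L_acc / (double)ns;
  }
  return L_out;
}
```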
Part 4: Global Illumination

Due to the bug in how light is calculated in the previous section, I could not generate images for global illumination; however, I will explain how I planned to implement it. Each ray of light bounces up to N times off other objects. To estimate the global illumination, I would form the single-bounce function and have it recursively call itself until the maximum ray depth is reached. Within the recursion, the previous outgoing ray is redefined as the new incoming ray for the next intersection. Otherwise, the implementation is fairly similar to the direct illumination functions.

Next, global illumination with Russian roulette reduces the computation required to render images. Normally, recursion stops at a fixed constant, the maximum ray depth (also known as the number of bounces). With Russian roulette, each ray instead stops being considered at a random depth less than the maximum ray depth. This may produce a slightly different image each time, but overall it does not greatly affect how the image looks. Adding this is relatively simple: a random draw determines whether a path terminates early. (A sketch of this recursion appears after Part 5.)

Part 5: Adaptive Sampling

Similar to the previous section, I will walk through how I would have coded the adaptive sampling portion of this project. Adaptive sampling aims to improve on areas where plain Monte Carlo path tracing is weak. With the previous method, a very high number of samples is taken for every pixel, yet some regions of an image are easy to render while others are more difficult, depending on how quickly the rays through a given pixel converge. Each sample that passes through a pixel carries an RGB value that defines its color, or illuminance. By summing these values and keeping track of the number of samples taken per pixel, an adaptively sampled version of an image can be created in which the color at each pixel is a function of the weighted sum of RGB values and the number of samples in that area. The end result is a similar-looking image resembling an infrared (heat-map) version of the images produced earlier in this project.
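Returning to Part 4, here is a minimal sketch of the recursive bounce with Russian roulette, under the same assumed starter-code types as before, plus a coin_flip(p) helper and a BSDF sample_f that returns the BSDF value while filling in the sampled direction and its pdf (all hypothetical). One detail the paragraph above omits: to keep the estimate unbiased, surviving paths are conventionally divided by the continuation probability.

```cpp
// At-least-one-bounce radiance with Russian roulette termination.
// one_bounce_radiance() is the direct-lighting estimate from Part 3.
// Ray depth counts up from 0 here (the starter code may count down).
Vector3D PathTracer::at_least_one_bounce_radiance(const Ray& r,
                                                  const Intersection& isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();
  Vector3D hit_p = r.o + r.d * isect.t;

  Vector3D L_out = one_bounce_radiance(r, isect);

  // Russian roulette: continue with probability cpdf; dividing surviving
  // contributions by cpdf keeps the estimator unbiased.
  double cpdf = 0.65;
  if (r.depth >= max_ray_depth || !coin_flip(cpdf))
    return L_out;

  Vector3D w_in;
  double pdf;
  Vector3D f = isect.bsdf->sample_f(w2o * (-r.d), &w_in, &pdf);

  // The outgoing direction at this hit becomes the next ray's direction.
  Ray next(hit_p, o2w * w_in);
  next.min_t = EPS_F;                    // offset to avoid self-intersection
  next.depth = r.depth + 1;

  Intersection next_isect;
  if (bvh->intersect(next, &next_isect))
    L_out += at_least_one_bounce_radiance(next, next_isect)
             * f * w_in.z / pdf / cpdf;
  return L_out;
}
```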
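And for Part 5, a sketch of one common convergence test (the confidence-interval criterion, which I would swap in for the weighted-sum bookkeeping described above): track each sample's illuminance and its square, and stop sampling a pixel once the 95% confidence interval around the mean illuminance is tight enough. Buffer and parameter names (ns_aa, samplesPerBatch, maxTolerance, sampleCountBuffer) follow the starter code hypothetically.

```cpp
// Adaptive per-pixel sampling: stop early once
// I = 1.96 * sigma / sqrt(n) <= maxTolerance * mean.
void PathTracer::raytrace_pixel(size_t x, size_t y) {
  Vector3D total(0, 0, 0);
  double s1 = 0, s2 = 0;                 // running sum and sum of squares
  int n = 0;                             // samples taken so far

  for (int i = 0; i < ns_aa; i++, n++) {
    if (n > 1 && n % samplesPerBatch == 0) {
      double mean = s1 / n;
      double var  = (s2 - s1 * s1 / n) / (n - 1);
      if (1.96 * sqrt(var / n) <= maxTolerance * mean)
        break;                           // pixel has converged; stop sampling
    }
    Vector2D sample = gridSampler->get_sample();   // jitter within the pixel
    Ray r = camera->generate_ray((x + sample.x) / sampleBuffer.w,
                                 (y + sample.y) / sampleBuffer.h);
    Vector3D L = est_radiance_global_illumination(r);
    total += L;
    double illum = L.illum();
    s1 += illum;
    s2 += illum * illum;
  }

  sampleBuffer.update_pixel(total / (double)n, x, y);
  sampleCountBuffer[x + y * sampleBuffer.w] = n;   // drives the heat-map image
}
```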