
CS184 - Introduction to Mobile Application Development (using Android)

Fall 2017


This home page and the CS184 Piazza forum will be used as the centers of communication for the class. Homework submission will occur through the CS184 GauchoSpace site. Make sure you are enrolled!

While the webpage provides you with up-to-date information about assignments and what is currently going on in class, homeworks will be submitted via GauchoSpace, and the Piazza forum serves as an open forum for questions, answers, suggestions, etc.


General Information


Course Description

This course provides an introduction to developing applications for the Android mobile ecosystem.

Over the past 20 years, the use of information technology has undergone a clear transition from stationary office and desktop computing to mobile computing. This development was accompanied by the emergence of networked and social computing. Sales of smartphones and tablet computers have by far outpaced the sales of conventional desktop PCs for years now. The way the young generation today obtains computer literacy has changed: apps and cloud computing have replaced desktop computing in many cases. Computing has shifted from office or home office work to an anywhere-and-anytime activity.

This course aims to prepare students for this extraordinary shift in commercial and societal focus. The possibilities of mobile device software development are endless. In this introductory course, we get your curiosity started and prepare you for more advanced app development.

Students will apply their gained knowledge in a series of practical assignments using the Android ecosystem that highlight selected portions of the software design cycle, as well as familiarize them with sound programming practices and effective tools and techniques to create successful applications. The course will also touch upon novel interaction concepts that go beyond what we normally see in today's mobile apps.


Course Requirements and Grading

This class teaches the theory and practice of effective software design for Android. You will learn about principles, procedures, and programming approaches. You will create, iterate, and evaluate interaction designs.

As this is only the second time this course is offered, the exact course requirements will emerge over the first week, once the instructor has been able to form an idea about students' backgrounds and expectations.

There will definitely be a series of design and implementation assignments that lead up to individual or group class projects. There _may_ be one exam (in week 8 or 9). We will continuously assign reading and tutorial material from online resources, which is supposed to help your design efforts and to stimulate class participation. Here is how your final grade will be determined:  

  • Exam (if held, then 30-40%)
  • Up to 6 (likely 5) one- to two-week homework assignments (tentatively 60%; if there is an exam, then 40%)
  • Final project implementation, documentation, and presentation (tentatively 40%; if there is an exam, then 25%)
  • Class participation and activities [presentations on side topics, links and answers on Piazza, demos, etc.] will be noted and can positively influence your grade

In case you disagree with any grade, submit your grievance in writing (email or paper) to the grader responsible, explaining and documenting your case.

Lateness Policy

All assignments are due at midnight on the scheduled due date. To make the deadlines more manageable, each student will be allowed four "late days" during the quarter for which lateness will not be penalized. Late days may be applied to all assignments, including design sketches and programming assignments, but not the final project! Your late days may be used as you see fit -- one or multiple per assignment -- but once you have used a late day, it's gone; you cannot reapply it to another assignment. Anything turned in after 12:00:00am and up to midnight the next day is one day late. Every day thereafter that an assignment is late, including weekends and holidays, counts as an additional late day.

Absolutely no late work will be accepted after the deadline if you have used up all your late days. If you're not done on time, you must turn in what you have to receive partial credit. There will be no exceptions to this rule. Please make sure you understand this policy.

When making use of your late days, the online submission provides the timestamp that counts. 

Academic Misconduct

We will strictly enforce UCSB's academic misconduct policies. We use electronic tools to detect plagiarism among submitted homework solutions and sources from the internet. Read these guidelines before beginning each programming assignment. Any form of plagiarism, collusion, or cheating will result in an "F" in this course and may result in suspension from UCSB for two quarters. When in doubt about any form of receiving help on your assignments, ask us!

Open Door Policy

I would like the course to be informative and enjoyable. Let us know what you find good and interesting about the course, and let us know sooner rather than later if you feel something could be improved. See us, send an e-mail, or leave us a note.

Class Materials

See the handout column in the class schedule!

Book/Materials

Other Sources

 

Class Schedule

| Wk | Class/Dis | Date | Assigned Reading | Topics | Handouts | HW out | HW due |
|----|-----------|------|------------------|--------|----------|--------|--------|
| 1 | C1 | Mon, Oct 2 | --- | Introduction, Motivation, Class Requirements, Policies; Android: Overview, Historic Context, Discussion | S0: Java Review; H1: Android Studio (AS) Tour Chapter; S1: Slides: Android Intro | Student Questionnaire | |
| 1 | D1 | Wed, Oct 4 | | Hand out Nexus Tablets; Hello Android; Android Studio | H2: AS Designer Chapter; H3: Interactive UI Design in AS; H4: Programmatical UI Creation in AS; S2: Slides: Hello Android | HW1 | Student Questionnaire |
| 1 | C2 | Fri, Oct 6 | H1, H2 | HW1 Q&A; Activities; Activity Life Cycle; Overview: Views / Layouts; (Virtual Machines) | S3: Activity Lifecycle; BM1: Slides: Virtual Machines | | |
| 2 | C3 | Mon, Oct 9 | H3, H4 (HW1) | State Saving; Layouts; Constraint Layout | H5: Android Event Handling Chapter; H6: Activity States and State Saving; S4: StatePreferences; S5: Layouts; S6: Constraint Layout | | |
| 2 | D2 | Wed, Oct 11 | H5, H6 | Event Handling; 2D Graphics (canvas, paint) | S7: 2D Graphics; H7: Views/Layout Chapters; H8: ConstraintLayout Chapters; H9: Touch and Multi-Touch | HW2 | HW1 |
| 2 | C4 | Fri, Oct 13 | H7, H8, H9 | HW2 Q&A; Color Array; AppBar (Toolbar) | S8: ActionBar/AppBar/Toolbar; H10: App Bar, Overflow Menus | | |
| 3 | C5 | Mon, Oct 16 | H10 | 2D Graphics (cont.: animation, threads); Fragments | S9: Fragments; H11: Fragment Chapters | | |
| 3 | D3 | Wed, Oct 18 | H11 | NavigationDrawer; Demo of HW3 components | S10: Speech to Text and Text to Speech; S11: Mediaplayer | HW3 | |
| 3 | C6 | Fri, Oct 20 | | Debugging; HW3 Q&A | | | HW2 |
| 4 | C7 | Mon, Oct 23 | | Intents; Internet Resource Retrieval; Quarter Project | S12: Intents | | |
| 4 | D4 | Wed, Oct 25 | | Icons; Lists; Debugging | S13: Lists | | |
| 4 | C8 | Fri, Oct 27 | | Storage/Permissions; Permission Handling | S14: Storage/Permissions; S15: Camera Access | HW4 | HW3 |
| 5 | C9 | Mon, Oct 30 | | Camera; RecyclerView; Databases | S16: SQLite Database Support; H12: SQLite Database Chapters; Android SQLite Tutorial | | Project Idea |
| 5 | D5 | Wed, Nov 1 | H12 | HW4 Q&A; Project Discussion | S17: HW4 Discussion | | |
| 5 | C10 | Fri, Nov 3 | | Non-SQL Databases: Firebase | S18: Remote Databases / Firebase; H13: Notification Chapters; H14: Firebase Chapters | | |
| 6 | C11 | Mon, Nov 6 | H13, H14 | Firebase (cont.); Google Maps | S19: Google Maps; H15: Google Maps Chapter | HW5 | |
| 6 | | Tue, Nov 7 | | | | | HW4 |
| 6 | D6 | Wed, Nov 8 | H15 | HW5 Q&A; HW5 Discussion; Notifications; RESTful APIs | S19: Notifications | | |
| 6 | | Fri, Nov 10 | | No class: Veterans Day | | | |
| 7 | C12 | Mon, Nov 13 | | GPS / Location Sensors; Services; Projects | | | |
| 7 | D7 | Wed, Nov 15 | | Useful Libraries; Demos (RESTful APIs and libraries) | | | |
| 7 | C13 | Fri, Nov 17 | | Android vs. iOS | | | HW5 |
| 8 | C14 | Mon, Nov 20 | | Cross-Platform Development; React Native | S20: Cross-Platform Development | | |
| 8 | C15 | Wed, Nov 22 | | Projects; Design Patterns I; Augmented Reality (incl. example code & demo) | | | |
| 8 | | Fri, Nov 24 | | No class: Thanksgiving Break | | | |
| 9 | C16 | Mon, Nov 27 | | Services; Design Patterns; Model View Controller | S21: Services | | |
| 9 | D9 | Wed, Nov 29 | | Android Design Patterns; Multi-Window Support | S22: Design Patterns | | |
| 9 | C17 | Fri, Dec 1 | | Projects | | | |
| 10 | C18 | Mon, Dec 4 | | Behind the Scenes: Gradle, AOSP | S23: Gradle | | |
| 10 | D10 | Wed, Dec 6 | | Help with, and feedback on, final projects | | | |
| 10 | C19 | Fri, Dec 8 | | The Future of Android | | | |
| 12 | Final slot | Mon, Dec 11, 4-7pm | | Project Presentations | | | |
| | | Wed, Dec 20 | | | | | Project Materials |

 

Assignments

  • For Wed, Oct 4: If you haven't filled in a Student Info Form during lecture 1, please fill it in and email or hand it in!

  • HW assignment 1 is due Wednesday, Oct 11, 23:59:59. Use GauchoSpace to submit your solutions!
  • HW assignment 2 is due Friday, Oct 20, 23:59:59. Use GauchoSpace to submit your solutions!
  • HW assignment 3 is due Friday, Oct 27, 23:59:59. Use GauchoSpace to submit your solutions!
  • HW assignment 4 is due Tuesday, Nov 7, 23:59:59. Use GauchoSpace to submit your solutions!
  • HW assignment 5 is due Friday, Nov 17, 23:59:59. Use GauchoSpace to submit your solutions!

  • Quarter Project idea is due Monday, Oct 30th, 23:59:59. Use Gauchospace to submit your document! Everyone needs to submit, but if you have already formed a group, members can submit the same document!  

  • Your final Quarter Project materials are now due Wednesday, Dec. 20th, 23:59:59. Use GauchoSpace to submit your materials. Only one submission is needed per group. Please coordinate which team member will do this submission.



For questions, please contact the instructor and/or TA.

CS184 Project 3: Building a Path Tracer (UC Berkeley)

Ray generation, intersection, acceleration, direct and indirect lighting, and some materials to boot: we built a path tracer for our third project in CS184 at UC Berkeley. Let's get started!

Part 1: Ray Generation and Intersection

Ray Generation

To begin, it's important to note here that we are generating an image as if it were taken through a pinhole camera. A pinhole camera has all incoming light rays pass through a single point onto the picture plane. Normally this results in an upside-down image, as shown below:

By moving the image plane in front of the pinhole, we can achieve a correctly oriented image. To do this, we cast rays (from the pinhole) through the image plane, into the scene. We then render the image plane as output. The diagram below shows this setup:


To generate rays, we implement two functions: PathTracer::raytrace_pixel and Camera::generate_ray.

In Camera::generate_ray(x, y), we do the work of creating Ray objects. A Ray needs both an origin and a direction. Since these rays come from our camera, their origin is a single point in space (hence why this is a pinhole system). The parameters x and y determine the direction of the ray. We want to transform the coordinates (x, y) (both within the range [0,1]) to a point on our picture plane: the input point (0,0) should map to the bottom left of the image plane, and (1,1) to the top right. This can be accomplished by a simple interpolation equation:

Ray_Direction = Vector3D(bottom_left.x + (x * difference.x), bottom_left.y + (y * difference.y), -1);

where difference is a Vector3D defined by:

difference = top_right - bottom_left;

top_right and bottom_left are also Vector3D objects. For the sake of brevity, let's examine x. When x = 0:

Ray_Direction.x = bottom_left.x + (0 * difference.x) = bottom_left.x;

and when x = 1:

Ray_Direction.x = bottom_left.x + (1 * difference.x) = bottom_left.x + top_right.x - bottom_left.x = top_right.x;

For the situations when 0 < x < 1, we get a nice linear interpolation between bottom_left.x and top_right.x. The same goes for the y component of Ray_Direction. We want our rays to go along the -z axis (since our OpenGL libraries use this as "into the screen" in their coordinate system).

With that, we can effectively create rays starting at a defined origin and passing through our picture plane.
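Putting the pieces together, here is a minimal sketch of the whole function, assuming the project's Vector3D/Ray types. The hFov/vFov-based corner computation and the c2w camera-to-world rotation are assumptions about the camera setup, not confirmed details of the skeleton:

Ray Camera::generate_ray(double x, double y) const {
  // Corners of the image plane at z = -1, computed from the fields of view
  // (hFov and vFov in degrees; assumed members of Camera).
  Vector3D bottom_left(-tan(0.5 * hFov * M_PI / 180.0),
                       -tan(0.5 * vFov * M_PI / 180.0), -1.0);
  Vector3D top_right(tan(0.5 * hFov * M_PI / 180.0),
                     tan(0.5 * vFov * M_PI / 180.0), -1.0);
  Vector3D difference = top_right - bottom_left;
  // Interpolate: (0,0) maps to the bottom left, (1,1) to the top right.
  Vector3D dir(bottom_left.x + x * difference.x,
               bottom_left.y + y * difference.y,
               -1.0);                  // rays go along -z, "into the screen"
  dir = c2w * dir;                     // rotate into world space (assumed member)
  dir.normalize();
  return Ray(pos, dir);                // origin: the camera position (the pinhole)
}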

We then use these rays in PathTracer::raytrace_pixel(x, y). The input parameters are the coordinates of the bottom left of the pixel we want to raytrace through. Here, we basically either (see the sketch after this list):

  • cast a single ray through the center of the pixel, or
  • sample ns_aa rays through the pixel at random locations, where ns_aa is a parameter we provide when running the program from the command line.
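A minimal sketch of that pixel loop, under the assumption that gridSampler yields uniform samples in [0,1)^2 and that trace_ray returns the radiance for one ray (both names are assumptions, not the skeleton's confirmed API):

Spectrum PathTracer::raytrace_pixel(size_t x, size_t y) {
  int num_samples = ns_aa;  // samples per pixel, set on the command line
  if (num_samples == 1) {
    // Single ray through the center of the pixel.
    Ray r = camera->generate_ray((x + 0.5) / sampleBuffer.w,
                                 (y + 0.5) / sampleBuffer.h);
    return trace_ray(r);
  }
  Spectrum total;
  for (int i = 0; i < num_samples; i++) {
    Vector2D s = gridSampler->get_sample();  // random point in [0,1)^2
    Ray r = camera->generate_ray((x + s.x) / sampleBuffer.w,
                                 (y + s.y) / sampleBuffer.h);
    total += trace_ray(r);
  }
  return total * (1.0 / num_samples);  // average of all samples
}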

Now we’re casting rays into our scene, but we’re sending them out into a vacuum. We need to define some objects that our rays are going to interact with.

Scene Intersection

Right, so this is the picture we get without scene intersection… which makes sense, since we aren't intersecting with anything at the moment. It's time to introduce some primitives: triangles and spheres.

Triangle Intersection

This is all about one very important algorithm: the Möller-Trumbore algorithm.

To begin, we’ve used barycentric coordinates in previous projects to determine wether a point P lies inside the triangle. This is characterized by the equation:

P = Aw + Bu + Cv

Where w,u,v are barycentric coordinates and A,B,C are vertices of the triangle we are trying to test intersection with. We use the fact that w + u + v = 1 to derive the following:

P = A(1-u-v) + Bu + Cv = A − Au − Av + uB + vC = A + u(B−A) + v(C−A)

So we have P in terms of a single vertex, A, two edges, B-A and C-A, and two barycentric coordinates, u and v.

With our ray equation, we have another equation for triangle intersection:

P = O + tD

where O is the ray’s origin, D is its direction, and t is the ray’s travel time. Combining these two equations yields:

O + tD = A + u(B−A) + v(C−A)

moving terms around gives:


O - A = -tD + u(B-A) + v(C-A)

We seek u and v, since the barycentric coordinates let us determine whether or not the point P lies within the triangle. We also seek t, since we'll be using the intersection time with the triangle in many places throughout this project. These fit nicely into the following matrix equation:

[ -D | B-A | C-A ] [ t u v ]^T = O - A

This part seems like magic, but basically we can use Cramer's rule. In my own words, it says we can solve this linear system by replacing one column of the matrix at a time with the right-hand side and dividing the determinant of that modified matrix by the determinant of the original matrix. Each 3x3 determinant can be written as a scalar triple product, det[a b c] = (a x b) . c, so with E1 = B - A, E2 = C - A, T = O - A, P = D x E2, and Q = T x E1, this works out to:

[ t u v ]^T = (1 / (P . E1)) [ Q . E2, P . T, Q . D ]^T

which finally gives us t, u, and v.

Phew… that was a mouthful. Anyways, here's what the code looks like:

Vector3D e_1 = p2 - p1;
Vector3D e_2 = p3 - p1;
Vector3D T = r.o - p1;
Vector3D s_1 = cross(r.d, e_2);
Vector3D s_2 = cross(T, e_1);
double det = 1.0 / dot(s_1, e_1);
//compute t
double t = det * dot(s_2, e_2);
//compute u
double u = det * dot(s_1, T);
//compute v
double v = det * dot(s_2, r.d);

The ray intersects the triangle iff the following conditions hold (a code sketch of this test follows the list):

  • t is between min_t and max_t,
  • u is between 0 and 1,
  • v is greater than 0, and u + v is less than 1.
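Continuing from the snippet above, a sketch of how those checks might look in code (min_t/max_t follow the spirit of the project's Ray class; treat the exact names as assumptions):

// Reject if the hit is outside the ray's valid range.
if (t < r.min_t || t > r.max_t) return false;
// Reject if the barycentric coordinates put P outside the triangle.
if (u < 0.0 || u > 1.0) return false;
if (v < 0.0 || u + v > 1.0) return false;
r.max_t = t;  // shrink the valid range so any later hit must be closer
return true;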

With all that said and done, we are rewarded with the following picture:

Sphere Intersection

We use the quadratic formula to determine whether or not a ray intersects a sphere. The coefficients of the quadratic are:

a = dot(ray.d, ray.d);
b = 2 * dot(o - c, ray.d);
c = dot(o - c, o - c) - R^2;

Where:

  • ray.d is the ray's direction
  • o is the ray's origin
  • c is the center of the sphere
  • R is the radius of the sphere

Since there can be 0, 1, or 2 solutions to the quadratic equation, there are three scenarios for a sphere intersection:

  • No solutions: the ray misses the sphere.
  • 1 solution: the ray grazes the sphere, touching it at a single (tangent) point.
  • 2 solutions: the ray enters the sphere at one point and exits at another.

When there are 2 solutions, there's a subtlety as to which of these solutions we assign to our ray's hit time. In most cases, we will want the smaller value, since this is when the light ray first hits the sphere. There is a case where we want the larger hit time, but we'll get to that later in the write-up.
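A minimal sketch of the quadratic solve, assuming the project's Ray type and a sphere with center o and squared radius r2 (member names are assumptions):

// Returns false on a miss; otherwise t1 <= t2 are the two hit times.
bool Sphere::test(const Ray& ray, double& t1, double& t2) const {
  Vector3D oc = ray.o - o;            // from sphere center to ray origin
  double a = dot(ray.d, ray.d);
  double b = 2.0 * dot(oc, ray.d);
  double c = dot(oc, oc) - r2;
  double disc = b * b - 4.0 * a * c;  // discriminant: 0, 1, or 2 solutions
  if (disc < 0.0) return false;       // the ray misses the sphere
  double sq = sqrt(disc);
  t1 = (-b - sq) / (2.0 * a);         // first (entry) hit
  t2 = (-b + sq) / (2.0 * a);         // second (exit) hit
  return true;
}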

With sphere intersection implemented, we are treated to the following:

Putting both pictures together gives the following scene:

Part 2: BVH Construction and Intersection

A BVH, or Bounding Volume Hierarchy, does just what the name says: it places the objects in the scene into a structure (in this case, a binary tree) of rectangular bounding volumes. This lets us first test ray intersection against axis-aligned boxes, rather than against every single primitive in the scene.

Construction

To construct a BVH, we need to select a heuristic for our splits. Ideally we would pick split points that reduce expected ray intersection time by the greatest amount. Instead, I opted for a cleaner (and easier) implementation.

At every split, we choose an axis to split on (x, y, or z). For a given bounding box, we choose the longest axis. So if:

extent.x > extent.y and extent.x > extent.z

we split along the x axis. The same goes for extent.y and extent.z. (extent is a nice helper that returns the lengths of a bounding box's axes… extent is a fitting name.)

So we’ve chosen an axis to split on… great, but now we must choose a point on the axis to do the splitting? Well, just use the midpoint.

The following pictures show a couple steps of the splitting of bounding volumes:

There's an important catch to consider: all primitives may fall to one side of the split point. This would be bad and result in an infinite loop (since the subproblem is not getting smaller). If this is the case, just stop and make the node a leaf… there's nothing more to do. Since we split on the centroids of our primitives, it's rare for all primitives to fall on a single side until only a few are left.
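Here is a rough sketch of this construction, assuming a BVHNode with bb, l, r, and prims members and a BBox with expand/centroid/extent helpers (all names are assumptions in the spirit of the project skeleton):

BVHNode* construct_bvh(const std::vector<Primitive*>& prims, size_t max_leaf_size) {
  BBox bbox;
  for (Primitive* p : prims) bbox.expand(p->get_bbox());
  BVHNode* node = new BVHNode(bbox);
  if (prims.size() <= max_leaf_size) {           // few enough primitives: leaf
    node->prims = new std::vector<Primitive*>(prims);
    return node;
  }
  Vector3D extent = bbox.extent;
  int axis = (extent.x > extent.y) ? 0 : 1;      // pick the longest axis
  if (extent.z > ((axis == 0) ? extent.x : extent.y)) axis = 2;
  double mid = bbox.min[axis] + 0.5 * extent[axis];  // split at the midpoint
  std::vector<Primitive*> left, right;           // partition by centroid
  for (Primitive* p : prims) {
    if (p->get_bbox().centroid()[axis] < mid) left.push_back(p);
    else right.push_back(p);
  }
  if (left.empty() || right.empty()) {           // degenerate split: make a leaf
    node->prims = new std::vector<Primitive*>(prims);
    return node;
  }
  node->l = construct_bvh(left, max_leaf_size);
  node->r = construct_bvh(right, max_leaf_size);
  return node;
}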

Intersecting of Bounding Boxes

This may seem like a tangent, but it's important and easy. Consider each BVH node as a bounding box with either:

  • a list of primitives, or
  • a left and a right child, which are also BVH nodes.

We can use the bounding boxes to test if a ray misses. If a ray misses, we have an easy out… which is great.

To check if a ray intersects a bounding box, it suffices to intersect the ray against the three pairs of axis-aligned planes (slabs) that bound the box. The ray intersects the bounding box iff the intersection of the three per-axis intervals is non-empty.
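A minimal sketch of that slab test (BBox min/max and Ray min_t/max_t members assumed):

// Intersect the per-axis [tmin, tmax] intervals; an empty intersection is a miss.
bool BBox::intersect(const Ray& r, double& t0, double& t1) const {
  for (int axis = 0; axis < 3; axis++) {
    double tmin = (min[axis] - r.o[axis]) / r.d[axis];
    double tmax = (max[axis] - r.o[axis]) / r.d[axis];
    if (tmin > tmax) std::swap(tmin, tmax);  // handle negative directions
    t0 = std::max(t0, tmin);                 // running intersection of intervals
    t1 = std::min(t1, tmax);
    if (t0 > t1) return false;               // intervals no longer overlap: miss
  }
  return true;
}

(Division by a zero direction component produces IEEE infinities, which this interval logic handles gracefully.)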

Intersecting BVHs

This part is very straightforward. To test if a ray intersects a BVH, first check to make sure it hits the BVH's bounding box. If it doesn't, we win! And by winning, I mean we don't have to check against every primitive. This is great, and again, why BVHs are amazing.

If it does hit the bounding box, we then check whether this BVH node is a leaf (i.e., it has a list of primitives we have to check against) or an internal node (i.e., it has a left and a right child).

If it is a leaf, we have to check against all the primitives: we then run the primitive intersection code described above.

If it is a node, we recursively check intersection in the left and right BVH, being careful to return the closest hit if both return true.
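A sketch of the traversal just described (again, member names like bb, l, r, prims, and isLeaf are assumptions):

bool BVHAccel::intersect(const Ray& r, Intersection* isect, BVHNode* node) const {
  double t0 = r.min_t, t1 = r.max_t;
  if (!node->bb.intersect(r, t0, t1)) return false;  // easy out: missed the box
  if (node->isLeaf()) {
    bool hit = false;
    for (Primitive* p : *(node->prims))
      hit = p->intersect(r, isect) || hit;  // each hit shrinks r.max_t
    return hit;
  }
  // Internal node: test both children; because primitives shrink r.max_t,
  // the closest hit automatically survives in *isect.
  bool hit_left = intersect(r, isect, node->l);
  bool hit_right = intersect(r, isect, node->r);
  return hit_left || hit_right;
}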

Once we have BVH construction and intersection up and running, we can render some larger .dae files, shown below:

Part 3: Direct Lighting

For this part, we are interested in the light contributed only by the light sources in the scene. We run this on every ray that returns a hit point.

Before we sample the light, we first check to see if the light source is an area light or a point light.

If it is a point light, it doesn't matter how many samples we take of the light source; they're always going to land in the same place. So we just take a single light sample.

If it is an area light, then we take as many samples as we’re told (and we’re told through the command line when running the program). In this case, we get an RGB value for each light sample.

From each of these light samples, we get an outgoing vector. We use this vector to determine whether or not we can actually reach the light. That is, we create a single shadow ray starting at the hit point we are trying to color and test whether the shadow ray hits any other object before it reaches the position we sampled on the light. If the shadow ray hits an object, we mark this sample as black and continue.

If our shadow ray doesn't intersect any scene objects (and therefore reaches the light), then we add the value of this hit point's BSDF (which we will talk about in Part 5) multiplied by the contribution of the light sample. We keep in mind to also multiply by the cosine of the outgoing angle, since the irradiance at a surface point drops as the angle between the light direction and the surface normal approaches 90 degrees.

Also, when summing up these sample values, we must divide each sample by its probability density! This keeps the estimate unbiased: a direction sampled with low probability stands in for a larger share of the sampling domain, so dividing by its probability weights it up accordingly, while high-probability directions are weighted down.

In the end, we divide by the total number of samples, and return the irradiance for each Ray hit point.
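A condensed sketch of the estimator described above. The SceneLight::sample_L interface, the EPS_D offset, and the ns_area_light count mirror the flavor of the project skeleton but should be read as assumptions:

Spectrum estimate_direct_lighting(const Ray& r, const Intersection& isect) {
  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = -r.d;
  Spectrum L_out;
  for (SceneLight* light : scene->lights) {
    // Point (delta) lights need only one sample; area lights need many.
    int ns = light->is_delta_light() ? 1 : ns_area_light;
    Spectrum L_light;
    for (int i = 0; i < ns; i++) {
      Vector3D wi; double dist, pdf;
      Spectrum rad = light->sample_L(hit_p, &wi, &dist, &pdf);
      double cos_theta = dot(wi, isect.n);
      if (cos_theta <= 0.0) continue;            // light is behind the surface
      Ray shadow(hit_p + EPS_D * wi, wi);        // offset to avoid self-hits
      shadow.max_t = dist - EPS_D;
      if (bvh->intersect(shadow)) continue;      // occluded: sample is black
      // BSDF * incoming radiance * cosine, divided by the sample's pdf.
      L_light += isect.bsdf->f(w_out, wi) * rad * cos_theta * (1.0 / pdf);
    }
    L_out += L_light * (1.0 / ns);               // average over the samples
  }
  return L_out;
}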

Here are some pictures using only direct lighting:

Notice that all the shadows are quite hard. There seems to be a little blending at the edges, but this is due to dividing by the probability of our samples, which reduces the value of our BSDF sample. If we didn't account for the probability of the light sample, we would get the full BSDF sample value at these locations (no greying).

Here is the bunny with 2 area light samples:

It's hard to see the difference. The shadow cast by the tail seems to be getting a bit softer, with less contrast between black and grey shadow.

The shadows are getting much better resolution now that we are averaging 5 samples.

After 20 light samples, I think we're getting some smooth shadows.

Part 4: Indirect Lighting

Indirect illumination follows a similar principle to direct illumination. Again, we are interested in the irradiance at a ray's hit point. However, in this case, we sample the BSDF of our hit point first, and then recursively cast another ray leaving this hit point and add its value to our irradiance.

This sounds like it would take forever, right? Well, it would, except we implement a Russian roulette (classy name) scheme that randomly stops taking indirect lighting samples per hit point based on the illuminance of the BSDF value. This means that if the BSDF doesn't contribute much irradiance, we should have a high chance of terminating the path, since this hit point isn't contributing much light anyway.

We also implement a maximum ray depth, which is an assurance that our rays will terminate after a certain number of bounces.

Again, we make sure to weight each irradiance value (the BSDF sample times the recursive indirect lighting sample) by the cosine of the outgoing sample angle, and to divide by both the outgoing-angle probability and by 1 minus the Russian roulette termination probability (i.e., the continuation probability).
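A sketch of one indirect bounce with Russian roulette; illum(), coin_flip, clamp, and EPS_D are assumed helpers, and the termination probability here is one arbitrary choice, not the write-up's exact constants:

Spectrum estimate_indirect_lighting(const Ray& r, const Intersection& isect) {
  Vector3D w_out = -r.d;
  Vector3D w_in;
  double pdf;
  Spectrum f = isect.bsdf->sample_f(w_out, &w_in, &pdf);   // sample the BSDF
  // Dim BSDF samples should be likely to terminate (Russian roulette).
  double term_prob = 1.0 - clamp(f.illum() * 10.0, 0.0, 1.0);
  if (r.depth >= max_ray_depth || coin_flip(term_prob)) return Spectrum();
  Vector3D hit_p = r.o + r.d * isect.t;
  Ray bounce(hit_p + EPS_D * w_in, w_in);
  bounce.depth = r.depth + 1;                  // count bounces toward the depth cap
  double cos_theta = fabs(dot(w_in, isect.n));
  // Weight by the cosine; divide by the pdf and the continuation probability.
  return trace_ray(bounce) * f * cos_theta
         * (1.0 / (pdf * (1.0 - term_prob)));
}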

Below is a scene of lambertian spheres, rendered first with only direct lighting, and then, only indirect lighting:

In the direct case, we get no color from the walls on either sphere or on the ground, which makes sense, since light bouncing off the walls only arrives in the indirect case. We get much tighter, softer shadows in the indirect illumination example compared to the intense direct-light shadows in the other example. The two images look starkly different, but their sum is the full lighting solution (note also that no light reflects back onto the ceiling with direct illumination alone). Adding the two images gives the following (both direct and indirect):

Here are the same spheres, but rendered with max depth ranging over 1, 2, 3, 4, 5, and 100:

Can't really see a difference? Neither can I. Maybe it will become more apparent in the next part when we have some materials.

Below is a progression of the scene above, ranging from 1 sample/pixel to 1024 samples/pixel.

Part 5: Materials

Here, we implement BSDFs for both mirror and glass materials. But first, what's a BSDF?
A BSDF is a Bidirectional Scattering Distribution Function (it makes sense why we abbreviate it). It is a function that, given the incoming and outgoing directions of light, returns how much light is scattered between them. This is a material property, so different materials will have different functions for distributing light.

So we wrote a BSDF for a mirror and one for glass, and therefore had to implement a reflection and a refraction transformation, respectively.

For a mirror, a reflection is needed. Here's a nice fact: the BSDF works in a local coordinate frame at the hit point, so we can assume the surface normal is simply (0,0,1). We use this to reflect the outgoing vector wo into wi, like so:

wi = (-wo.x, -wo.y, wo.z)

Easy. We then take that vector, wi, and return:

reflectance / abs_cos(wi)

reflectance is the material property, and we divide by abs_cos(wi) to cancel the cosine multiplication done in Part 4.
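As a sketch, in the local frame where the normal is (0,0,1) (the sample_f signature and the reflectance member are assumptions):

Spectrum MirrorBSDF::sample_f(const Vector3D& wo, Vector3D* wi, double* pdf) {
  *wi = Vector3D(-wo.x, -wo.y, wo.z);  // perfect mirror reflection about (0,0,1)
  *pdf = 1.0;                          // delta distribution: always this direction
  return reflectance * (1.0 / fabs(wi->z));  // divide out the integrator's cosine
}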

For glass, we need refraction. Again, since we always have a surface normal of (0,0,1):

phi_o = -phi_i

This will come in handy. Here’s a diagram:

So in refraction, an incoming vector makes an angle theta_o with the normal, which points away from the blue material. This vector, which we will call wo, is refracted as it enters the material to an angle theta_i. We seek the resulting vector, call it wi, given wo. n_o and n_i are the indices of refraction of the two media; in this diagram, n_o is that of the glass and n_i that of the air.

Snell's law states:

sin(theta_i) = (n_o / n_i) sin(theta_o)

Let's consider the x coordinate of wi, converting from spherical to Cartesian coordinates (the y coordinate behaves the same way):

wi.x = sin(theta_i) cos(phi_i), so sin(theta_i) = wi.x / cos(phi_i)

The same goes for sin(theta_o); specifically:

sin(theta_o) = wo.x / cos(phi_o)

Using Snell's law, we can substitute our sin(theta) terms for terms involving our Cartesian coordinates:

wi.x / cos(phi_i) = (n_o / n_i) * (wo.x / cos(phi_o))

We also know that, as stated above:

phi_o = -phi_i

So...

cos(phi_o) = cos(-phi_i) = cos(phi_i)

...and the cosines cancel...

wi.x / cos(phi_i) = (n_o / n_i) * (wo.x / cos(phi_i))

...finally!

wi.x = (n_o / n_i) * wo.x

In this project, we have wo pointing away from the surface, and we want wi to point into the glass, so we have to negate our Cartesian coordinates. This yields:

wi.x = -(n_o / n_i) * wo.x

The same goes for wi.y.

As for wi.z, we know that |wi.z| = cos(theta_i), since our normals are always (0,0,1)… but we only have sin(theta_i). Well, thank goodness for trig again:

cos(theta_i) = sqrt(1 - sin^2(theta_i))

We give wi.z the opposite sign of wo.z, again because of our conventions in this project (wi must point into the material).

Okay, so that's refraction. It's also important to note that sin(theta_i) can never be greater than 1 (that would defy the laws of mathematics!). If Snell's law yields sin(theta_i) > 1, there is no refracted ray; this is total internal reflection, and we should reflect rather than refract.

Almost done. Since glass doesn't just refract (it also reflects), we need a probabilistic way of determining whether an incoming ray reflects or refracts. The simple solution is to use the reflection coefficient R given by Schlick's approximation to decide whether we reflect or refract for a given incoming light direction.

If it reflects, we return:

reflectance * R / abs_cos(wi)

If it refracts, we return:

transmittance * (1 - R) * (n_o/n_i)^2 / abs_cos(wi)

Here, abs_cos(wi) is needed because our surface normals always point away from the material rather than into it, as is our project convention, so wi.z can be negative for transmitted rays.
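Pulling reflection, refraction, total internal reflection, and Schlick's approximation together, a sketch of the glass BSDF might look like this (ior, reflectance, transmittance, and coin_flip are assumed names, and the signature is an assumption):

Spectrum GlassBSDF::sample_f(const Vector3D& wo, Vector3D* wi, double* pdf) {
  // The ratio n_o/n_i flips depending on whether we are entering or leaving.
  double eta = (wo.z > 0.0) ? 1.0 / ior : ior;
  double sin2_i = eta * eta * (1.0 - wo.z * wo.z);  // Snell's law, squared
  if (sin2_i > 1.0) {                    // total internal reflection
    *wi = Vector3D(-wo.x, -wo.y, wo.z);
    *pdf = 1.0;
    return reflectance * (1.0 / fabs(wi->z));
  }
  double cos_i = sqrt(1.0 - sin2_i);
  // Schlick's approximation for the reflection coefficient R.
  double r0 = pow((1.0 - ior) / (1.0 + ior), 2.0);
  double R = r0 + (1.0 - r0) * pow(1.0 - fabs(wo.z), 5.0);
  if (coin_flip(R)) {                    // reflect with probability R
    *wi = Vector3D(-wo.x, -wo.y, wo.z);
    *pdf = R;
    return R * reflectance * (1.0 / fabs(wi->z));
  }
  // Refract: scale x and y by n_o/n_i and flip z into the other medium.
  *wi = Vector3D(-eta * wo.x, -eta * wo.y, (wo.z > 0.0) ? -cos_i : cos_i);
  *pdf = 1.0 - R;
  return (1.0 - R) * transmittance * (eta * eta) * (1.0 / fabs(wi->z));
}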

Now we can render both mirrored and glass surfaces. The following sequence uses max depths of 1, 2, 3, 4, 5, and 100:

The first image is interesting: the light bounces off the mirrored surfaces, but we cannot see any light that refracts inside the glass sphere! This is because we terminate the ray before it can leave the sphere.

Remember way back when we did sphere intersection? This is the one time we need the second ray hit point. When the ray leaves the sphere, it leaves at the second intersection point (meaning that the ray begins somewhere inside the sphere). We see this in the second picture, where max_ray_depth = 2.

In the third picture in the series, light leaving the glass sphere arrives at the ground (creating a caustic), and it also arrives at the mirrored sphere (it appears blue in the mirror sphere). Also, notice that at this depth, light that hit the ground has reflected back up into the sphere, resulting in a tiny white highlight on the glass sphere's bottom right.

When the depth = 4, we see this highlight reflected onto the blue wall.

From depth = 5 to about depth = 100, we only really see the scene get lighter, as light bounces around the entire scene until it is either stopped by the depth limit (with depth closer to 5) or by Russian roulette (with depth closer to 100).

Here's a high-quality version of the mirror and glass spheres:

Here are some more high-res images:

Here's a golden dragon in a box, ranging from 1 sample/pixel to 1024 samples/pixel:

That concludes this walkthrough of the path tracer for CS184 Assignment 3. Thanks for reading!
