Merged
12 changes: 6 additions & 6 deletions README.md
@@ -1,9 +1,9 @@
Ray Tracing In One Weekend Book Series
====================================================================================================

-| ![Ray Tracing in One Weekend][cover1] | ![Ray Tracing: The Next Week][cover2] | ![Ray Tracing: The Rest Of Your Life][cover3]
-|:------------------:|:-----------------:|:-------------------------:|
-| [In One Weekend][] | [The Next Week][] | [The Rest Of Your Life][] |
+| ![RT in One Weekend][cover1] | ![RT The Next Week][cover2] | ![RT The Rest Of Your Life][cover3] |
+|:----------------------------:|:---------------------------:|:-----------------------------------:|
+| [In One Weekend][]           | [The Next Week][]           | [The Rest Of Your Life][]           |


Getting the Books
@@ -32,10 +32,10 @@ review the [CONTRIBUTING][] document for the most effective way to proceed.
[cover1]: images/RTOneWeekend-small.jpg
[cover2]: images/RTNextWeek-small.jpg
[cover3]: images/RTRestOfYourLife-small.jpg
-[In One Weekend]: InOneWeekend
+[In One Weekend]: books/RayTracingInOneWeekend.html
[releases]: https://github.com/RayTracing/raytracing.github.io/releases/
[Hack the Hood]: https://hackthehood.org/
[Real-Time Rendering]: https://realtimerendering.com/#books-small-table
[submit issues via GitHub]: https://github.com/raytracing/raytracing.github.io/issues/
-[The Next Week]: TheNextWeek
-[The Rest Of Your Life]: TheRestOfYourLife
+[The Next Week]: books/RayTracingTheNextWeek.html
+[The Rest Of Your Life]: books/RayTracingTheRestOfYourLife.html
71 changes: 37 additions & 34 deletions books/RayTracingInOneWeekend.html
@@ -1,4 +1,7 @@
<meta charset="utf-8">
<!-- Markdeep: https://casual-effects.com/markdeep/ -->



**Ray Tracing in One Weekend**
Peter Shirley
@@ -63,7 +66,7 @@
write it to a file. The catch is, there are so many formats and many of those are complex. I always
start with a plain text ppm file. Here’s a nice description from Wikipedia:

-![Image 1-1: PPM Example](../images/img-1-01-1.jpg)
+![Image 2-1: PPM Example](../images/img-1-02-1.jpg)

Let’s make some C++ code to output such a thing:
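The listing itself is collapsed in this diff. A self-contained sketch of such a PPM writer (the function name `write_ppm` and the 0.2 blue component are illustrative; the book's listing writes directly from `main`):

```cpp
#include <iostream>
#include <sstream>

// Write a plain-text PPM image: header "P3", width, height, max color
// value, then one RGB triple per pixel. The body is the classic
// gradient: red grows left to right, green grows bottom to top.
void write_ppm(std::ostream& out, int nx, int ny) {
    out << "P3\n" << nx << " " << ny << "\n255\n";
    for (int j = ny - 1; j >= 0; j--) {       // rows, top row first
        for (int i = 0; i < nx; i++) {        // columns, left to right
            float r = float(i) / float(nx);
            float g = float(j) / float(ny);
            float b = 0.2f;
            out << int(255.99f * r) << " "
                << int(255.99f * g) << " "
                << int(255.99f * b) << "\n";
        }
    }
}
```

Calling `write_ppm(std::cout, 200, 100)` from `main` and redirecting stdout to a `.ppm` file reproduces the gradient image below.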

@@ -105,7 +108,7 @@
Opening the output file (in ToyViewer on my Mac, but try it in your favorite viewer and google “ppm
viewer” if your viewer doesn’t support it) shows:

-![Image 1-2](../images/img-1-01-2.jpg)
+![Image 2-2](../images/img-1-02-2.jpg)

Hooray! This is the graphics “hello world”. If your image doesn’t look like that, open the output
file in a text editor and see what it looks like. It should start something like this:
@@ -329,7 +332,7 @@
front of $A$, and this is what is often called a half-line or ray. The example $C = p(2)$ is shown
here:

-![Figure 3-1](../images/fig-1-03-1.jpg)
+![Figure 4-1](../images/fig-1-04-1.jpg)

The function $p(t)$ in more verbose code form I call “point_at_parameter(t)”:
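The collapsed listing is the book's `ray` class; a minimal sketch of it, with a bare-bones stand-in for the book's `vec3`:

```cpp
// Bare-bones stand-in for the book's vec3 class (just what p(t) needs).
struct vec3 { float x, y, z; };
inline vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline vec3 operator*(float t, vec3 v) { return {t*v.x, t*v.y, t*v.z}; }

// A ray is an origin A plus a direction B scaled by the parameter t:
// p(t) = A + t*B.
class ray {
public:
    ray(vec3 a, vec3 b) : A(a), B(b) {}
    vec3 origin() const    { return A; }
    vec3 direction() const { return B; }
    vec3 point_at_parameter(float t) const { return A + t*B; }
    vec3 A, B;
};
```

Negative `t` lands behind the origin $A$; the half-line in front of $A$ is the part we trace.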

@@ -369,7 +372,7 @@
sides to move the ray endpoint across the screen. Note that I do not make the ray direction a unit
length vector because I think not doing that makes for simpler and slightly faster code.

-![Figure 3-2](../images/fig-1-03-2.jpg)
+![Figure 4-2](../images/fig-1-04-2.jpg)

Below in code, the ray $r$ goes to approximately the pixel centers (I won’t worry about exactness
for now because we’ll add antialiasing later):
@@ -418,7 +421,7 @@

with $t$ going from zero to one. In our case this produces:

-![Image 3-1](../images/img-1-03-1.jpg)
+![Image 4-1](../images/img-1-04-1.jpg)
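The blue-to-white background above is a linear interpolation keyed on the ray direction's $y$ component; the book writes it inline, but the core can be sketched as (the `lerp` name is mine):

```cpp
struct vec3 { float x, y, z; };
inline vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline vec3 operator*(float t, vec3 v) { return {t*v.x, t*v.y, t*v.z}; }

// Linear interpolation ("lerp"): blended = (1-t)*start + t*end with
// t in [0,1]; t = 0 gives start, t = 1 gives end.
inline vec3 lerp(vec3 start, vec3 end, float t) {
    return (1.0f - t)*start + t*end;
}
```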



@@ -464,7 +467,7 @@
(meaning no real solutions), or zero (meaning one real solution). In graphics, the algebra almost
always relates very directly to the geometry. What we have is:

-![Figure 4-1](../images/fig-1-04-1.jpg)
+![Figure 5-1](../images/fig-1-05-1.jpg)

If we take that math and hard-code it into our program, we can test it by coloring red any pixel
that hits a small sphere we place at -1 on the z-axis:
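The hard-coded test is collapsed in this diff; a self-contained sketch of it (stand-in `vec3`, free-function form):

```cpp
struct vec3 { float x, y, z; };
inline vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Does origin + t*direction hit the sphere (center, radius)?
// Substituting the ray into |p - center|^2 = radius^2 gives a
// quadratic in t; a positive discriminant means two real roots,
// i.e. the ray pierces the sphere.
bool hit_sphere(vec3 center, float radius, vec3 origin, vec3 direction) {
    vec3 oc = origin - center;                 // A - C
    float a = dot(direction, direction);
    float b = 2.0f * dot(oc, direction);
    float c = dot(oc, oc) - radius*radius;
    float discriminant = b*b - 4.0f*a*c;
    return discriminant > 0.0f;
}
```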
@@ -490,7 +493,7 @@

What we get is this:

-![Image 4-1](../images/img-1-04-1.jpg)
+![Image 5-1](../images/img-1-05-1.jpg)

Now this lacks all sorts of things -- like shading and reflection rays and more than one object --
but we are closer to halfway done than we are to our start! One thing to be aware of is that we
@@ -510,7 +513,7 @@
as are most design decisions like that. For a sphere, the normal is in the direction of the hitpoint
minus the center:

-![Figure 5-1](../images/fig-1-05-1.jpg)
+![Figure 6-1](../images/fig-1-06-1.jpg)

On the earth, this implies that the vector from the earth’s center to you points straight up. Let’s
throw that into the code now, and shade it. We don’t have any lights or anything yet, so let’s just
@@ -549,7 +552,7 @@

And that yields this picture:

-![Image 5-1](../images/img-1-05-1.jpg)
+![Image 6-1](../images/img-1-06-1.jpg)

Now, how about several spheres? While it is tempting to have an array of spheres, a very clean
solution is to make an “abstract class” for anything a ray might hit and make both a sphere and a
@@ -725,7 +728,7 @@
This yields a picture that is really just a visualization of where the spheres are along with their
surface normal. This is often a great way to look at your model for flaws and characteristics.

-![Image 5-2](../images/img-1-05-2.jpg)
+![Image 6-2](../images/img-1-06-2.jpg)



@@ -783,7 +786,7 @@
For a given pixel we have several samples within that pixel and send rays through each of the
samples. The colors of these rays are then averaged:

-![Figure 6-1](../images/fig-1-06-1.jpg)
+![Figure 7-1](../images/fig-1-07-1.jpg)
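The per-pixel averaging can be sketched as follows (the book uses `drand48()`; `rand()` stands in here, and `sample_color` is a placeholder for tracing a ray through screen coordinates `(u, v)` so the sketch is self-contained):

```cpp
#include <cstdlib>   // rand, RAND_MAX

// Placeholder for "trace a ray through (u, v) and return its color";
// here a simple gradient so the sketch compiles on its own.
float sample_color(float u, float v) { return 0.5f * (u + v); }

// Average ns jittered samples inside pixel (i, j) of an nx-by-ny image.
float pixel_color(int i, int j, int nx, int ny, int ns) {
    float col = 0.0f;
    for (int s = 0; s < ns; s++) {
        float u = (float(i) + float(rand()) / float(RAND_MAX)) / float(nx);
        float v = (float(j) + float(rand()) / float(RAND_MAX)) / float(ny);
        col += sample_color(u, v);   // one jittered sample
    }
    return col / float(ns);          // average of the samples
}
```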

Putting that all together yields a camera class encapsulating our simple axis-aligned camera from
before:
@@ -850,7 +853,7 @@
Zooming into the image that is produced, the big change is in edge pixels that are part background
and part foreground:

-![Image 6-1](../images/img-1-06-1.jpg)
+![Image 7-1](../images/img-1-07-1.jpg)



@@ -869,7 +872,7 @@
direction randomized. So, if we send three rays into a crack between two diffuse surfaces they will
each have different random behavior:

-![Figure 7-1](../images/fig-1-07-1.jpg)
+![Figure 8-1](../images/fig-1-08-1.jpg)

They also might be absorbed rather than reflected. The darker the surface, the more likely
absorption is. (That’s why it is dark!) Really any algorithm that randomizes direction will produce
Expand All @@ -880,7 +883,7 @@
Pick a random point $s$ from the unit radius sphere that is tangent to the hitpoint, and send a ray
from the hitpoint $p$ to the random point $s$. That sphere has center $(p + N)$:

-![Figure 7-2](../images/fig-1-07-2.jpg)
+![Figure 8-2](../images/fig-1-08-2.jpg)
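The rejection method and the bounce target can be sketched as follows (`frand` and `diffuse_target` are illustrative names; the book uses `drand48()` and inlines the target computation):

```cpp
#include <cstdlib>   // rand, RAND_MAX

struct vec3 { float x, y, z; };
inline vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline float squared_length(vec3 v) { return v.x*v.x + v.y*v.y + v.z*v.z; }

inline float frand() { return float(rand()) / float(RAND_MAX); }   // in [0,1]

// Rejection method: pick points in the cube [-1,1]^3 until one lands
// strictly inside the unit sphere.
vec3 random_in_unit_sphere() {
    vec3 p;
    do {
        p = {2.0f*frand() - 1.0f, 2.0f*frand() - 1.0f, 2.0f*frand() - 1.0f};
    } while (squared_length(p) >= 1.0f);
    return p;
}

// Diffuse bounce target: a random point s inside the unit sphere
// centered at (p + N), where p is the hitpoint and N the unit normal.
vec3 diffuse_target(vec3 p, vec3 N) {
    return p + N + random_in_unit_sphere();
}
```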

We also need a way to pick a random point in a unit radius sphere centered at the origin. We’ll use
what is usually the easiest algorithm: a rejection method. First, we pick a random point in the unit
@@ -914,7 +917,7 @@

This gives us:

-![Image 7-1](../images/img-1-07-1.jpg)
+![Image 8-1](../images/img-1-08-1.jpg)

Note the shadowing under the sphere. This picture is very dark, but our spheres only absorb half the
energy on each bounce, so they are 50% reflectors. If you can’t see the shadow, don’t worry, we will
@@ -936,7 +939,7 @@

That yields light grey, as we desire:

-![Image 7-2](../images/img-1-07-2.jpg)
+![Image 8-2](../images/img-1-08-2.jpg)

There’s also a subtle bug in there. Some of the reflected rays hit the object they are reflecting
off of not at exactly $t=0$, but instead at $t=-0.0000001$ or $t=0.00000001$ or whatever floating
@@ -1079,7 +1082,7 @@
For smooth metals the ray won’t be randomly scattered. The key math is: how does a ray get
reflected from a metal mirror? Vector math is our friend here:

-![Figure 8-1](../images/fig-1-08-1.jpg)
+![Figure 9-1](../images/fig-1-09-1.jpg)

The reflected ray direction in red is just $(v + 2B)$. In our design, $N$ is a unit vector, but $v$
may not be. The length of $B$ should be $dot(v,N)$. Because $v$ points in, we will need a minus
@@ -1171,12 +1174,12 @@

Which gives:

-![Image 8-1](../images/img-1-08-1.jpg)
+![Image 9-1](../images/img-1-09-1.jpg)
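The reflection formula from the figure, $v + 2B = v - 2\,dot(v,N)\,N$, can be sketched as (stand-in `vec3` again):

```cpp
struct vec3 { float x, y, z; };
inline vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline vec3 operator*(float t, vec3 v) { return {t*v.x, t*v.y, t*v.z}; }
inline float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Mirror reflection: v points into the surface, n is the unit normal.
// B has length dot(v,n) along n; since v points in, dot(v,n) is
// negative and the minus sign flips B outward, giving v - 2*dot(v,n)*n.
inline vec3 reflect(vec3 v, vec3 n) {
    return v - 2.0f*dot(v, n)*n;
}
```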

We can also randomize the reflected direction by using a small sphere and choosing a new endpoint
for the ray:

-![Figure 8-2](../images/fig-1-08-2.jpg)
+![Figure 9-2](../images/fig-1-09-2.jpg)

The bigger the sphere, the fuzzier the reflections will be. This suggests adding a fuzziness
parameter that is just the radius of the sphere (so zero is no perturbation). The catch is that for
@@ -1205,7 +1208,7 @@

We can try that out by adding fuzziness 0.3 and 1.0 to the metals:

-![Image 8-2](../images/img-1-08-2.jpg)
+![Image 9-2](../images/img-1-09-2.jpg)
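The fuzz perturbation can be sketched as follows (`frand` and `fuzzy_reflect` are my names; the book does this inside the metal material's `scatter`, with `drand48()`):

```cpp
#include <cstdlib>   // rand, RAND_MAX

struct vec3 { float x, y, z; };
inline vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline vec3 operator*(float t, vec3 v) { return {t*v.x, t*v.y, t*v.z}; }
inline float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

inline float frand() { return float(rand()) / float(RAND_MAX); }

vec3 random_in_unit_sphere() {
    vec3 p;
    do {
        p = {2.0f*frand() - 1.0f, 2.0f*frand() - 1.0f, 2.0f*frand() - 1.0f};
    } while (dot(p, p) >= 1.0f);
    return p;
}

// Fuzzy metal: perturb the mirror direction by a random offset inside
// a sphere of radius fuzz (fuzz = 0 leaves the reflection perfect).
vec3 fuzzy_reflect(vec3 v, vec3 n, float fuzz) {
    vec3 r = v - 2.0f*dot(v, n)*n;            // perfect mirror direction
    return r + fuzz*random_in_unit_sphere();
}
```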



@@ -1220,7 +1223,7 @@
there is a refraction ray at all. For this project, I tried to put two glass balls in our scene, and
I got this (I have not told you how to do this right or wrong yet, but soon!):

-![Image 9-1](../images/img-1-09-1.jpg)
+![Image 10-1](../images/img-1-10-1.jpg)

Is that right? Glass balls look odd in real life. But no, it isn’t right. The world should be
flipped upside down and no weird black stuff. I just printed out the ray straight through the middle
@@ -1233,7 +1236,7 @@
Where $n$ and $n'$ are the refractive indices (typically air = 1, glass = 1.3–1.7, diamond = 2.4)
and the geometry is:

-![Figure 9-1](../images/fig-1-09-1.jpg)
+![Figure 10-1](../images/fig-1-10-1.jpg)
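Solving Snell's law $n \sin\theta = n' \sin\theta'$ for the refracted direction can be sketched as below; `ni_over_nt` is $n/n'$, and a negative discriminant signals total internal reflection (no real solution). The stand-in `vec3` is mine; the shape of the function follows the book's `refract`:

```cpp
#include <cmath>

struct vec3 { float x, y, z; };
inline vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline vec3 operator*(float t, vec3 v) { return {t*v.x, t*v.y, t*v.z}; }
inline float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline vec3 unit_vector(vec3 v) { return (1.0f/std::sqrt(dot(v, v)))*v; }

// Refract v about unit normal n. Returns false when there is no real
// solution to Snell's law (total internal reflection).
bool refract(vec3 v, vec3 n, float ni_over_nt, vec3& refracted) {
    vec3 uv = unit_vector(v);
    float dt = dot(uv, n);
    float discriminant = 1.0f - ni_over_nt*ni_over_nt*(1.0f - dt*dt);
    if (discriminant > 0.0f) {
        refracted = ni_over_nt*(uv - dt*n) - std::sqrt(discriminant)*n;
        return true;
    }
    return false;   // reflect instead
}
```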

One troublesome practical issue is that when the ray is in the material with the higher refractive
index, there is no real solution to Snell’s law and thus there is no refraction possible. Here all
@@ -1306,7 +1309,7 @@

We get:

-![Image 9-2](../images/img-1-09-2.jpg)
+![Image 10-2](../images/img-1-10-2.jpg)

(The reader Becker has pointed out that when there is a reflection ray the function returns `false`
so there are no reflections. He is right, and that is why there are none in the image above. I
@@ -1388,7 +1391,7 @@

This gives:

-![Image 9-3](../images/img-1-09-3.jpg)
+![Image 10-3](../images/img-1-10-3.jpg)



@@ -1404,7 +1407,7 @@
I first keep the rays coming from the origin and heading to the $z = -1$ plane. We could make it the
$z = -2$ plane, or whatever, as long as we made $h$ a ratio to that distance. Here is our setup:

-![Figure 10-1](../images/fig-1-10-1.jpg)
+![Figure 11-1](../images/fig-1-11-1.jpg)

This implies $h = tan(\theta/2)$. Our camera now becomes:
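The camera listing is collapsed in this diff; a self-contained sketch of the idea (stand-in `vec3`, and with the origin fixed at zero as in the text):

```cpp
#include <cmath>

struct vec3 { float x, y, z; };
inline vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline vec3 operator*(float t, vec3 v) { return {t*v.x, t*v.y, t*v.z}; }

// vfov is the vertical field of view in degrees, aspect = width/height.
// h = tan(theta/2) sets the half-height of the z = -1 image plane.
class camera {
public:
    camera(float vfov, float aspect) {
        const float pi = 3.14159265f;
        float theta = vfov * pi / 180.0f;
        float half_height = std::tan(theta / 2.0f);
        float half_width  = aspect * half_height;
        origin            = {0.0f, 0.0f, 0.0f};
        lower_left_corner = {-half_width, -half_height, -1.0f};
        horizontal        = {2.0f * half_width, 0.0f, 0.0f};
        vertical          = {0.0f, 2.0f * half_height, 0.0f};
    }
    // Direction from the origin through screen coordinates (u, v).
    vec3 ray_direction(float u, float v) const {
        return lower_left_corner + u*horizontal + v*vertical;
    }
    vec3 origin, lower_left_corner, horizontal, vertical;
};
```

With `vfov = 90` and `aspect = 2`, the image plane spans $[-2,2] \times [-1,1]$ at $z = -1$, matching the fixed camera from before.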

@@ -1449,7 +1452,7 @@

gives:

-![Image 10-1](../images/img-1-10-1.jpg)
+![Image 11-1](../images/img-1-11-1.jpg)

To get an arbitrary viewpoint, let’s first name the points we care about. We’ll call the position
where we place the camera _lookfrom_, and the point we look at _lookat_. (Later, if you want, you
@@ -1461,13 +1464,13 @@
vector for the camera. Notice we already have a plane that the up vector should be in,
the plane orthogonal to the view direction.

-![Figure 10-2](../images/fig-1-10-2.jpg)
+![Figure 11-2](../images/fig-1-11-2.jpg)

We can actually use any up vector we want, and simply project it onto this plane to get an up vector
for the camera. I use the common convention of naming a “view up” (_vup_) vector. A couple of cross
products, and we now have a complete orthonormal basis (u,v,w) to describe our camera’s orientation.

-![Figure 10-3](../images/fig-1-10-3.jpg)
+![Figure 11-3](../images/fig-1-11-3.jpg)
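The cross products can be sketched as follows (`camera_basis` is an illustrative free-function form; the book computes these inside the camera constructor):

```cpp
#include <cmath>

struct vec3 { float x, y, z; };
inline vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline vec3 operator*(float t, vec3 v) { return {t*v.x, t*v.y, t*v.z}; }
inline float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline vec3 cross(vec3 a, vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
inline vec3 unit_vector(vec3 v) { return (1.0f/std::sqrt(dot(v, v)))*v; }

// Build the camera's orthonormal basis: w points backward (the camera
// faces -w), u points right, v points up; v is vup projected into the
// plane orthogonal to the view direction.
void camera_basis(vec3 lookfrom, vec3 lookat, vec3 vup,
                  vec3& u, vec3& v, vec3& w) {
    w = unit_vector(lookfrom - lookat);
    u = unit_vector(cross(vup, w));
    v = cross(w, u);
}
```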

Remember that `vup`, `v`, and `w` are all in the same plane. Note that, like before when our fixed
camera faced -Z, our arbitrary view camera faces -w. And keep in mind that we can -- but we don’t
@@ -1517,11 +1520,11 @@

to get:

-![Image 10-2](../images/img-1-10-2.jpg)
+![Image 11-2](../images/img-1-11-2.jpg)

And we can change field of view to get:

-![Image 10-3](../images/img-1-10-3.jpg)
+![Image 11-3](../images/img-1-11-3.jpg)



@@ -1546,14 +1549,14 @@
computed (the image is projected upside down on the film). Graphics people usually use a thin lens
approximation.

-![Figure 11-1](../images/fig-1-11-1.jpg)
+![Figure 12-1](../images/fig-1-12-1.jpg)

We also don’t need to simulate any of the inside of the camera. For the purposes of rendering an
image outside the camera, that would be unnecessary complexity. Instead I usually start rays from
the surface of the lens, and send them toward a virtual film plane, by finding the projection of the
film on the plane that is in focus (at the distance `focus_dist`).

-![Figure 11-2](../images/fig-1-11-2.jpg)
+![Figure 12-2](../images/fig-1-12-2.jpg)

For that we just need to have the ray origins be on a disk around `lookfrom` rather than from a
point:
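The disk sampling can be sketched with the same rejection method as before, in 2D (`frand` again stands in for the book's `drand48()`):

```cpp
#include <cstdlib>   // rand, RAND_MAX

struct vec3 { float x, y, z; };
inline float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

inline float frand() { return float(rand()) / float(RAND_MAX); }

// Rejection method in 2D: pick points in the square [-1,1]^2 until one
// lands inside the unit disk (z stays 0). Scaled by the lens radius,
// this offsets the ray origin to simulate depth of field.
vec3 random_in_unit_disk() {
    vec3 p;
    do {
        p = {2.0f*frand() - 1.0f, 2.0f*frand() - 1.0f, 0.0f};
    } while (dot(p, p) >= 1.0f);
    return p;
}
```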
@@ -1625,7 +1628,7 @@

We get:

-![Image 11-1](../images/img-1-11-1.jpg)
+![Image 12-1](../images/img-1-12-1.jpg)



@@ -1677,7 +1680,7 @@

This gives:

-![Image 12-1](../images/img-1-12-1.jpg)
+![Image 13-1](../images/img-1-13-1.jpg)

An interesting thing you might note is the glass balls don’t really have shadows which makes them
look like they are floating. This is not a bug (you don’t see glass balls much in real life, where