
Ray Tracing in One Weekend


Peter Shirley, Trevor David Black, Steve Hollasch


Version 4.0.1, 2024-08-31


Copyright 2018-2024 Peter Shirley. All rights reserved.

 Contents

(Top)

 1 Overview

 2 Output an Image
   2.1 The PPM Image Format
   2.2 Creating an Image File
   2.3 Adding a Progress Indicator

 3 The vec3 Class
   3.1 Color Utility Functions

 4 Rays, a Simple Camera, and Background
   4.1 The ray Class
   4.2 Sending Rays Into the Scene

 5 Adding a Sphere
   5.1 Ray-Sphere Intersection
   5.2 Creating Our First Raytraced Image

 6 Surface Normals and Multiple Objects
   6.1 Shading with Surface Normals
   6.2 Simplifying the Ray-Sphere Intersection Code
   6.3 An Abstraction for Hittable Objects
   6.4 Front Faces Versus Back Faces
   6.5 A List of Hittable Objects
   6.6 Some New C++ Features
   6.7 Common Constants and Utility Functions
   6.8 An Interval Class

 7 Moving Camera Code Into Its Own Class

 8 Antialiasing
   8.1 Some Random Number Utilities
   8.2 Generating Pixels with Multiple Samples

 9 Diffuse Materials
   9.1 A Simple Diffuse Material
   9.2 Limiting the Number of Child Rays
   9.3 Fixing Shadow Acne
   9.4 True Lambertian Reflection
   9.5 Using Gamma Correction for Accurate Color Intensity

10 Metal
  10.1 An Abstract Class for Materials
  10.2 A Data Structure to Describe Ray-Object Intersections
  10.3 Modeling Light Scatter and Reflectance
  10.4 Mirrored Light Reflection
  10.5 A Scene with Metal Spheres
  10.6 Fuzzy Reflection

11 Dielectrics
  11.1 Refraction
  11.2 Snell's Law
  11.3 Total Internal Reflection
  11.4 Schlick Approximation
  11.5 Modeling a Hollow Glass Sphere

12 Positionable Camera
  12.1 Camera Viewing Geometry
  12.2 Positioning and Orienting the Camera

13 Defocus Blur
  13.1 A Thin Lens Approximation
  13.2 Generating Sample Rays

14 Where Next?
  14.1 A Final Render
  14.2 Next Steps
    14.2.1 Book 2: Ray Tracing: The Next Week
    14.2.2 Book 3: Ray Tracing: The Rest of Your Life
    14.2.3 Other Directions

15 Acknowledgments

16 Citing This Book
  16.1 Basic Data
  16.2 Snippets
    16.2.1 Markdown
    16.2.2 HTML
    16.2.3 LaTeX and BibTex
    16.2.4 BibLaTeX
    16.2.5 IEEE
    16.2.6 MLA

   

 Overview


I've taught many graphics classes over the years. Often I do them in ray tracing, because you are forced to write all the code, but you can still get cool images with no APIs. I decided to adapt my course notes into a how-to, to get you to a cool program as quickly as possible. It won't be a full-featured ray tracer, but it does have the indirect lighting which has made ray tracing a staple in movies. Follow these steps, and the architecture of the ray tracer you produce will be good for extending to a more extensive ray tracer if you get excited and want to pursue that.


When somebody says "ray tracing" it could mean many things. What I am going to describe is technically a path tracer, and a fairly general one. While the code will be pretty simple (let the computer do the work!), I think you'll be very happy with the images you can make.


I'll take you through writing a ray tracer in the order I do it, along with some debugging tips. By the end, you will have a ray tracer that produces some great images. You should be able to do this in a weekend. If you take longer, don't worry about it. I use C++ as the driving language, but you don't need to. However, I suggest you do, because it's fast, portable, and most production movie and video game renderers are written in C++. Note that I avoid most "modern features" of C++, but inheritance and operator overloading are too useful for ray tracers to pass on.


I do not provide the code online, but the code is real and I show all of it except for a few straightforward operators in the vec3 class. I am a big believer in typing in code to learn it, but when code is available I use it, so I only practice what I preach when the code is not available. So don't ask!


I have left that last part in because it is funny what a 180 I have done: several readers ended up with subtle errors that were helped when we compared code. So please do type in the code, but you can find the finished source for each book in the RayTracing project on GitHub.


A note on the implementation code for these books: our philosophy for the included code prioritizes the following goals:


The code therefore provides a baseline implementation, with tons of improvements left for the reader to enjoy. There are endless ways one can optimize and modernize the code; we prioritize simple solutions.


We assume a little bit of familiarity with vectors (like the dot product and vector addition). If you don't know that, do a little review. If you need that review, or to learn it for the first time, check out the online Graphics Codex by Morgan McGuire, Fundamentals of Computer Graphics by Steve Marschner and Peter Shirley, or Computer Graphics: Principles and Practice by J.D. Foley and Andy Van Dam.


See the project README file for information about this project, the repository on GitHub, directory structure, building and running, and how to make or reference corrections and contributions.


See our Further Reading wiki page for additional project-related resources.


These books have been formatted to print well directly from your browser. We also include a PDF of each book in the "Assets" section.


If you want to communicate with us, feel free to send us an email at:


Finally, if you run into problems with your implementation, have general questions, or would like to share your own ideas or work, see the GitHub Discussions forum on the GitHub project.


Thanks to everyone who lent a hand on this project. You can find them in the acknowledgments section at the end of this book.


Let's get on with it!

   

 Output an Image

   

 The PPM Image Format


Whenever you start a renderer, you need a way to see an image. The most straightforward way is to write it to a file. The catch is, there are so many formats. Many of those are complex. I always start with a plain text ppm file. Here's a nice description from Wikipedia:

 

Figure 1: PPM Example
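(The body of that figure did not survive in this snapshot, so here is a minimal hand-written example of the format, my own illustration rather than the Wikipedia sample: a "P3" header, the image width and height, the maximum color value, and then one red/green/blue triplet per pixel, row by row.)

P3
3 2
255
255   0   0     0 255   0     0   0 255
255 255   0   255 255 255     0   0   0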


Let's make some C++ code to output such a thing:

 
#include <iostream>

int main() {

    // Image

    int image_width = 256;
    int image_height = 256;

    // Render

    std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

    for (int j = 0; j < image_height; j++) {
        for (int i = 0; i < image_width; i++) {
            auto r = double(i) / (image_width-1);
            auto g = double(j) / (image_height-1);
            auto b = 0.0;

            int ir = int(255.999 * r);
            int ig = int(255.999 * g);
            int ib = int(255.999 * b);

            std::cout << ir << ' ' << ig << ' ' << ib << '\n';
        }
    }
}

Listing 1: [main.cc] Creating your first image


There are some things to note in this code:


  1. The pixels are written out in rows.


  2. Every row of pixels is written out left to right.


  3. These rows are written out from top to bottom.


  4. By convention, each of the red/green/blue components is represented internally by a real-valued variable that ranges from 0.0 to 1.0. These must be scaled to integer values between 0 and 255 before we print them out.


  5. Red goes from fully off (black) to fully on (bright red) from left to right, and green goes from fully off at the top (black) to fully on at the bottom (bright green). Adding red and green light together makes yellow, so we should expect the bottom right corner to be yellow.
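As a quick worked example of note 4 (my own, not from the book): a component value of r = 0.25 maps to int(255.999 × 0.25) = int(63.99975) = 63, while r = 1.0 maps to int(255.999) = 255, so the closed range [0.0, 1.0] lands on the integers 0 through 255 without ever producing 256.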

   

 Creating an Image File


Because the file is written to the standard output stream, you'll need to redirect it to an image file. Typically this is done from the command line using the > redirection operator.


On Windows, you'd get the debug build from CMake by running these commands:

cmake -B build
cmake --build build


Then run your newly-built program like so:

build\Debug\inOneWeekend.exe > image.ppm


Later on, it will be better to run an optimized build for speed. In that case, you would build like this:

cmake --build build --config release


and would run the optimized program like this:

build\Release\inOneWeekend.exe > image.ppm


The example above assumes that you are building with CMake, using the same approach as the CMakeLists.txt file in the included source. Use whatever build environment (and language) you're most comfortable with.


On Mac or Linux, with the release build, you would launch the program like this:

build/inOneWeekend > image.ppm


Complete building and running instructions can be found in the project README.


Opening the output file (in ToyViewer on my Mac, but try it in your favorite image viewer and Google "ppm viewer" if your viewer doesn't support it) shows this result:


Image 1: First PPM image


Hooray! This is the graphics "hello world". If your image doesn't look like that, open the output file in a text editor and see what it looks like. It should start something like this:

 
P3
256 256
255
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
10 0 0
11 0 0
12 0 0
...

Listing 2: First image output


If your PPM file doesn't look like this, then double-check your formatting code. If it does look like this but fails to render, then you may have line-ending differences or something similar that is confusing your image viewer. To help debug this, you can find a file test.ppm in the images directory of the Github project. This should help to ensure that your viewer can handle the PPM format, and to use as a comparison against your generated PPM file.


Some readers have reported problems viewing their generated files on Windows. In this case, the problem is often that the PPM is written out as UTF-16, often from PowerShell. If you run into this problem, see Discussion 1114 for help with this issue.
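One common workaround (my suggestion, not from the book; see Discussion 1114 for the authoritative fixes) is to avoid PowerShell's default redirection encoding, for example by redirecting from cmd.exe instead, or by piping the text output through Out-File with an explicit encoding:

build\Debug\inOneWeekend.exe | Out-File -Encoding ascii image.ppm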


If everything displays correctly, then you're pretty much done with your system and IDE issues; everything in the remainder of this series uses this same simple mechanism for generated rendered images.


If you want to produce other image formats, I am a fan of stb_image.h, a header-only image library available on GitHub at https://github.com/nothings/stb.

   


Adding a Progress Indicator


Before we continue, let's add a progress indicator to our output. This is a handy way to track the progress of a long render, and also to possibly identify a run that's stalled out due to an infinite loop or other problem.


Our program outputs the image to the standard output stream (std::cout), so leave that alone and instead write to the logging output stream (std::clog):

 
    for (int j = 0; j < image_height; j++) {
        std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
        for (int i = 0; i < image_width; i++) {
            auto r = double(i) / (image_width-1);
            auto g = double(j) / (image_height-1);
            auto b = 0.0;

            int ir = int(255.999 * r);
            int ig = int(255.999 * g);
            int ib = int(255.999 * b);

            std::cout << ir << ' ' << ig << ' ' << ib << '\n';
        }
    }

    std::clog << "\rDone. \n";

Listing 3: [main.cc] Main render loop with progress reporting


Now when running, you'll see a running count of the number of scanlines remaining. Hopefully this runs so fast that you don't even see it! Don't worry: you'll have lots of time in the future to watch a slowly updating progress line as we expand our ray tracer.

   

 The vec3 Class


Almost all graphics programs have some class(es) for storing geometric vectors and colors. In many systems these vectors are 4D (3D position plus a homogeneous coordinate for geometry, or RGB plus an alpha transparency component for colors). For our purposes, three coordinates suffice. We'll use the same class vec3 for colors, locations, directions, offsets, whatever. Some people don't like this because it doesn't prevent you from doing something silly, like subtracting a position from a color. They have a good point, but we're going to always take the "less code" route when not obviously wrong. In spite of this, we do declare two aliases for vec3: point3 and color. Since these two types are just aliases for vec3, you won't get warnings if you pass a color to a function expecting a point3, and nothing stops you from adding a point3 to a color, but it makes the code a little bit easier to read and to understand.
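As a tiny illustration of that trade-off (my own snippet, not part of the book's code), both aliases compile interchangeably, so readability is the only guard. The brightness() helper below is hypothetical, just for the example:

#include "vec3.h"
#include <iostream>

// Hypothetical helper, for illustration only.
double brightness(const color& c) {
    return (c.x() + c.y() + c.z()) / 3.0;
}

int main() {
    point3 p(1, 2, 3);          // intended as a position
    color  sky(0.5, 0.7, 1.0);  // intended as a color

    // Both calls compile, even though passing a "position" makes no physical
    // sense, because point3 and color are both just vec3.
    std::cout << brightness(p) << '\n';
    std::cout << brightness(sky) << '\n';
}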


We define the vec3 class in the top half of a new vec3.h header file, and define a set of useful vector utility functions in the bottom half:

 
#ifndef VEC3_H
#define VEC3_H

#include <cmath>
#include <iostream>

class vec3 {
  public:
    double e[3];

    vec3() : e{0,0,0} {}
    vec3(double e0, double e1, double e2) : e{e0, e1, e2} {}

    double x() const { return e[0]; }
    double y() const { return e[1]; }
    double z() const { return e[2]; }

    vec3 operator-() const { return vec3(-e[0], -e[1], -e[2]); }
    double operator[](int i) const { return e[i]; }
    double& operator[](int i) { return e[i]; }

    vec3& operator+=(const vec3& v) {
        e[0] += v.e[0];
        e[1] += v.e[1];
        e[2] += v.e[2];
        return *this;
    }

    vec3& operator*=(double t) {
        e[0] *= t;
        e[1] *= t;
        e[2] *= t;
        return *this;
    }

    vec3& operator/=(double t) {
        return *this *= 1/t;
    }

    double length() const {
        return std::sqrt(length_squared());
    }

    double length_squared() const {
        return e[0]*e[0] + e[1]*e[1] + e[2]*e[2];
    }
};

// point3 is just an alias for vec3, but useful for geometric clarity in the code.
using point3 = vec3;


// Vector Utility Functions

inline std::ostream& operator<<(std::ostream& out, const vec3& v) {
    return out << v.e[0] << ' ' << v.e[1] << ' ' << v.e[2];
}

inline vec3 operator+(const vec3& u, const vec3& v) {
    return vec3(u.e[0] + v.e[0], u.e[1] + v.e[1], u.e[2] + v.e[2]);
}

inline vec3 operator-(const vec3& u, const vec3& v) {
    return vec3(u.e[0] - v.e[0], u.e[1] - v.e[1], u.e[2] - v.e[2]);
}

inline vec3 operator*(const vec3& u, const vec3& v) {
    return vec3(u.e[0] * v.e[0], u.e[1] * v.e[1], u.e[2] * v.e[2]);
}

inline vec3 operator*(double t, const vec3& v) {
    return vec3(t*v.e[0], t*v.e[1], t*v.e[2]);
}

inline vec3 operator*(const vec3& v, double t) {
    return t * v;
}

inline vec3 operator/(const vec3& v, double t) {
    return (1/t) * v;
}

inline double dot(const vec3& u, const vec3& v) {
    return u.e[0] * v.e[0]
         + u.e[1] * v.e[1]
         + u.e[2] * v.e[2];
}

inline vec3 cross(const vec3& u, const vec3& v) {
    return vec3(u.e[1] * v.e[2] - u.e[2] * v.e[1],
                u.e[2] * v.e[0] - u.e[0] * v.e[2],
                u.e[0] * v.e[1] - u.e[1] * v.e[0]);
}

inline vec3 unit_vector(const vec3& v) {
    return v / v.length();
}

#endif

Listing 4: [vec3.h] vec3 definitions and helper functions


We use double here, but some ray tracers use float. double has greater precision and range, but is twice the size of float. This increase in size may be important if you're programming in limited memory conditions (such as hardware shaders). Either one is fine; follow your own tastes.
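If you ever want to experiment with float, one low-effort approach (my own sketch, not something the book does) is to funnel the choice through a single alias and use that alias in place of double throughout vec3:

// Hypothetical: flip this one line to switch the whole tracer's precision.
using real = double;   // or: using real = float;

// vec3 would then store real e[3]; and take real parameters, for example:
// vec3(real e0, real e1, real e2) : e{e0, e1, e2} {}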

   

 Color Utility Functions


Using our new vec3 class, we'll create a new color.h header file and define a utility function that writes a single pixel's color out to the standard output stream.

 
#ifndef COLOR_H
#define COLOR_H

#include "vec3.h"

#include <iostream>

using color = vec3;

void write_color(std::ostream& out, const color& pixel_color) {
    auto r = pixel_color.x();
    auto g = pixel_color.y();
    auto b = pixel_color.z();

    // Translate the [0,1] component values to the byte range [0,255].
    int rbyte = int(255.999 * r);
    int gbyte = int(255.999 * g);
    int bbyte = int(255.999 * b);

    // Write out the pixel color components.
    out << rbyte << ' ' << gbyte << ' ' << bbyte << '\n';
}

#endif

Listing 5: [color.h] color utility functions


Now we can change our main to use both of these:

 
#include "color.h" #include "vec3.h"
#include <iostream> int main() { // Image int image_width = 256; int image_height = 256; // Render std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n"; for (int j = 0; j < image_height; j++) { std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush; for (int i = 0; i < image_width; i++) {
auto pixel_color = color(double(i)/(image_width-1), double(j)/(image_height-1), 0); write_color(std::cout, pixel_color);
} } std::clog << "\rDone. \n"; }

Listing 6: [main.cc] Final code for the first PPM image


You should get the exact same picture as before.

   


Rays, a Simple Camera, and Background

   

 The ray Class

The one thing that all ray tracers have is a ray class and a computation of what color is seen along a ray. Let's think of a ray as a function P(t) = A + t·b. Here P is a 3D position along a line in 3D. A is the ray origin and b is the ray direction. The ray parameter t is a real number (double in the code). Plug in a different t and P(t) moves the point along the ray. Add in negative t values and you can go anywhere on the 3D line. For positive t, you get only the parts in front of A, and this is what is often called a half-line or a ray.

 

Figure 2: Linear interpolation

We can represent the idea of a ray as a class, and represent the function P(t) as a function that we'll call ray::at(t):

 
#ifndef RAY_H
#define RAY_H

#include "vec3.h"

class ray {
  public:
    ray() {}

    ray(const point3& origin, const vec3& direction) : orig(origin), dir(direction) {}

    const point3& origin() const  { return orig; }
    const vec3& direction() const { return dir; }

    point3 at(double t) const {
        return orig + t*dir;
    }

  private:
    point3 orig;
    vec3 dir;
};

#endif

Listing 7: [ray.h] The ray class


(For those unfamiliar with C++, the functions ray::origin() and ray::direction() both return an immutable reference to their members. Callers can either just use the reference directly, or make a mutable copy depending on their needs.)
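As a quick usage sketch (mine, not a listing from the book), plugging parameters into ray::at() walks a point along the ray exactly as P(t) = A + t·b describes:

#include "ray.h"
#include <iostream>

int main() {
    ray r(point3(0, 0, 0), vec3(1, 2, 0));   // origin A, direction b

    std::cout << r.at(0.0)  << '\n';   // prints "0 0 0"   (the origin)
    std::cout << r.at(2.0)  << '\n';   // prints "2 4 0"   (two steps along b)
    std::cout << r.at(-1.0) << '\n';   // prints "-1 -2 0" (behind the origin)
}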

   


Sending Rays Into the Scene


Now we are ready to turn the corner and make a ray tracer. At its core, a ray tracer sends rays through pixels and computes the color seen in the direction of those rays. The involved steps are


  1. Calculate the ray from the "eye" through the pixel,

  2. Determine which objects the ray intersects, and

  3. Compute a color for the closest intersection point.


When first developing a ray tracer, I always do a simple camera for getting the code up and running.

I’ve often gotten into trouble using square images for debugging because I transpose x and y too often, so we’ll use a non-square image. A square image has a 1∶1 aspect ratio, because its width is the same as its height. Since we want a non-square image, we'll choose 16∶9 because it's so common. A 16∶9 aspect ratio means that the ratio of image width to image height is 16∶9. Put another way, given an image with a 16∶9 aspect ratio,

width/height=16/9=1.7778


As a practical example, an image 800 pixels wide by 400 pixels high has an aspect ratio of 2∶1.


The image's aspect ratio can be determined from the ratio of its width to its height. However, since we have a given aspect ratio in mind, it's easier to set the image's width and the aspect ratio, and then use this to calculate its height. This way, we can scale the image up or down by changing the image width, and it won't throw off our desired aspect ratio. We do have to make sure that when we solve for the image height the resulting height is at least 1.


In addition to setting up the pixel dimensions for the rendered image, we also need to set up a virtual viewport through which to pass our scene rays. The viewport is a virtual rectangle in the 3D world that contains the grid of image pixel locations. If pixels are spaced the same distance horizontally as they are vertically, the viewport that bounds them will have the same aspect ratio as the rendered image. The distance between two adjacent pixels is called the pixel spacing, and square pixels is the standard.


To start things off, we'll choose an arbitrary viewport height of 2.0, and scale the viewport width to give us the desired aspect ratio. Here's a snippet of what this code will look like:

 
auto aspect_ratio = 16.0 / 9.0;
int image_width = 400;

// Calculate the image height, and ensure that it's at least 1.
int image_height = int(image_width / aspect_ratio);
image_height = (image_height < 1) ? 1 : image_height;

// Viewport widths less than one are ok since they are real valued.
auto viewport_height = 2.0;
auto viewport_width = viewport_height * (double(image_width)/image_height);

Listing 8: Rendered image setup


If you're wondering why we don't just use aspect_ratio when computing viewport_width, it's because the value set to aspect_ratio is the ideal ratio, and it may not be the actual ratio between image_width and image_height. If image_height was allowed to be real valued, rather than just an integer, then it would be fine to use aspect_ratio. But the actual ratio between image_width and image_height can vary based on two parts of the code. First, image_height is rounded down to the nearest integer, which can increase the ratio. Second, we don't allow image_height to be less than one, which can also change the actual aspect ratio.


Note that aspect_ratio is an ideal ratio, which we approximate as best as possible with the integer-based ratio of image width over image height. In order for our viewport proportions to exactly match our image proportions, we use the calculated image aspect ratio to determine our final viewport width.
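To make that concrete with the values used above (a worked example of my own): image_width = 400 and aspect_ratio = 16/9 ≈ 1.7778 give image_height = int(400 / 1.7778) = int(225.0) = 225, so the actual integer ratio is 400/225 ≈ 1.7778, and the viewport becomes 2.0 × (400.0/225) ≈ 3.556 units wide for a viewport height of 2.0.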


Next we will define the camera center: a point in 3D space from which all scene rays will originate (this is also commonly referred to as the eye point). The vector from the camera center to the viewport center will be orthogonal to the viewport. We'll initially set the distance between the viewport and the camera center point to be one unit. This distance is often referred to as the focal length.

For simplicity we'll start with the camera center at (0,0,0). We'll also have the y-axis go up, the x-axis to the right, and the negative z-axis pointing in the viewing direction. (This is commonly referred to as right-handed coordinates.)

 

Figure 3: Camera geometry


Now for the inevitable tricky part. While our 3D space has the conventions above, this conflicts with our image coordinates, where we want to have the zeroth pixel in the top-left and work our way down to the last pixel at the bottom right. This means that our image coordinate Y-axis is inverted: Y increases going down the image.

As we scan our image, we will start at the upper left pixel (pixel 0,0), scan left-to-right across each row, and then scan row-by-row, top-to-bottom. To help navigate the pixel grid, we'll use a vector from the left edge to the right edge (V_u), and a vector from the upper edge to the lower edge (V_v).

Our pixel grid will be inset from the viewport edges by half the pixel-to-pixel distance. This way, our viewport area is evenly divided into width × height identical regions. Here's what our viewport and pixel grid look like:

 
Figure 4: Viewport and pixel grid

In this figure, we have the viewport, the pixel grid for a 7×5 resolution image, the viewport upper left corner Q, the pixel P_{0,0} location, the viewport vector V_u (viewport_u), the viewport vector V_v (viewport_v), and the pixel delta vectors Δu and Δv.


Putting all of this together, here's the code that implements the camera. We'll stub in a ray_color(const ray& r) function that returns the color for a given scene ray, which we'll set to always return black for now.

 
#include "color.h"
#include "ray.h"
#include "vec3.h" #include <iostream>
color ray_color(const ray& r) { return color(0,0,0); }
int main() { // Image
auto aspect_ratio = 16.0 / 9.0; int image_width = 400; // Calculate the image height, and ensure that it's at least 1. int image_height = int(image_width / aspect_ratio); image_height = (image_height < 1) ? 1 : image_height; // Camera auto focal_length = 1.0; auto viewport_height = 2.0; auto viewport_width = viewport_height * (double(image_width)/image_height); auto camera_center = point3(0, 0, 0); // Calculate the vectors across the horizontal and down the vertical viewport edges. auto viewport_u = vec3(viewport_width, 0, 0); auto viewport_v = vec3(0, -viewport_height, 0); // Calculate the horizontal and vertical delta vectors from pixel to pixel. auto pixel_delta_u = viewport_u / image_width; auto pixel_delta_v = viewport_v / image_height; // Calculate the location of the upper left pixel. auto viewport_upper_left = camera_center - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2; auto pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
// Render std::cout << "P3\n" << image_width << " " << image_height << "\n255\n"; for (int j = 0; j < image_height; j++) { std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush; for (int i = 0; i < image_width; i++) {
auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v); auto ray_direction = pixel_center - camera_center; ray r(camera_center, ray_direction); color pixel_color = ray_color(r);
write_color(std::cout, pixel_color); } } std::clog << "\rDone. \n"; }

Listing 9: [main.cc] Creating scene rays

Notice that in the code above, I didn't make ray_direction a unit vector, because I think not doing that makes for simpler and slightly faster code.

Now we'll fill in the ray_color(ray) function to implement a simple gradient. This function will linearly blend white and blue depending on the height of the y coordinate after scaling the ray direction to unit length (so -1.0 < y < 1.0). Because we're looking at the y height after normalizing the vector, you'll notice a horizontal gradient to the color in addition to the vertical gradient.

I'll use a standard graphics trick to linearly scale 0.0 ≤ a ≤ 1.0. When a = 1.0, I want blue. When a = 0.0, I want white. In between, I want a blend. This forms a "linear blend", or "linear interpolation". This is commonly referred to as a lerp between two values. A lerp is always of the form

blendedValue = (1 - a)·startValue + a·endValue,

with a going from zero to one.
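Written as code, a generic lerp helper would look something like the sketch below (my own illustration; the book simply inlines this blend in ray_color() rather than defining a helper):

// Linear interpolation: a = 0.0 returns start_value, a = 1.0 returns end_value.
inline color lerp(const color& start_value, const color& end_value, double a) {
    return (1.0 - a) * start_value + a * end_value;
}

// e.g. lerp(color(1,1,1), color(0.5,0.7,1.0), a) reproduces the white-to-blue blend below.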

Putting all this together, here's what we get:

 
#include "color.h"
#include "ray.h"
#include "vec3.h"

#include <iostream>


color ray_color(const ray& r) {
    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}

...
Listing 10: [main.cc] Rendering a blue-to-white gradient

In our case this produces:

Image 2: A blue-to-white gradient depending on ray Y coordinate

   

Adding a Sphere

Let’s add a single object to our ray tracer. People often use spheres in ray tracers because calculating whether a ray hits a sphere is relatively simple.

   

Ray-Sphere Intersection

The equation for a sphere of radius r that is centered at the origin is an important mathematical equation:

x^2 + y^2 + z^2 = r^2

You can also think of this as saying that if a given point (x,y,z) is on the surface of the sphere, then x^2 + y^2 + z^2 = r^2. If a given point (x,y,z) is inside the sphere, then x^2 + y^2 + z^2 < r^2, and if a given point (x,y,z) is outside the sphere, then x^2 + y^2 + z^2 > r^2.

If we want to allow the sphere center to be at an arbitrary point (C_x, C_y, C_z), then the equation becomes a lot less nice:

(C_x - x)^2 + (C_y - y)^2 + (C_z - z)^2 = r^2

In graphics, you almost always want your formulas to be in terms of vectors so that all the x/y/z stuff can be simply represented using a vec3 class. You might note that the vector from point P = (x,y,z) to center C = (C_x, C_y, C_z) is (C - P).

If we use the definition of the dot product:

(C - P)·(C - P) = (C_x - x)^2 + (C_y - y)^2 + (C_z - z)^2

Then we can rewrite the equation of the sphere in vector form as:

(C - P)·(C - P) = r^2

We can read this as "any point P that satisfies this equation is on the sphere". We want to know if our ray P(t) = Q + td ever hits the sphere anywhere. If it does hit the sphere, there is some t for which P(t) satisfies the sphere equation. So we are looking for any t where this is true:

(C - P(t))·(C - P(t)) = r^2

which can be found by replacing P(t) with its expanded form:

(C - (Q + td))·(C - (Q + td)) = r^2

We have three vectors on the left dotted by three vectors on the right. If we solved for the full dot product we would get nine vectors. You can definitely go through and write everything out, but we don't need to work that hard. If you remember, we want to solve for t, so we'll separate the terms based on whether there is a t or not:

(-td + (C - Q))·(-td + (C - Q)) = r^2

And now we follow the rules of vector algebra to distribute the dot product:

t^2 d·d - 2t d·(C - Q) + (C - Q)·(C - Q) = r^2

Move the square of the radius over to the left hand side:

t^2 d·d - 2t d·(C - Q) + (C - Q)·(C - Q) - r^2 = 0

It's hard to make out what exactly this equation is, but the vectors and r in that equation are all constant and known. Furthermore, the only vectors that we have are reduced to scalars by dot product. The only unknown is t, and we have a t^2, which means that this equation is quadratic. You can solve for a quadratic equation ax^2 + bx + c = 0 by using the quadratic formula:

(-b ± √(b^2 - 4ac)) / 2a

So solving for t in the ray-sphere intersection equation gives us these values for a, b, and c:

a = d·d
b = -2 d·(C - Q)
c = (C - Q)·(C - Q) - r^2

Using all of the above you can solve for t, but there is a square root part that can be either positive (meaning two real solutions), negative (meaning no real solutions), or zero (meaning one real solution). In graphics, the algebra almost always relates very directly to the geometry. What we have is:

 
Figure 5: Ray-sphere intersection results
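In code form, that three-way distinction is just the sign of the discriminant; here is a small sketch of my own (the book's hit_sphere() in the next section only needs the yes/no version):

// Returns how many real roots (ray-sphere intersections) the quadratic has:
// 2 if the ray passes through the sphere, 1 if it grazes it, 0 if it misses.
int intersection_count(const point3& center, double radius, const ray& r) {
    vec3 oc = center - r.origin();
    auto a = dot(r.direction(), r.direction());
    auto b = -2.0 * dot(r.direction(), oc);
    auto c = dot(oc, oc) - radius*radius;
    auto discriminant = b*b - 4*a*c;

    if (discriminant > 0)  return 2;
    if (discriminant == 0) return 1;
    return 0;
}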

   

Creating Our First Raytraced Image

If we take that math and hard-code it into our program, we can test our code by placing a small sphere at −1 on the z-axis and then coloring red any pixel that intersects it.

 
bool hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = center - r.origin();
    auto a = dot(r.direction(), r.direction());
    auto b = -2.0 * dot(r.direction(), oc);
    auto c = dot(oc, oc) - radius*radius;
    auto discriminant = b*b - 4*a*c;
    return (discriminant >= 0);
}

color ray_color(const ray& r) {
    if (hit_sphere(point3(0,0,-1), 0.5, r))
        return color(1, 0, 0);

    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
Listing 11: [main.cc] Rendering a red sphere

What we get is this:

Image 3: A simple red sphere

Now this lacks all sorts of things — like shading, reflection rays, and more than one object — but we are closer to halfway done than we are to our start! One thing to be aware of is that we are testing to see if a ray intersects with the sphere by solving the quadratic equation and seeing if a solution exists, but solutions with negative values of t work just fine. If you change your sphere center to z=+1 you will get exactly the same picture because this solution doesn't distinguish between objects in front of the camera and objects behind the camera. This is not a feature! We’ll fix those issues next.

   

Surface Normals and Multiple Objects

   

Shading with Surface Normals

First, let’s get ourselves a surface normal so we can shade. This is a vector that is perpendicular to the surface at the point of intersection.

We have a key design decision to make for normal vectors in our code: whether normal vectors will have an arbitrary length, or will be normalized to unit length.

It is tempting to skip the expensive square root operation involved in normalizing the vector, in case it's not needed. In practice, however, there are three important observations. First, if a unit-length normal vector is ever required, then you might as well do it up front once, instead of over and over again “just in case” for every location where unit-length is required. Second, we do require unit-length normal vectors in several places. Third, if you require normal vectors to be unit length, then you can often efficiently generate that vector with an understanding of the specific geometry class, in its constructor, or in the hit() function. For example, sphere normals can be made unit length simply by dividing by the sphere radius, avoiding the square root entirely.

Given all of this, we will adopt the policy that all normal vectors will be of unit length.

For a sphere, the outward normal is in the direction of the hit point minus the center:

 
Figure 6: Sphere surface-normal geometry

On the earth, this means that the vector from the earth’s center to you points straight up. Let’s throw that into the code now, and shade it. We don’t have any lights or anything yet, so let’s just visualize the normals with a color map. A common trick used for visualizing normals (because it’s easy and somewhat intuitive to assume n is a unit length vector — so each component is between −1 and 1) is to map each component to the interval from 0 to 1, and then map (x,y,z) to (red,green,blue). For the normal, we need the hit point, not just whether we hit or not (which is all we're calculating at the moment). We only have one sphere in the scene, and it's directly in front of the camera, so we won't worry about negative values of t yet. We'll just assume the closest hit point (smallest t) is the one that we want. These changes in the code let us compute and visualize n:

 
double hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = center - r.origin();
    auto a = dot(r.direction(), r.direction());
    auto b = -2.0 * dot(r.direction(), oc);
    auto c = dot(oc, oc) - radius*radius;
    auto discriminant = b*b - 4*a*c;

    if (discriminant < 0) {
        return -1.0;
    } else {
        return (-b - std::sqrt(discriminant) ) / (2.0*a);
    }
}

color ray_color(const ray& r) {
    auto t = hit_sphere(point3(0,0,-1), 0.5, r);
    if (t > 0.0) {
        vec3 N = unit_vector(r.at(t) - vec3(0,0,-1));
        return 0.5*color(N.x()+1, N.y()+1, N.z()+1);
    }

    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
Listing 12: [main.cc] Rendering surface normals on a sphere

And that yields this picture:

Image 4: A sphere colored according to its normal vectors

   

Simplifying the Ray-Sphere Intersection Code

Let’s revisit the ray-sphere function:

 
double hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = center - r.origin();
    auto a = dot(r.direction(), r.direction());
    auto b = -2.0 * dot(r.direction(), oc);
    auto c = dot(oc, oc) - radius*radius;
    auto discriminant = b*b - 4*a*c;

    if (discriminant < 0) {
        return -1.0;
    } else {
        return (-b - std::sqrt(discriminant) ) / (2.0*a);
    }
}
Listing 13: [main.cc] Ray-sphere intersection code (before)

First, recall that a vector dotted with itself is equal to the squared length of that vector.

Second, notice how the equation for b has a factor of negative two in it. Consider what happens to the quadratic equation if b = -2h:

(-b ± √(b^2 - 4ac)) / 2a

= (-(-2h) ± √((-2h)^2 - 4ac)) / 2a

= (2h ± 2√(h^2 - ac)) / 2a

= (h ± √(h^2 - ac)) / a

This simplifies nicely, so we'll use it. So solving for h:

b = -2 d·(C - Q)
b = -2h
h = b / -2 = d·(C - Q)


Using these observations, we can now simplify the sphere-intersection code to this:

 
double hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = center - r.origin();
    auto a = r.direction().length_squared();
    auto h = dot(r.direction(), oc);
    auto c = oc.length_squared() - radius*radius;
    auto discriminant = h*h - a*c;

    if (discriminant < 0) {
        return -1.0;
    } else {
        return (h - std::sqrt(discriminant)) / a;
    }
}

Listing 14: [main.cc] Ray-sphere intersection code (after)

   

An Abstraction for Hittable Objects

Now, how about more than one sphere? While it is tempting to have an array of spheres, a very clean solution is to make an “abstract class” for anything a ray might hit, and make both a sphere and a list of spheres just something that can be hit. What that class should be called is something of a quandary — calling it an “object” would be good if not for “object oriented” programming. “Surface” is often used, with the weakness being maybe we will want volumes (fog, clouds, stuff like that). “hittable” emphasizes the member function that unites them. I don’t love any of these, but we'll go with “hittable”.

This hittable abstract class will have a hit function that takes in a ray. Most ray tracers have found it convenient to add a valid interval for hits t_min to t_max, so the hit only "counts" if t_min < t < t_max. For the initial rays this is positive t, but as we will see, it can simplify our code to have an interval t_min to t_max. One design question is whether to do things like compute the normal if we hit something. We might end up hitting something closer as we do our search, and we will only need the normal of the closest thing. I will go with the simple solution and compute a bundle of stuff I will store in some structure. Here's the abstract class:

 
#ifndef HITTABLE_H
#define HITTABLE_H

#include "ray.h"

class hit_record {
  public:
    point3 p;
    vec3 normal;
    double t;
};

class hittable {
  public:
    virtual ~hittable() = default;

    virtual bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const = 0;
};

#endif
Listing 15: [hittable.h] The hittable class

 And here's the sphere:

 
#ifndef SPHERE_H
#define SPHERE_H

#include "hittable.h"
#include "vec3.h"

class sphere : public hittable {
  public:
    sphere(const point3& center, double radius) : center(center), radius(std::fmax(0,radius)) {}

    bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const override {
        vec3 oc = center - r.origin();
        auto a = r.direction().length_squared();
        auto h = dot(r.direction(), oc);
        auto c = oc.length_squared() - radius*radius;

        auto discriminant = h*h - a*c;
        if (discriminant < 0)
            return false;

        auto sqrtd = std::sqrt(discriminant);

        // Find the nearest root that lies in the acceptable range.
        auto root = (h - sqrtd) / a;
        if (root <= ray_tmin || ray_tmax <= root) {
            root = (h + sqrtd) / a;
            if (root <= ray_tmin || ray_tmax <= root)
                return false;
        }

        rec.t = root;
        rec.p = r.at(rec.t);
        rec.normal = (rec.p - center) / radius;

        return true;
    }

  private:
    point3 center;
    double radius;
};

#endif

Listing 16: [sphere.h] The sphere class


(Note the use here of the C++ standard function std::fmax(), which returns the maximum of the two floating-point arguments. Similarly, we will later use std::fmin(), which returns the minimum of the two floating-point arguments.)

   


Front Faces Versus Back Faces


The second design decision for normals is whether they should always point out. At present, the normal found will always be in the direction of the center to the intersection point (the normal points out). If the ray intersects the sphere from the outside, the normal points against the ray. If the ray intersects the sphere from the inside, the normal (which always points out) points with the ray. Alternatively, we can have the normal always point against the ray. If the ray is outside the sphere, the normal will point outward, but if the ray is inside the sphere, the normal will point inward.

 

Figure 7: Possible directions for sphere surface-normal geometry


We need to choose one of these possibilities because we will eventually want to determine which side of the surface the ray is coming from. This is important for objects that are rendered differently on each side, like the text on a two-sided sheet of paper, or for objects that have an inside and an outside, like glass balls.


If we decide to have the normals always point out, then we will need to determine which side the ray is on when we color it. We can figure this out by comparing the ray with the normal. If the ray and the normal face in the same direction, the ray is inside the object; if the ray and the normal face in opposite directions, then the ray is outside the object. This can be determined by taking the dot product of the two vectors, where if their dot is positive, the ray is inside the sphere.

 
if (dot(ray_direction, outward_normal) > 0.0) {
    // ray is inside the sphere
    ...
} else {
    // ray is outside the sphere
    ...
}

Listing 17: Comparing the ray and the normal


If we decide to have the normals always point against the ray, we won't be able to use the dot product to determine which side of the surface the ray is on. Instead, we would need to store that information:

 
bool front_face;
if (dot(ray_direction, outward_normal) > 0.0) {
    // ray is inside the sphere
    normal = -outward_normal;
    front_face = false;
} else {
    // ray is outside the sphere
    normal = outward_normal;
    front_face = true;
}

Listing 18: Remembering the side of the surface


We can set things up so that normals always point "outward" from the surface, or always point against the incident ray. This decision is determined by whether you want to determine the side of the surface at the time of geometry intersection or at the time of coloring. In this book we have more material types than we have geometry types, so we'll go for less work and put the determination at geometry time. This is simply a matter of preference, and you'll see both implementations in the literature.


We add the front_face bool to the hit_record class. We'll also add a function to solve this calculation for us: set_face_normal(). For convenience we will assume that the vector passed to the new set_face_normal() function is of unit length. We could always normalize the parameter explicitly, but it's more efficient if the geometry code does this, as it's usually easier when you know more about the specific geometry.

 
class hit_record {
  public:
    point3 p;
    vec3 normal;
    double t;
    bool front_face;

    void set_face_normal(const ray& r, const vec3& outward_normal) {
        // Sets the hit record normal vector.
        // NOTE: the parameter `outward_normal` is assumed to have unit length.

        front_face = dot(r.direction(), outward_normal) < 0;
        normal = front_face ? outward_normal : -outward_normal;
    }
};

Listing 19: [hittable.h] Adding front-face tracking to hit_record


And then we add the surface side determination to the class:

 
class sphere : public hittable {
  public:
    ...
    bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const {
        ...

        rec.t = root;
        rec.p = r.at(rec.t);
        vec3 outward_normal = (rec.p - center) / radius;
        rec.set_face_normal(r, outward_normal);

        return true;
    }
    ...
};

Listing 20: [sphere.h] The sphere class with normal determination

   


A List of Hittable Objects


We have a generic object called a hittable that the ray can intersect with. We now add a class that stores a list of hittables:

 
#ifndef HITTABLE_LIST_H
#define HITTABLE_LIST_H

#include "hittable.h"

#include <memory>
#include <vector>

using std::make_shared;
using std::shared_ptr;

class hittable_list : public hittable {
  public:
    std::vector<shared_ptr<hittable>> objects;

    hittable_list() {}
    hittable_list(shared_ptr<hittable> object) { add(object); }

    void clear() { objects.clear(); }

    void add(shared_ptr<hittable> object) {
        objects.push_back(object);
    }

    bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const override {
        hit_record temp_rec;
        bool hit_anything = false;
        auto closest_so_far = ray_tmax;

        for (const auto& object : objects) {
            if (object->hit(r, ray_tmin, closest_so_far, temp_rec)) {
                hit_anything = true;
                closest_so_far = temp_rec.t;
                rec = temp_rec;
            }
        }

        return hit_anything;
    }
};

#endif
Listing 21: [hittable_list.h] The hittable_list class
   

Some New C++ Features

The hittable_list class code uses some C++ features that may trip you up if you're not normally a C++ programmer: vector, shared_ptr, and make_shared.

shared_ptr<type> is a pointer to some allocated type, with reference-counting semantics. Every time you assign its value to another shared pointer (usually with a simple assignment), the reference count is incremented. As shared pointers go out of scope (like at the end of a block or function), the reference count is decremented. Once the count goes to zero, the object is safely deleted.

Typically, a shared pointer is first initialized with a newly-allocated object, something like this:

 
shared_ptr<double> double_ptr = make_shared<double>(0.37);
shared_ptr<vec3>   vec3_ptr   = make_shared<vec3>(1.414214, 2.718281, 1.618034);
shared_ptr<sphere> sphere_ptr = make_shared<sphere>(point3(0,0,0), 1.0);
Listing 22: An example allocation using shared_ptr

make_shared<thing>(thing_constructor_params ...) allocates a new instance of type thing, using the constructor parameters. It returns a shared_ptr<thing>.

Since the type can be automatically deduced by the return type of make_shared<type>(...), the above lines can be more simply expressed using C++'s auto type specifier:

 
auto double_ptr = make_shared<double>(0.37);
auto vec3_ptr   = make_shared<vec3>(1.414214, 2.718281, 1.618034);
auto sphere_ptr = make_shared<sphere>(point3(0,0,0), 1.0);
Listing 23: An example allocation using shared_ptr with auto type

We'll use shared pointers in our code, because it allows multiple geometries to share a common instance (for example, a bunch of spheres that all use the same color material), and because it makes memory management automatic and easier to reason about.

std::shared_ptr is included with the <memory> header.
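To make the reference-counting behavior above concrete, here's a small standalone illustration of my own (use_count() is part of the standard std::shared_ptr interface, but this snippet is not one of the book's listings):

#include <iostream>
#include <memory>

using std::make_shared;
using std::shared_ptr;

int main() {
    auto value = make_shared<double>(0.37);      // reference count is 1
    std::cout << value.use_count() << '\n';      // prints 1

    {
        shared_ptr<double> another = value;      // count goes to 2
        std::cout << value.use_count() << '\n';  // prints 2
    }                                            // `another` leaves scope, count drops back to 1

    std::cout << value.use_count() << '\n';      // prints 1
}   // count reaches zero here and the double is deleted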

The second C++ feature you may be unfamiliar with is std::vector. This is a generic array-like collection of an arbitrary type. Above, we use a collection of pointers to hittable. std::vector automatically grows as more values are added: objects.push_back(object) adds a value to the end of the std::vector member variable objects.

std::vector is included with the <vector> header.


Finally, the using statements in Listing 21 tell the compiler that we'll be getting shared_ptr and make_shared from the std library, so we don't need to prefix these with std:: every time we reference them.

   


Common Constants and Utility Functions


We need some math constants that we conveniently define in their own header file. For now we only need infinity, but we will also throw our own definition of pi in there, which we will need later. We'll also stick common useful constants and future utility functions in here. This new header, rtweekend.h, will be our general main header file.

 
#ifndef RTWEEKEND_H
#define RTWEEKEND_H

#include <cmath>
#include <iostream>
#include <limits>
#include <memory>


// C++ Std Usings

using std::make_shared;
using std::shared_ptr;

// Constants

const double infinity = std::numeric_limits<double>::infinity();
const double pi = 3.1415926535897932385;

// Utility Functions

inline double degrees_to_radians(double degrees) {
    return degrees * pi / 180.0;
}

// Common Headers

#include "color.h"
#include "ray.h"
#include "vec3.h"

#endif

Listing 24: [rtweekend.h] The rtweekend.h common header


Program files will include rtweekend.h first, so all other header files (where the bulk of our code will reside) can implicitly assume that rtweekend.h has already been included. Header files still need to explicitly include any other necessary header files. We'll make some updates now with these assumptions.

 
#include <iostream>
Listing 25: [color.h] Assume rtweekend.h inclusion for color.h
 
#include "ray.h"
Listing 26: [hittable.h] Assume rtweekend.h inclusion for hittable.h
 
#include <memory>
#include <vector>
using std::make_shared;
using std::shared_ptr;

Listing 27: [hittable_list.h] Assume rtweekend.h inclusion for hittable_list.h
 
#include "vec3.h"

Listing 28: [sphere.h] Assume rtweekend.h inclusion for sphere.h
 
#include <cmath>
#include <iostream>

Listing 29: [vec3.h] Assume rtweekend.h inclusion for vec3.h


And now the new main:

 
#include "rtweekend.h"
#include "color.h" #include "ray.h" #include "vec3.h"
#include "hittable.h" #include "hittable_list.h" #include "sphere.h"
#include <iostream>
double hit_sphere(const point3& center, double radius, const ray& r) { ... }
color ray_color(const ray& r, const hittable& world) { hit_record rec; if (world.hit(r, 0, infinity, rec)) { return 0.5 * (rec.normal + color(1,1,1)); }
vec3 unit_direction = unit_vector(r.direction()); auto a = 0.5*(unit_direction.y() + 1.0); return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0); } int main() { // Image auto aspect_ratio = 16.0 / 9.0; int image_width = 400; // Calculate the image height, and ensure that it's at least 1. int image_height = int(image_width / aspect_ratio); image_height = (image_height < 1) ? 1 : image_height;
// World hittable_list world; world.add(make_shared<sphere>(point3(0,0,-1), 0.5)); world.add(make_shared<sphere>(point3(0,-100.5,-1), 100));
// Camera auto focal_length = 1.0; auto viewport_height = 2.0; auto viewport_width = viewport_height * (double(image_width)/image_height); auto camera_center = point3(0, 0, 0); // Calculate the vectors across the horizontal and down the vertical viewport edges. auto viewport_u = vec3(viewport_width, 0, 0); auto viewport_v = vec3(0, -viewport_height, 0); // Calculate the horizontal and vertical delta vectors from pixel to pixel. auto pixel_delta_u = viewport_u / image_width; auto pixel_delta_v = viewport_v / image_height; // Calculate the location of the upper left pixel. auto viewport_upper_left = camera_center - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2; auto pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v); // Render std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n"; for (int j = 0; j < image_height; j++) { std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush; for (int i = 0; i < image_width; i++) { auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v); auto ray_direction = pixel_center - camera_center; ray r(camera_center, ray_direction);
color pixel_color = ray_color(r, world);
write_color(std::cout, pixel_color); } } std::clog << "\rDone. \n"; }

Listing 30: [main.cc] The new main with hittables


This yields a picture that is really just a visualization of where the spheres are located along with their surface normal. This is often a great way to view any flaws or specific characteristics of a geometric model.

Image 5: Resulting render of normals-colored sphere with ground

   

An Interval Class

Before we continue, we'll implement an interval class to manage real-valued intervals with a minimum and a maximum. We'll end up using this class quite often as we proceed.

 
#ifndef INTERVAL_H
#define INTERVAL_H

class interval {
  public:
    double min, max;

    interval() : min(+infinity), max(-infinity) {} // Default interval is empty

    interval(double min, double max) : min(min), max(max) {}

    double size() const {
        return max - min;
    }

    bool contains(double x) const {
        return min <= x && x <= max;
    }

    bool surrounds(double x) const {
        return min < x && x < max;
    }

    static const interval empty, universe;
};

const interval interval::empty    = interval(+infinity, -infinity);
const interval interval::universe = interval(-infinity, +infinity);

#endif

Listing 31: [interval.h] Introducing the new interval class
 
// Common Headers

#include "color.h"
#include "interval.h"
#include "ray.h" #include "vec3.h"
Listing 32: [rtweekend.h] Including the new interval class
 
class hittable {
  public:
    ...
virtual bool hit(const ray& r, interval ray_t, hit_record& rec) const = 0;
};

Listing 33: [hittable.h] hittable::hit() using interval
 
class hittable_list : public hittable {
  public:
    ...
    bool hit(const ray& r, interval ray_t, hit_record& rec) const override {
        hit_record temp_rec;
        bool hit_anything = false;
        auto closest_so_far = ray_t.max;

        for (const auto& object : objects) {
            if (object->hit(r, interval(ray_t.min, closest_so_far), temp_rec)) {
                hit_anything = true;
                closest_so_far = temp_rec.t;
                rec = temp_rec;
            }
        }

        return hit_anything;
    }
    ...
};

Listing 34: [hittable_list.h] hittable_list::hit() using interval
 
class sphere : public hittable {
  public:
    ...
    bool hit(const ray& r, interval ray_t, hit_record& rec) const override {
        ...

        // Find the nearest root that lies in the acceptable range.
        auto root = (h - sqrtd) / a;
        if (!ray_t.surrounds(root)) {
            root = (h + sqrtd) / a;
            if (!ray_t.surrounds(root))
                return false;
        }

        ...
    }
    ...
};

Listing 35: [sphere.h] The sphere using interval
 
color ray_color(const ray& r, const hittable& world) {
    hit_record rec;
    if (world.hit(r, interval(0, infinity), rec)) {
        return 0.5 * (rec.normal + color(1,1,1));
    }

    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}

Listing 36: [main.cc] The new main using interval
   

Moving Camera Code Into Its Own Class

Before continuing, now is a good time to consolidate our camera and scene-render code into a single new class: the camera class. The camera class will be responsible for two important jobs:

  1. Construct and dispatch rays into the world.
  2. Use the results of these rays to construct the rendered image.

In this refactoring, we'll collect the ray_color() function, along with the image, camera, and render sections of our main program. The new camera class will contain two public methods initialize() and render(), plus two private helper methods get_ray() and ray_color().

Ultimately, the camera will follow the simplest usage pattern that we could think of: it will be default constructed with no arguments, then the owning code will modify the camera's public variables through simple assignment, and finally everything is initialized by a call to the initialize() function. This pattern is chosen instead of the owner calling a constructor with a ton of parameters or by defining and calling a bunch of setter methods. Instead, the owning code only needs to set what it explicitly cares about. Finally, we could either have the owning code call initialize(), or just have the camera call this function automatically at the start of render(). We'll use the second approach.

After main creates a camera and sets default values, it will call the render() method. The render() method will prepare the camera for rendering and then execute the render loop.

Here's the skeleton of our new camera class:

 
#ifndef CAMERA_H
#define CAMERA_H

#include "hittable.h"

class camera {
  public:
    /* Public Camera Parameters Here */

    void render(const hittable& world) {
        ...
    }

  private:
    /* Private Camera Variables Here */

    void initialize() {
        ...
    }

    color ray_color(const ray& r, const hittable& world) const {
        ...
    }
};

#endif
Listing 37: [camera.h] The camera class skeleton

To begin with, let's fill in the ray_color() function from main.cc:

 
class camera {
  ...

  private:
    ...


    color ray_color(const ray& r, const hittable& world) const {
        hit_record rec;

        if (world.hit(r, interval(0, infinity), rec)) {
            return 0.5 * (rec.normal + color(1,1,1));
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};

#endif
Listing 38: [camera.h] The camera::ray_color function


Now we move almost everything from the main() function into our new camera class. The only thing remaining in main() is the world construction. Here's the camera class with the newly migrated code:

 
class camera {
  public:
    double aspect_ratio = 1.0;  // Ratio of image width over height
    int    image_width  = 100;  // Rendered image width in pixel count

    void render(const hittable& world) {
        initialize();

        std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

        for (int j = 0; j < image_height; j++) {
            std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
            for (int i = 0; i < image_width; i++) {
                auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v);
                auto ray_direction = pixel_center - center;
                ray r(center, ray_direction);

                color pixel_color = ray_color(r, world);
                write_color(std::cout, pixel_color);
            }
        }

        std::clog << "\rDone. \n";
    }

  private:
    int    image_height;   // Rendered image height
    point3 center;         // Camera center
    point3 pixel00_loc;    // Location of pixel 0, 0
    vec3   pixel_delta_u;  // Offset to pixel to the right
    vec3   pixel_delta_v;  // Offset to pixel below

    void initialize() {
        image_height = int(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

        center = point3(0, 0, 0);

        // Determine viewport dimensions.
        auto focal_length = 1.0;
        auto viewport_height = 2.0;
        auto viewport_width = viewport_height * (double(image_width)/image_height);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        auto viewport_u = vec3(viewport_width, 0, 0);
        auto viewport_v = vec3(0, -viewport_height, 0);

        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        pixel_delta_u = viewport_u / image_width;
        pixel_delta_v = viewport_v / image_height;

        // Calculate the location of the upper left pixel.
        auto viewport_upper_left =
            center - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2;
        pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
    }

    color ray_color(const ray& r, const hittable& world) const {
        ...
    }
};

#endif

Listing 39: [camera.h] The working camera class

And here's the much reduced main:

 
#include "rtweekend.h"

#include "camera.h"
#include "hittable.h" #include "hittable_list.h" #include "sphere.h"
color ray_color(const ray& r, const hittable& world) { ... }
int main() {
hittable_list world; world.add(make_shared<sphere>(point3(0,0,-1), 0.5)); world.add(make_shared<sphere>(point3(0,-100.5,-1), 100)); camera cam; cam.aspect_ratio = 16.0 / 9.0; cam.image_width = 400; cam.render(world);
}
Listing 40: [main.cc] The new main, using the new camera

Running this newly refactored program should give us the same rendered image as before.

   

Antialiasing

If you zoom into the rendered images so far, you might notice the harsh “stair step” nature of edges in our rendered images. This stair-stepping is commonly referred to as “aliasing”, or “jaggies”. When a real camera takes a picture, there are usually no jaggies along edges, because the edge pixels are a blend of some foreground and some background. Consider that unlike our rendered images, a true image of the world is continuous. Put another way, the world (and any true image of it) has effectively infinite resolution. We can get the same effect by averaging a bunch of samples for each pixel.


With a single ray through the center of each pixel, we are performing what is commonly called point sampling. The problem with point sampling can be illustrated by rendering a small checkerboard far away. If this checkerboard consists of an 8×8 grid of black and white tiles, but only four rays hit it, then all four rays might intersect only white tiles, or only black, or some odd combination. In the real world, when we perceive a checkerboard far away with our eyes, we perceive it as a gray color, instead of sharp points of black and white. That's because our eyes are naturally doing what we want our ray tracer to do: integrate the (continuous function of) light falling on a particular (discrete) region of our rendered image.


Clearly we don't gain anything by just resampling the same ray through the pixel center multiple times; we'd just get the same result each time. Instead we want to sample the light falling around the pixel, and then integrate those samples to approximate the true continuous result. So, how do we integrate the light falling around the pixel?


We'll adopt the simplest model: sampling the square region centered at the pixel that extends halfway to each of the four neighboring pixels. This is not the optimal approach, but it is the most straightforward. (See A Pixel is Not a Little Square for a deeper dive into this topic.)

 

Figure 8: Pixel samples
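Conceptually, the change is just "average many jittered rays per pixel instead of one centered ray". Below is a rough sketch of my own; it is not one of the book's listings, it forward-references the random_double() utility defined in the next section, and it assumes pixel00_loc, pixel_delta_u, pixel_delta_v, camera_center, and ray_color() from the earlier main() are in scope. The book's real implementation (get_ray() and sample_square()) follows in the camera class.

// Conceptual only: average `samples_per_pixel` rays jittered within the pixel's square.
color sampled_pixel_color(int i, int j, int samples_per_pixel, const hittable& world) {
    color sum(0, 0, 0);
    for (int sample = 0; sample < samples_per_pixel; sample++) {
        auto px = i + (random_double() - 0.5);   // offset in [-0.5, +0.5) around the pixel center
        auto py = j + (random_double() - 0.5);
        auto pixel_sample = pixel00_loc + px * pixel_delta_u + py * pixel_delta_v;
        ray r(camera_center, pixel_sample - camera_center);
        sum += ray_color(r, world);
    }
    return sum / samples_per_pixel;
}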

   


Some Random Number Utilities


We're going to need a random number generator that returns real random numbers. This function should return a canonical random number, which by convention falls in the range 0 ≤ n < 1. The "less than" before the 1 is important, as we will sometimes take advantage of that.

A simple approach to this is to use the std::rand() function that can be found in <cstdlib>, which returns a random integer in the range 0 and RAND_MAX. Hence we can get a real random number as desired with the following code snippet, added to rtweekend.h:

 
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <memory>

...

// Utility Functions

inline double degrees_to_radians(double degrees) {
    return degrees * pi / 180.0;
}

inline double random_double() {
    // Returns a random real in [0,1).
    return std::rand() / (RAND_MAX + 1.0);
}

inline double random_double(double min, double max) {
    // Returns a random real in [min,max).
    return min + (max-min)*random_double();
}
Listing 41: [rtweekend.h] random_double() functions

C++ did not traditionally have a standard random number generator, but newer versions of C++ have addressed this issue with the <random> header (if imperfectly according to some experts). If you want to use this, you can obtain a random number with the conditions we need as follows:

 
...

#include <random>
...
inline double random_double() {
    static std::uniform_real_distribution<double> distribution(0.0, 1.0);
    static std::mt19937 generator;
    return distribution(generator);
}
...
Listing 42: [rtweekend.h] random_double(), alternate implementation
   

Generating Pixels with Multiple Samples

For a single pixel composed of multiple samples, we'll select samples from the area surrounding the pixel and average the resulting light (color) values together.

First we'll update the write_color() function to account for the number of samples we use: we need to find the average across all of the samples that we take. To do this, we'll add the full color from each iteration, and then finish with a single division (by the number of samples) at the end, before writing out the color. To ensure that the color components of the final result remain within the proper [0,1] bounds, we'll add and use a small helper function: interval::clamp(x).

 
class interval {
  public:
    ...

    bool surrounds(double x) const {
        return min < x && x < max;
    }

    double clamp(double x) const {
        if (x < min) return min;
        if (x > max) return max;
        return x;
    }

    ...
};
Listing 43: [interval.h] The interval::clamp() utility function

Here's the updated write_color() function that incorporates the interval clamping function:

 
#include "interval.h"
#include "vec3.h" using color = vec3; void write_color(std::ostream& out, const color& pixel_color) { auto r = pixel_color.x(); auto g = pixel_color.y(); auto b = pixel_color.z(); // Translate the [0,1] component values to the byte range [0,255].
static const interval intensity(0.000, 0.999); int rbyte = int(256 * intensity.clamp(r)); int gbyte = int(256 * intensity.clamp(g)); int bbyte = int(256 * intensity.clamp(b));
// Write out the pixel color components. out << rbyte << ' ' << gbyte << ' ' << bbyte << '\n'; }
Listing 44: [color.h] The multi-sample write_color() function

Now let's update the camera class to define and use a new camera::get_ray(i,j) function, which will generate different samples for each pixel. This function will use a new helper function sample_square() that generates a random sample point within the unit square centered at the origin. We then transform the random sample from this ideal square back to the particular pixel we're currently sampling.

 
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel

    void render(const hittable& world) {
        initialize();

        std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

        for (int j = 0; j < image_height; j++) {
            std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
            for (int i = 0; i < image_width; i++) {
                color pixel_color(0,0,0);
                for (int sample = 0; sample < samples_per_pixel; sample++) {
                    ray r = get_ray(i, j);
                    pixel_color += ray_color(r, world);
                }
                write_color(std::cout, pixel_samples_scale * pixel_color);
            }
        }

        std::clog << "\rDone. \n";
    }
    ...

  private:
    int    image_height;         // Rendered image height
    double pixel_samples_scale;  // Color scale factor for a sum of pixel samples
    point3 center;               // Camera center
    point3 pixel00_loc;          // Location of pixel 0, 0
    vec3   pixel_delta_u;        // Offset to pixel to the right
    vec3   pixel_delta_v;        // Offset to pixel below

    void initialize() {
        image_height = int(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

        pixel_samples_scale = 1.0 / samples_per_pixel;

        center = point3(0, 0, 0);
        ...
    }

    ray get_ray(int i, int j) const {
        // Construct a camera ray originating from the origin and directed at randomly sampled
        // point around the pixel location i, j.

        auto offset = sample_square();
        auto pixel_sample = pixel00_loc
                          + ((i + offset.x()) * pixel_delta_u)
                          + ((j + offset.y()) * pixel_delta_v);

        auto ray_origin = center;
        auto ray_direction = pixel_sample - ray_origin;

        return ray(ray_origin, ray_direction);
    }

    vec3 sample_square() const {
        // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
        return vec3(random_double() - 0.5, random_double() - 0.5, 0);
    }

    color ray_color(const ray& r, const hittable& world) const {
        ...
    }
};

#endif
Listing 45: [camera.h] Camera with samples-per-pixel parameter

(In addition to the new sample_square() function above, you'll also find the function sample_disk() in the Github source code. This is included in case you'd like to experiment with non-square pixels, but we won't be using it in this book. sample_disk() depends on the function random_in_unit_disk() which is defined later on.)


Main is updated to set this new camera parameter.

 
int main() {
    ...

    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
cam.samples_per_pixel = 100;
cam.render(world); }

Listing 46: [main.cc] Setting the new samples-per-pixel parameter


Zooming in on the resulting image, we can see the difference in edge pixels.


Image 6: Before and after antialiasing

   

 Diffuse Materials


Now that we have objects and multiple rays per pixel, we can make some realistic looking materials. We'll start with diffuse (matte) materials. One question is whether we mix and match geometry and materials (so that we can assign a material to multiple spheres, or vice versa) or if geometry and materials are tightly bound (which could be useful for procedural objects where the geometry and material are linked). We'll go with separate, which is usual in most renderers, but do be aware that there are alternative approaches.

   


A Simple Diffuse Material

Diffuse objects that don’t emit their own light merely take on the color of their surroundings, but they do modulate that with their own intrinsic color. Light that reflects off a diffuse surface has its direction randomized, so, if we send three rays into a crack between two diffuse surfaces they will each have different random behavior:

 
Figure 9: Light ray bounces

They might also be absorbed rather than reflected. The darker the surface, the more likely the ray is absorbed (that’s why it's dark!). Really any algorithm that randomizes direction will produce surfaces that look matte. Let's start with the most intuitive: a surface that randomly bounces a ray equally in all directions. For this material, a ray that hits the surface has an equal probability of bouncing in any direction away from the surface.

 
Figure 10: Equal reflection above the horizon

This very intuitive material is the simplest kind of diffuse and — indeed — many of the first raytracing papers used this diffuse method (before adopting a more accurate method that we'll be implementing a little bit later). We don't currently have a way to randomly reflect a ray, so we'll need to add a few functions to our vector utility header. The first thing we need is the ability to generate arbitrary random vectors:

 
class vec3 {
  public:
    ...

    double length_squared() const {
        return e[0]*e[0] + e[1]*e[1] + e[2]*e[2];
    }

static vec3 random() { return vec3(random_double(), random_double(), random_double()); } static vec3 random(double min, double max) { return vec3(random_double(min,max), random_double(min,max), random_double(min,max)); }
};
Listing 47: [vec3.h] vec3 random utility functions


Then we need to figure out how to manipulate a random vector so that we only get results that are on the surface of a hemisphere. There are analytical methods of doing this, but they are actually surprisingly complicated to understand, and quite a bit complicated to implement. Instead, we'll use what is typically the easiest algorithm: a rejection method. A rejection method works by repeatedly generating random samples until we produce a sample that meets the desired criteria. In other words, keep rejecting bad samples until you find a good one.

There are many equally valid ways of generating a random vector on a hemisphere using the rejection method, but for our purposes we will go with the simplest, which is:

  1. Generate a random vector inside the unit sphere
  2. Normalize this vector to extend it to the sphere surface
  3. Invert the normalized vector if it falls onto the wrong hemisphere


First, we will use a rejection method to generate the random vector inside the unit sphere (that is, a sphere of radius 1). Pick a random point inside the cube enclosing the unit sphere (that is, where x, y, and z are all in the range [-1,+1]). If this point is outside the unit sphere, then generate a new one until we find one that lies inside or on the unit sphere.

 
Figure 11: Two vectors were rejected before finding a good one (pre-normalization)

 
Figure 12: The accepted random vector is normalized to produce a unit vector

Here's our first draft of the function:

 
...

inline vec3 unit_vector(const vec3& v) {
    return v / v.length();
}

inline vec3 random_unit_vector() {
    while (true) {
        auto p = vec3::random(-1,1);
        auto lensq = p.length_squared();
        if (lensq <= 1)
            return p / sqrt(lensq);
    }
}

Listing 48: [vec3.h] The random_unit_vector() function, version one

Sadly, we have a small floating-point abstraction leak to deal with. Since floating-point numbers have finite precision, a very small value can underflow to zero when squared. So if all three coordinates are small enough (that is, very near the center of the sphere), the norm of the vector will be zero, and thus normalizing will yield the bogus vector [±∞, ±∞, ±∞]. To fix this, we'll also reject points that lie inside this "black hole" around the center. With double precision (64-bit floats), we can safely support values greater than 10^-160.

Here's our more robust function:

 
inline vec3 random_unit_vector() {
    while (true) {
        auto p = vec3::random(-1,1);
        auto lensq = p.length_squared();
        if (1e-160 < lensq && lensq <= 1)
            return p / sqrt(lensq);
    }
}

Listing 49: [vec3.h] The random_unit_vector() function, version two


现在我们在单位球体的表面上有一个随机向量,我们可以确定它是否在 通过与表面法线比较来校正半球:

 

Figure 13: The normal vector tells us which hemisphere we need

We can take the dot product of the surface normal and our random vector to determine if it's in the correct hemisphere. If the dot product is positive, then the vector is in the correct hemisphere. If the dot product is negative, then we need to invert the vector.

 
...

inline vec3 random_unit_vector() {
    while (true) {
        auto p = vec3::random(-1,1);
        auto lensq = p.length_squared();
        if (1e-160 < lensq && lensq <= 1)
            return p / sqrt(lensq);
    }
}

inline vec3 random_on_hemisphere(const vec3& normal) {
    vec3 on_unit_sphere = random_unit_vector();
    if (dot(on_unit_sphere, normal) > 0.0) // In the same hemisphere as the normal
        return on_unit_sphere;
    else
        return -on_unit_sphere;
}
Listing 50: [vec3.h] The random_on_hemisphere() function

If a ray bounces off of a material and keeps 100% of its color, then we say that the material is white. If a ray bounces off of a material and keeps 0% of its color, then we say that the material is black. As a first demonstration of our new diffuse material we'll set the ray_color function to return 50% of the color from a bounce. We should expect to get a nice gray color.

 
class camera {
  ...
  private:
    ...
    color ray_color(const ray& r, const hittable& world) const {
        hit_record rec;

        if (world.hit(r, interval(0, infinity), rec)) {
            vec3 direction = random_on_hemisphere(rec.normal);
            return 0.5 * ray_color(ray(rec.p, direction), world);
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};
Listing 51: [camera.h] ray_color() using a random ray direction

... Indeed we do get rather nice gray spheres:

Image 7: First render of a diffuse sphere

   

Limiting the Number of Child Rays

There's one potential problem lurking here. Notice that the ray_color function is recursive. When will it stop recursing? When it fails to hit anything. In some cases, however, that may be a long time — long enough to blow the stack. To guard against that, let's limit the maximum recursion depth, returning no light contribution at the maximum depth:

 
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel
    int    max_depth         = 10;   // Maximum number of ray bounces into scene

    void render(const hittable& world) {
        initialize();

        std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

        for (int j = 0; j < image_height; j++) {
            std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
            for (int i = 0; i < image_width; i++) {
                color pixel_color(0,0,0);
                for (int sample = 0; sample < samples_per_pixel; sample++) {
                    ray r = get_ray(i, j);
                    pixel_color += ray_color(r, max_depth, world);
                }
                write_color(std::cout, pixel_samples_scale * pixel_color);
            }
        }

        std::clog << "\rDone.                 \n";
    }
    ...

  private:
    ...

    color ray_color(const ray& r, int depth, const hittable& world) const {
        // If we've exceeded the ray bounce limit, no more light is gathered.
        if (depth <= 0)
            return color(0,0,0);

        hit_record rec;

        if (world.hit(r, interval(0, infinity), rec)) {
            vec3 direction = random_on_hemisphere(rec.normal);
            return 0.5 * ray_color(ray(rec.p, direction), depth-1, world);
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};
Listing 52: [camera.h] camera::ray_color() with depth limiting

Update the main() function to use this new depth limit:

 
int main() {
    ...

    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;

    cam.render(world);
}
Listing 53: [main.cc] Using the new ray depth limiting

For this very simple scene we should get basically the same result:

Image 8: Second render of a diffuse sphere with limited bounces

   

Fixing Shadow Acne

There’s also a subtle bug that we need to address. A ray will attempt to accurately calculate the intersection point when it intersects with a surface. Unfortunately for us, this calculation is susceptible to floating point rounding errors which can cause the intersection point to be ever so slightly off. This means that the origin of the next ray, the ray that is randomly scattered off of the surface, is unlikely to be perfectly flush with the surface. It might be just above the surface. It might be just below the surface. If the ray's origin is just below the surface then it could intersect with that surface again. Which means that it will find the nearest surface at t=0.00000001 or whatever floating point approximation the hit function gives us. The simplest hack to address this is just to ignore hits that are very close to the calculated intersection point:

 
class camera {
  ...
  private:
    ...
    color ray_color(const ray& r, int depth, const hittable& world) const {
        // If we've exceeded the ray bounce limit, no more light is gathered.
        if (depth <= 0)
            return color(0,0,0);

        hit_record rec;

        if (world.hit(r, interval(0.001, infinity), rec)) {
            vec3 direction = random_on_hemisphere(rec.normal);
            return 0.5 * ray_color(ray(rec.p, direction), depth-1, world);
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};
Listing 54: [camera.h] Calculating reflected ray origins with tolerance


This gets rid of the shadow acne problem. Yes, it is really called that. Here's the result:


Image 9: Diffuse sphere with no shadow acne

   


True Lambertian Reflection


Scattering reflected rays evenly about the hemisphere produces a nice soft diffuse model, but we can definitely do better. A more accurate representation of real diffuse objects is the Lambertian distribution. This distribution scatters reflected rays in a manner that is proportional to cos(ϕ), where ϕ is the angle between the reflected ray and the surface normal. This means that a reflected ray is most likely to scatter in a direction near the surface normal, and less likely to scatter in directions away from the normal. This non-uniform Lambertian distribution does a better job of modeling material reflection in the real world than our previous uniform scattering.


We can create this distribution by adding a random unit vector to the normal vector. At the point of intersection on a surface there is the hit point, p, and there is the normal of the surface, n. At the point of intersection, this surface has exactly two sides, so there can only be two unique unit spheres tangent to any intersection point (one unique sphere for each side of the surface). These two unit spheres will be displaced from the surface by the length of their radius, which is exactly one for a unit sphere.


One sphere will be displaced in the direction of the surface's normal (n) and one sphere will be displaced in the opposite direction (−n). This leaves us with two unit-size spheres that will only be touching the surface at the intersection point. From this, one of the spheres will have its center at (P + n) and the other sphere will have its center at (P − n). The sphere with its center at (P − n) is considered inside the surface, whereas the sphere with center (P + n) is considered outside the surface.

We want to select the tangent unit sphere that is on the same side of the surface as the ray origin. Pick a random point S on this unit radius sphere and send a ray from the hit point P to the random point S (this is the vector (S − P)):

 
Figure 14: Randomly generating a vector according to Lambertian distribution

The change is actually fairly minimal:

 
class camera {
    ...
    color ray_color(const ray& r, int depth, const hittable& world) const {
        // If we've exceeded the ray bounce limit, no more light is gathered.
        if (depth <= 0)
            return color(0,0,0);

        hit_record rec;

        if (world.hit(r, interval(0.001, infinity), rec)) {
            vec3 direction = rec.normal + random_unit_vector();
            return 0.5 * ray_color(ray(rec.p, direction), depth-1, world);
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};
Listing 55: [camera.h] ray_color() with replacement diffuse

After rendering we get a similar image:

Image 10: Correct rendering of Lambertian spheres

It's hard to tell the difference between these two diffuse methods, given that our scene of two spheres is so simple, but you should be able to notice two important visual differences:

  1. The shadows are more pronounced after the change
  2. Both spheres are tinted blue from the sky after the change

Both of these changes are due to the less uniform scattering of the light rays—more rays are scattering toward the normal. This means that for diffuse objects, they will appear darker because less light bounces toward the camera. For the shadows, more light bounces straight-up, so the area underneath the sphere is darker.

Not a lot of common, everyday objects are perfectly diffuse, so our visual intuition of how these objects behave under light can be poorly formed. As scenes become more complicated over the course of the book, you are encouraged to switch between the different diffuse renderers presented here. Most scenes of interest will contain a large amount of diffuse materials. You can gain valuable insight by understanding the effect of different diffuse methods on the lighting of a scene.

   

Using Gamma Correction for Accurate Color Intensity

Note the shadowing under the sphere. The picture is very dark, but our spheres only absorb half the energy of each bounce, so they are 50% reflectors. The spheres should look pretty bright (in real life, a light grey) but they appear to be rather dark. We can see this more clearly if we walk through the full brightness gamut for our diffuse material. We start by setting the reflectance of the ray_color function from 0.5 (50%) to 0.1 (10%):

 
class camera {
    ...
    color ray_color(const ray& r, int depth, const hittable& world) const {
        // If we've exceeded the ray bounce limit, no more light is gathered.
        if (depth <= 0)
            return color(0,0,0);

        hit_record rec;

        if (world.hit(r, interval(0.001, infinity), rec)) {
            vec3 direction = rec.normal + random_unit_vector();
            return 0.1 * ray_color(ray(rec.p, direction), depth-1, world);
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};
Listing 56: [camera.h] ray_color() with 10% reflectance

We render out at this new 10% reflectance. We then set reflectance to 30% and render again. We repeat for 50%, 70%, and finally 90%. You can overlay these images from left to right in the photo editor of your choice and you should get a very nice visual representation of the increasing brightness of your chosen gamut. This is the one that we've been working with so far:

Image 11: The gamut of our renderer so far

If you look closely, or if you use a color picker, you should notice that the 50% reflectance render (the one in the middle) is far too dark to be half-way between white and black (middle-gray). Indeed, the 70% reflector is closer to middle-gray. The reason for this is that almost all computer programs assume that an image is “gamma corrected” before being written into an image file. This means that the 0 to 1 values have some transform applied before being stored as a byte. Images with data that are written without being transformed are said to be in linear space, whereas images that are transformed are said to be in gamma space. It is likely that the image viewer you are using is expecting an image in gamma space, but we are giving it an image in linear space. This is the reason why our image appears inaccurately dark.

There are many good reasons for why images should be stored in gamma space, but for our purposes we just need to be aware of it. We are going to transform our data into gamma space so that our image viewer can more accurately display our image. As a simple approximation, we can use “gamma 2” as our transform, which is the power that you use when going from gamma space to linear space. We need to go from linear space to gamma space, which means taking the inverse of “gamma 2", which means an exponent of 1/gamma, which is just the square-root. We'll also want to ensure that we robustly handle negative inputs.

 
inline double linear_to_gamma(double linear_component) {
    if (linear_component > 0)
        return std::sqrt(linear_component);

    return 0;
}

void write_color(std::ostream& out, const color& pixel_color) {
    auto r = pixel_color.x();
    auto g = pixel_color.y();
    auto b = pixel_color.z();

    // Apply a linear to gamma transform for gamma 2
    r = linear_to_gamma(r);
    g = linear_to_gamma(g);
    b = linear_to_gamma(b);

    // Translate the [0,1] component values to the byte range [0,255].
    static const interval intensity(0.000, 0.999);
    int rbyte = int(256 * intensity.clamp(r));
    int gbyte = int(256 * intensity.clamp(g));
    int bbyte = int(256 * intensity.clamp(b));

    // Write out the pixel color components.
    out << rbyte << ' ' << gbyte << ' ' << bbyte << '\n';
}
Listing 57: [color.h] write_color(), with gamma correction

Using this gamma correction, we now get a much more consistent ramp from darkness to lightness:

Image 12: The gamut of our renderer, gamma-corrected

   

Metal

   

An Abstract Class for Materials

If we want different objects to have different materials, we have a design decision. We could have a universal material type with lots of parameters so any individual material type could just ignore the parameters that don't affect it. This is not a bad approach. Or we could have an abstract material class that encapsulates unique behavior. I am a fan of the latter approach. For our program the material needs to do two things:

  1. Produce a scattered ray (or say it absorbed the incident ray).
  2. If scattered, say how much the ray should be attenuated.

This suggests the abstract class:

 
#ifndef MATERIAL_H
#define MATERIAL_H

#include "hittable.h"

class material {
  public:
    virtual ~material() = default;

    virtual bool scatter(
        const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered
    ) const {
        return false;
    }
};

#endif
Listing 58: [material.h] The material class

   

A Data Structure to Describe Ray-Object Intersections

The hit_record is to avoid a bunch of arguments so we can stuff whatever info we want in there. You can use arguments instead of an encapsulated type, it’s just a matter of taste. Hittables and materials need to be able to reference the other's type in code so there is some circularity of the references. In C++ we add the line class material; to tell the compiler that material is a class that will be defined later. Since we're just specifying a pointer to the class, the compiler doesn't need to know the details of the class, solving the circular reference issue.

 
class material;

class hit_record {
  public:
    point3 p;
    vec3 normal;
    shared_ptr<material> mat;
    double t;
    bool front_face;

    void set_face_normal(const ray& r, const vec3& outward_normal) {
        front_face = dot(r.direction(), outward_normal) < 0;
        normal = front_face ? outward_normal : -outward_normal;
    }
};
Listing 59: [hittable.h] Hit record with added material pointer

hit_record is just a way to stuff a bunch of arguments into a class so we can send them as a group. When a ray hits a surface (a particular sphere for example), the material pointer in the hit_record will be set to point at the material pointer the sphere was given when it was set up in main() when we start. When the ray_color() routine gets the hit_record it can call member functions of the material pointer to find out what ray, if any, is scattered.

To achieve this, hit_record needs to be told the material that is assigned to the sphere.

 
class sphere : public hittable {
  public:
    sphere(const point3& center, double radius) : center(center), radius(std::fmax(0,radius)) {
        // TODO: Initialize the material pointer `mat`.
    }

    bool hit(const ray& r, interval ray_t, hit_record& rec) const override {
        ...

        rec.t = root;
        rec.p = r.at(rec.t);
        vec3 outward_normal = (rec.p - center) / radius;
        rec.set_face_normal(r, outward_normal);
        rec.mat = mat;

        return true;
    }

  private:
    point3 center;
    double radius;
    shared_ptr<material> mat;
};
Listing 60: [sphere.h] Ray-sphere intersection with added material information

   

Modeling Light Scatter and Reflectance

Here and throughout these books we will use the term albedo (Latin for “whiteness”). Albedo is a precise technical term in some disciplines, but in all cases it is used to define some form of fractional reflectance. Albedo will vary with material color and (as we will later implement for glass materials) can also vary with incident viewing direction (the direction of the incoming ray).

Lambertian (diffuse) reflectance can either always scatter and attenuate light according to its reflectance R, or it can sometimes scatter (with probability 1 − R) with no attenuation (where a ray that isn't scattered is just absorbed into the material). It could also be a mixture of both those strategies. We will choose to always scatter, so implementing Lambertian materials becomes a simple task:

 
class material {
    ...
};

class lambertian : public material {
  public:
    lambertian(const color& albedo) : albedo(albedo) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        auto scatter_direction = rec.normal + random_unit_vector();
        scattered = ray(rec.p, scatter_direction);
        attenuation = albedo;
        return true;
    }

  private:
    color albedo;
};
Listing 61: [material.h] The new lambertian material class

Note the third option: we could scatter with some fixed probability p and have attenuation be albedo/p. Your choice.
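As a hedged sketch of that third option (not used anywhere else in this book), a probabilistic variant of the Lambertian material might look something like the following, where lambertian_p and p are purely hypothetical names for this illustration:

class lambertian_p : public material {
  public:
    lambertian_p(const color& albedo, double p) : albedo(albedo), p(p) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        // Absorb the ray with probability 1-p ...
        if (random_double() >= p)
            return false;

        // ... otherwise scatter as usual, dividing the albedo by p so the expected
        // contribution matches the always-scatter version above.
        auto scatter_direction = rec.normal + random_unit_vector();
        scattered = ray(rec.p, scatter_direction);
        attenuation = albedo / p;
        return true;
    }

  private:
    color albedo;
    double p;
};

(The degenerate-direction guard discussed next would apply to this variant as well.)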

If you read the code above carefully, you'll notice a small chance of mischief. If the random unit vector we generate is exactly opposite the normal vector, the two will sum to zero, which will result in a zero scatter direction vector. This leads to bad scenarios later on (infinities and NaNs), so we need to intercept the condition before we pass it on.

In service of this, we'll create a new vector method — vec3::near_zero() — that returns true if the vector is very close to zero in all dimensions.

The following changes will use the C++ standard library function std::fabs, which returns the absolute value of its input.

 
class vec3 {
    ...

    double length_squared() const {
        return e[0]*e[0] + e[1]*e[1] + e[2]*e[2];
    }

    bool near_zero() const {
        // Return true if the vector is close to zero in all dimensions.
        auto s = 1e-8;
        return (std::fabs(e[0]) < s) && (std::fabs(e[1]) < s) && (std::fabs(e[2]) < s);
    }

    ...
};
Listing 62: [vec3.h] The vec3::near_zero() method
 
class lambertian : public material {
  public:
    lambertian(const color& albedo) : albedo(albedo) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        auto scatter_direction = rec.normal + random_unit_vector();

        // Catch degenerate scatter direction
        if (scatter_direction.near_zero())
            scatter_direction = rec.normal;

        scattered = ray(rec.p, scatter_direction);
        attenuation = albedo;
        return true;
    }

  private:
    color albedo;
};
Listing 63: [material.h] Lambertian scatter, bullet-proof
   

Mirrored Light Reflection

For polished metals the ray won’t be randomly scattered. The key question is: How does a ray get reflected from a metal mirror? Vector math is our friend here:

 
Figure 15: Ray reflection

The reflected ray direction in red is just v + 2b. In our design, n is a unit vector (length one), but v may not be. To get the vector b, we scale the normal vector by the length of the projection of v onto n, which is given by the dot product v⋅n. (If n were not a unit vector, we would also need to divide this dot product by the length of n.) Finally, because v points into the surface, and we want b to point out of the surface, we need to negate this projection length.

Putting everything together, we get the following computation of the reflected vector:

 
...

inline vec3 random_on_hemisphere(const vec3& normal) {
    ...
}

inline vec3 reflect(const vec3& v, const vec3& n) {
    return v - 2*dot(v,n)*n;
}
Listing 64: [vec3.h] vec3 reflection function

The metal material just reflects rays using that formula:

 
...

class lambertian : public material {
    ...
};

class metal : public material {
  public:
    metal(const color& albedo) : albedo(albedo) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        vec3 reflected = reflect(r_in.direction(), rec.normal);
        scattered = ray(rec.p, reflected);
        attenuation = albedo;
        return true;
    }

  private:
    color albedo;
};
Listing 65: [material.h] Metal material with reflectance function


We need to modify the ray_color() function for all of our changes:

 
#include "hittable.h"
#include "material.h"
...

class camera {
  ...
  private:
    ...
    color ray_color(const ray& r, int depth, const hittable& world) const {
        // If we've exceeded the ray bounce limit, no more light is gathered.
        if (depth <= 0)
            return color(0,0,0);

        hit_record rec;

        if (world.hit(r, interval(0.001, infinity), rec)) {
            ray scattered;
            color attenuation;
            if (rec.mat->scatter(r, rec, attenuation, scattered))
                return attenuation * ray_color(scattered, depth-1, world);
            return color(0,0,0);
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};

Listing 66: [camera.h] Ray color with scattered reflectance


Now we'll update the sphere constructor to initialize the material pointer mat:

 
class sphere : public hittable {
  public:
    sphere(const point3& center, double radius, shared_ptr<material> mat)
      : center(center), radius(std::fmax(0,radius)), mat(mat) {}

    ...
};

Listing 67: [sphere.h] Initializing sphere with a material
   


A Scene with Metal Spheres


Now let's add some metal spheres to our scene:

 
#include "rtweekend.h"

#include "camera.h"
#include "hittable.h"
#include "hittable_list.h"
#include "material.h"
#include "sphere.h" int main() { hittable_list world;
auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0)); auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5)); auto material_left = make_shared<metal>(color(0.8, 0.8, 0.8)); auto material_right = make_shared<metal>(color(0.8, 0.6, 0.2)); world.add(make_shared<sphere>(point3( 0.0, -100.5, -1.0), 100.0, material_ground)); world.add(make_shared<sphere>(point3( 0.0, 0.0, -1.2), 0.5, material_center)); world.add(make_shared<sphere>(point3(-1.0, 0.0, -1.0), 0.5, material_left)); world.add(make_shared<sphere>(point3( 1.0, 0.0, -1.0), 0.5, material_right));
camera cam; cam.aspect_ratio = 16.0 / 9.0; cam.image_width = 400; cam.samples_per_pixel = 100; cam.max_depth = 50; cam.render(world); }
Listing 68: [main.cc] Scene with metal spheres

Which gives:

Image 13: Shiny metal

   

Fuzzy Reflection

We can also randomize the reflected direction by using a small sphere and choosing a new endpoint for the ray. We'll use a random point from the surface of a sphere centered on the original endpoint, scaled by the fuzz factor.

 
Figure 16: Generating fuzzed reflection rays

The bigger the fuzz sphere, the fuzzier the reflections will be. This suggests adding a fuzziness parameter that is just the radius of the sphere (so zero is no perturbation). The catch is that for big spheres or grazing rays, we may scatter below the surface. We can just have the surface absorb those.

Also note that in order for the fuzz sphere to make sense, it needs to be consistently scaled compared to the reflection vector, which can vary in length arbitrarily. To address this, we need to normalize the reflected ray.

 
class metal : public material {
  public:
    metal(const color& albedo, double fuzz) : albedo(albedo), fuzz(fuzz < 1 ? fuzz : 1) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        vec3 reflected = reflect(r_in.direction(), rec.normal);
        reflected = unit_vector(reflected) + (fuzz * random_unit_vector());
        scattered = ray(rec.p, reflected);
        attenuation = albedo;
        return (dot(scattered.direction(), rec.normal) > 0);
    }

  private:
    color albedo;
    double fuzz;
};
Listing 69: [material.h] Metal material fuzziness

We can try that out by adding fuzziness 0.3 and 1.0 to the metals:

 
int main() {
    ...
    auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
    auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5));
    auto material_left   = make_shared<metal>(color(0.8, 0.8, 0.8), 0.3);
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2), 1.0);

    ...
}
Listing 70: [main.cc] Metal spheres with fuzziness

Image 14: Fuzzed metal

   

Dielectrics

Clear materials such as water, glass, and diamond are dielectrics. When a light ray hits them, it splits into a reflected ray and a refracted (transmitted) ray. We’ll handle that by randomly choosing between reflection and refraction, only generating one scattered ray per interaction.

As a quick review of terms, a reflected ray hits a surface and then “bounces” off in a new direction.

A refracted ray bends as it transitions from a material's surroundings into the material itself (as with glass or water). This is why a pencil looks bent when partially inserted in water.

The amount that a refracted ray bends is determined by the material's refractive index. Generally, this is a single value that describes how much light bends when entering a material from a vacuum. Glass has a refractive index of something like 1.5–1.7, diamond is around 2.4, and air has a small refractive index of 1.000293.

When a transparent material is embedded in a different transparent material, you can describe the refraction with a relative refraction index: the refractive index of the object's material divided by the refractive index of the surrounding material. For example, if you want to render a glass ball under water, then the glass ball would have an effective refractive index of 1.125. This is given by the refractive index of glass (1.5) divided by the refractive index of water (1.333).

You can find the refractive index of most common materials with a quick internet search.
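As a small illustrative sketch (the dielectric material itself is only defined later in this chapter, and glass_index and water_index are just hypothetical local names), the glass-ball-under-water example above boils down to:

double glass_index = 1.5;    // refractive index of the sphere's material
double water_index = 1.333;  // refractive index of the surrounding medium

// Relative index across the boundary, roughly 1.125; this ratio is what you would
// hand to the dielectric material once it exists.
auto submerged_glass = make_shared<dielectric>(glass_index / water_index);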

   

Refraction

The hardest part to debug is the refracted ray. I usually first just have all the light refract if there is a refraction ray at all. For this project, I tried to put two glass balls in our scene, and I got this (I have not told you how to do this right or wrong yet, but soon!):

Image 15: Glass first

Is that right? Glass balls look odd in real life. But no, it isn’t right. The world should be flipped upside down and no weird black stuff. I just printed out the ray straight through the middle of the image and it was clearly wrong. That often does the job.
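As a concrete (and purely illustrative) version of that trick, here is a sketch of a temporary debug block you might paste into camera::render() right after initialize(): it fires the ray through the center pixel and logs what the hit material scatters it into. The dbg_* names are hypothetical locals used only for this sketch.

ray center_ray = get_ray(image_width/2, image_height/2);
std::clog << "center ray direction: " << center_ray.direction().x() << ' '
          << center_ray.direction().y() << ' ' << center_ray.direction().z() << '\n';

hit_record dbg_rec;
if (world.hit(center_ray, interval(0.001, infinity), dbg_rec)) {
    ray dbg_scattered;
    color dbg_attenuation;
    if (dbg_rec.mat->scatter(center_ray, dbg_rec, dbg_attenuation, dbg_scattered))
        std::clog << "scattered direction:  " << dbg_scattered.direction().x() << ' '
                  << dbg_scattered.direction().y() << ' ' << dbg_scattered.direction().z() << '\n';
}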

   

Snell's Law

The refraction is described by Snell’s law:

η ⋅ sin θ = η′ ⋅ sin θ′

Where θ and θ′ are the angles from the normal, and η and η′ (pronounced "eta" and "eta prime") are the refractive indices. The geometry is:

 
Figure 17: Ray refraction

In order to determine the direction of the refracted ray, we have to solve for sin θ′:

sin θ′ = (η / η′) ⋅ sin θ

On the refracted side of the surface there is a refracted ray R′ and a normal n′, and there exists an angle, θ′, between them. We can split R′ into the parts of the ray that are perpendicular to n′ and parallel to n′:

R′ = R′⊥ + R′∥

If we solve for R′⊥ and R′∥ we get:

R′⊥ = (η / η′) (R + cos θ n)
R′∥ = −√(1 − |R′⊥|²) n

You can go ahead and prove this for yourself if you want, but we will treat it as fact and move on. The rest of the book will not require you to understand the proof.
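For the curious, here is a quick sketch of where the parallel component's form comes from (assuming R and R′ are unit vectors; this is a hint rather than a full proof):

$$ |R'_{\parallel}|^2 = |R'|^2 - |R'_{\perp}|^2 = 1 - |R'_{\perp}|^2,
   \qquad R'_{\parallel} = -\sqrt{1 - |R'_{\perp}|^2}\;\mathbf{n} $$

since R′∥ points opposite the normal on the refracted side of the surface.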

We know the value of every term on the right-hand side except for cos θ. It is well known that the dot product of two vectors can be explained in terms of the cosine of the angle between them:

a⋅b = |a||b| cos θ

If we restrict a and b to be unit vectors:

a⋅b = cos θ

We can now rewrite R′⊥ in terms of known quantities:

R′⊥ = (η / η′) (R + (−R⋅n) n)

When we combine them back together, we can write a function to calculate R′:

 
...

inline vec3 reflect(const vec3& v, const vec3& n) {
    return v - 2*dot(v,n)*n;
}

inline vec3 refract(const vec3& uv, const vec3& n, double etai_over_etat) {
    auto cos_theta = std::fmin(dot(-uv, n), 1.0);
    vec3 r_out_perp =  etai_over_etat * (uv + cos_theta*n);
    vec3 r_out_parallel = -std::sqrt(std::fabs(1.0 - r_out_perp.length_squared())) * n;
    return r_out_perp + r_out_parallel;
}
Listing 71: [vec3.h] Refraction function

And the dielectric material that always refracts is:

 
...

class metal : public material {
    ...
};

class dielectric : public material {
  public:
    dielectric(double refraction_index) : refraction_index(refraction_index) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        attenuation = color(1.0, 1.0, 1.0);
        double ri = rec.front_face ? (1.0/refraction_index) : refraction_index;

        vec3 unit_direction = unit_vector(r_in.direction());
        vec3 refracted = refract(unit_direction, rec.normal, ri);

        scattered = ray(rec.p, refracted);
        return true;
    }

  private:
    // Refractive index in vacuum or air, or the ratio of the material's refractive index over
    // the refractive index of the enclosing media
    double refraction_index;
};
Listing 72: [material.h] Dielectric material class that always refracts

Now we'll update the scene to illustrate refraction by changing the left sphere to glass, which has an index of refraction of approximately 1.5.

 
auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5));
auto material_left = make_shared<dielectric>(1.50);
auto material_right = make_shared<metal>(color(0.8, 0.6, 0.2), 1.0);
Listing 73: [main.cc] Changing the left sphere to glass

This gives us the following result:

Image 16: Glass sphere that always refracts

   

Total Internal Reflection

One troublesome practical issue with refraction is that there are ray angles for which no solution is possible using Snell's law. When a ray enters a medium of lower index of refraction at a sufficiently glancing angle, it can refract with an angle greater than 90°. If we refer back to Snell's law and the derivation of sin θ′:

sin θ′ = (η / η′) ⋅ sin θ

If the ray is inside glass and outside is air (η = 1.5 and η′ = 1.0):

sin θ′ = (1.5 / 1.0) ⋅ sin θ

The value of sin θ′ cannot be greater than 1. So, if,

(1.5 / 1.0) ⋅ sin θ > 1.0

the equality between the two sides of the equation is broken, and a solution cannot exist. If a solution does not exist, the glass cannot refract, and therefore must reflect the ray:

 
if (ri * sin_theta > 1.0) {
    // Must Reflect
    ...
} else {
    // Can Refract
    ...
}
Listing 74: [material.h] Determining if the ray can refract

Here all the light is reflected, and because in practice that is usually inside solid objects, it is called total internal reflection. This is why sometimes the water-to-air boundary acts as a perfect mirror when you are submerged — if you're under water looking up, you can see things above the water, but when you are close to the surface and looking sideways, the water surface looks like a mirror.


We can solve for sin_theta using the trigonometric identities:

sin θ = √(1 − cos²θ)

and

cos θ = −R⋅n

 
double cos_theta = std::fmin(dot(-unit_direction, rec.normal), 1.0);
double sin_theta = std::sqrt(1.0 - cos_theta*cos_theta);

if (ri * sin_theta > 1.0) {
    // Must Reflect
    ...
} else {
    // Can Refract
    ...
}

Listing 75: [material.h] Determining if the ray can refract


And the dielectric material that always refracts when possible is:

 
class dielectric : public material {
  public:
    dielectric(double refraction_index) : refraction_index(refraction_index) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        attenuation = color(1.0, 1.0, 1.0);
        double ri = rec.front_face ? (1.0/refraction_index) : refraction_index;

        vec3 unit_direction = unit_vector(r_in.direction());
        double cos_theta = std::fmin(dot(-unit_direction, rec.normal), 1.0);
        double sin_theta = std::sqrt(1.0 - cos_theta*cos_theta);

        bool cannot_refract = ri * sin_theta > 1.0;
        vec3 direction;

        if (cannot_refract)
            direction = reflect(unit_direction, rec.normal);
        else
            direction = refract(unit_direction, rec.normal, ri);

        scattered = ray(rec.p, direction);
        return true;
    }

  private:
    // Refractive index in vacuum or air, or the ratio of the material's refractive index over
    // the refractive index of the enclosing media
    double refraction_index;
};

Listing 76: [material.h] Dielectric material class with reflection


Attenuation is always 1: the glass surface absorbs nothing.


If we render the prior scene with the new dielectric::scatter() function, we see ... no change. Huh?


Well, it turns out that given a sphere of a material with a refractive index greater than air, there is no incident angle that will yield total internal reflection, neither at the ray-sphere entrance point nor at the ray exit. This is due to the geometry of spheres, because a grazing incoming ray will always be bent to a smaller angle, and then bent back to the original angle on exit.


So how can we illustrate total internal reflection? Well, if the sphere has a refractive index lower than the medium it sits in, then we can hit it at shallow grazing angles and get total external reflection. That should be good enough to observe the effect.


We'll model a world filled with water (refractive index approximately 1.33), and change the sphere material to air (refractive index 1.00): an air bubble! To do this, change the left sphere material's refractive index to

(index of refraction of air) / (index of refraction of water) = 1.00 / 1.33

 
auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5));
auto material_left = make_shared<dielectric>(1.00 / 1.33);
auto material_right = make_shared<metal>(color(0.8, 0.6, 0.2), 1.0);

Listing 77: [main.cc] Left sphere is an air bubble in water


This change yields the following render:


Image 17: Air bubble sometimes refracting, sometimes reflecting


Here you can see that rays arriving more or less head-on refract, while grazing rays reflect.

   

The Schlick Approximation


Now real glass has reflectivity that varies with angle: look at a window at a steep angle and it becomes a mirror. There is a big ugly equation for that, but almost everybody uses a cheap and surprisingly accurate polynomial approximation by Christophe Schlick. This yields our full glass material:

 
class dielectric : public material {
  public:
    dielectric(double refraction_index) : refraction_index(refraction_index) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        attenuation = color(1.0, 1.0, 1.0);
        double ri = rec.front_face ? (1.0/refraction_index) : refraction_index;

        vec3 unit_direction = unit_vector(r_in.direction());
        double cos_theta = std::fmin(dot(-unit_direction, rec.normal), 1.0);
        double sin_theta = std::sqrt(1.0 - cos_theta*cos_theta);

        bool cannot_refract = ri * sin_theta > 1.0;
        vec3 direction;

        if (cannot_refract || reflectance(cos_theta, ri) > random_double())
            direction = reflect(unit_direction, rec.normal);
        else
            direction = refract(unit_direction, rec.normal, ri);

        scattered = ray(rec.p, direction);
        return true;
    }

  private:
    // Refractive index in vacuum or air, or the ratio of the material's refractive index over
    // the refractive index of the enclosing media
    double refraction_index;

    static double reflectance(double cosine, double refraction_index) {
        // Use Schlick's approximation for reflectance.
        auto r0 = (1 - refraction_index) / (1 + refraction_index);
        r0 = r0*r0;
        return r0 + (1-r0)*std::pow((1 - cosine),5);
    }
};

Listing 78: [material.h] Full glass material
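For reference, the reflectance() helper above implements Schlick's approximation, which can be written as

$$ R(\theta) \approx R_0 + (1 - R_0)(1 - \cos\theta)^5,
   \qquad R_0 = \left(\frac{1 - \eta}{1 + \eta}\right)^{2} $$

where η here is the relative refractive index ri used in scatter().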
   


Modeling a Hollow Glass Sphere


Let's model a hollow glass sphere. This is a sphere of some thickness with another sphere of air inside it. If you think about the path of a ray going through such an object, it will hit the outer sphere, refract, hit the inner sphere (assuming we do hit it), refract a second time, and travel through the air inside. Then it will continue on, hit the inside surface of the inner sphere, refract back, then hit the inside surface of the outer sphere, and finally refract and exit back into the scene atmosphere.


The outer sphere is just modeled with a standard glass sphere, with a refractive index of around 1.50 (modeling a refraction from the outside air into the glass). The inner sphere is a bit different, because its refractive index should be relative to the material of the surrounding outer sphere, thus modeling a transition from glass into the inner air.


This is actually simple to specify, as the refraction_index parameter to the dielectric material can be interpreted as the ratio of the refractive index of the object divided by the refractive index of the enclosing medium. In this case, the inner sphere would have a refractive index of air (the inner sphere material) over the refractive index of glass (the enclosing medium), or 1.00/1.50 = 0.67.

Here's the code:

 
...
auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5));
    auto material_left   = make_shared<dielectric>(1.50);
    auto material_bubble = make_shared<dielectric>(1.00 / 1.50);
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2), 0.0);

    world.add(make_shared<sphere>(point3( 0.0, -100.5, -1.0), 100.0, material_ground));
    world.add(make_shared<sphere>(point3( 0.0,    0.0, -1.2),   0.5, material_center));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),   0.5, material_left));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),   0.4, material_bubble));
    world.add(make_shared<sphere>(point3( 1.0,    0.0, -1.0),   0.5, material_right));
    ...

Listing 79: [main.cc] Scene with hollow glass sphere

And here's the result:

Image 18: A hollow glass sphere

   

Positionable Camera

Cameras, like dielectrics, are a pain to debug, so I always develop mine incrementally. First, let’s allow for an adjustable field of view (fov). This is the visual angle from edge to edge of the rendered image. Since our image is not square, the fov is different horizontally and vertically. I always use vertical fov. I also usually specify it in degrees and change to radians inside a constructor — a matter of personal taste.
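As a reminder (this restates the degrees_to_radians() utility from the common constants header earlier in the book rather than introducing anything new), the conversion the camera code below relies on is just:

inline double degrees_to_radians(double degrees) {
    return degrees * pi / 180.0;
}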

   

Camera Viewing Geometry

First, we'll keep the rays coming from the origin and heading to the z = −1 plane. We could make it the z = −2 plane, or whatever, as long as we made h a ratio to that distance. Here is our setup:

 
Figure 18: Camera viewing geometry (from the side)


This implies h = tan(θ/2). Our camera now becomes:

 
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel
    int    max_depth         = 10;   // Maximum number of ray bounces into scene

    double vfov = 90;  // Vertical view angle (field of view)

    void render(const hittable& world) {
    ...

  private:
    ...

    void initialize() {
        image_height = int(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

        pixel_samples_scale = 1.0 / samples_per_pixel;

        center = point3(0, 0, 0);

        // Determine viewport dimensions.
        auto focal_length = 1.0;
        auto theta = degrees_to_radians(vfov);
        auto h = std::tan(theta/2);
        auto viewport_height = 2 * h * focal_length;
        auto viewport_width = viewport_height * (double(image_width)/image_height);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        auto viewport_u = vec3(viewport_width, 0, 0);
        auto viewport_v = vec3(0, -viewport_height, 0);

        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        pixel_delta_u = viewport_u / image_width;
        pixel_delta_v = viewport_v / image_height;

        // Calculate the location of the upper left pixel.
        auto viewport_upper_left = center - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2;
        pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
    }
    ...
};

Listing 80: [camera.h] Camera with adjustable field-of-view (fov)


We'll test out these changes with a simple scene of two touching spheres, using a 90° field of view.

 
int main() {
    hittable_list world;

    auto R = std::cos(pi/4);

    auto material_left  = make_shared<lambertian>(color(0,0,1));
    auto material_right = make_shared<lambertian>(color(1,0,0));

    world.add(make_shared<sphere>(point3(-R, 0, -1), R, material_left));
    world.add(make_shared<sphere>(point3( R, 0, -1), R, material_right));

    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;

    cam.vfov = 90;

    cam.render(world);
}

Listing 81: [main.cc] Scene with wide-angle camera


This gives us the rendering:


Image 19: A wide-angle view

   


Positioning and Orienting the Camera

To get an arbitrary viewpoint, let’s first name the points we care about. We’ll call the position where we place the camera lookfrom, and the point we look at lookat. (Later, if you want, you could define a direction to look in instead of a point to look at.)

We also need a way to specify the roll, or sideways tilt, of the camera: the rotation around the lookat-lookfrom axis. Another way to think about it is that even if you keep lookfrom and lookat constant, you can still rotate your head around your nose. What we need is a way to specify an “up” vector for the camera.

 
Figure 19: Camera view direction


We can specify any up vector we want, as long as it's not parallel to the view direction. Project this up vector onto the plane orthogonal to the view direction to get a camera-relative up vector. I use the common convention of naming this the "view up" (vup) vector. After a few cross products and vector normalizations, we now have a complete orthonormal basis (u, v, w) to describe our camera's orientation. u will be the unit vector pointing to camera right, v is the unit vector pointing to camera up, w is the unit vector pointing opposite the view direction (since we use right-hand coordinates), and the camera center is at the origin.

 

Figure 20: Camera view up direction


Note that, as before when our fixed camera faced −Z, our arbitrary-view camera faces −w. And keep in mind that we can, but we don't have to, use world up (0,1,0) to specify vup. This is convenient and will naturally keep your camera horizontally level until you decide to experiment with crazy camera angles.

 
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel
    int    max_depth         = 10;   // Maximum number of ray bounces into scene

    double vfov     = 90;              // Vertical view angle (field of view)
    point3 lookfrom = point3(0,0,0);   // Point camera is looking from
    point3 lookat   = point3(0,0,-1);  // Point camera is looking at
    vec3   vup      = vec3(0,1,0);     // Camera-relative "up" direction

    ...

  private:
    int    image_height;         // Rendered image height
    double pixel_samples_scale;  // Color scale factor for a sum of pixel samples
    point3 center;               // Camera center
    point3 pixel00_loc;          // Location of pixel 0, 0
    vec3   pixel_delta_u;        // Offset to pixel to the right
    vec3   pixel_delta_v;        // Offset to pixel below
    vec3   u, v, w;              // Camera frame basis vectors

    void initialize() {
        image_height = int(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

        pixel_samples_scale = 1.0 / samples_per_pixel;

        center = lookfrom;

        // Determine viewport dimensions.
        auto focal_length = (lookfrom - lookat).length();
        auto theta = degrees_to_radians(vfov);
        auto h = std::tan(theta/2);
        auto viewport_height = 2 * h * focal_length;
        auto viewport_width = viewport_height * (double(image_width)/image_height);

        // Calculate the u,v,w unit basis vectors for the camera coordinate frame.
        w = unit_vector(lookfrom - lookat);
        u = unit_vector(cross(vup, w));
        v = cross(w, u);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        vec3 viewport_u = viewport_width * u;    // Vector across viewport horizontal edge
        vec3 viewport_v = viewport_height * -v;  // Vector down viewport vertical edge

        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        pixel_delta_u = viewport_u / image_width;
        pixel_delta_v = viewport_v / image_height;

        // Calculate the location of the upper left pixel.
        auto viewport_upper_left = center - (focal_length * w) - viewport_u/2 - viewport_v/2;
        pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
    }

    ...
};

Listing 82: [camera.h] Positionable and orientable camera


We'll change back to the prior scene, and use the new viewpoint:

 
int main() {
    hittable_list world;

    auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
    auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5));
    auto material_left   = make_shared<dielectric>(1.50);
    auto material_bubble = make_shared<dielectric>(1.00 / 1.50);
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2), 1.0);

    world.add(make_shared<sphere>(point3( 0.0, -100.5, -1.0), 100.0, material_ground));
    world.add(make_shared<sphere>(point3( 0.0,    0.0, -1.2),   0.5, material_center));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),   0.5, material_left));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),   0.4, material_bubble));
    world.add(make_shared<sphere>(point3( 1.0,    0.0, -1.0),   0.5, material_right));

    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;

    cam.vfov     = 90;
    cam.lookfrom = point3(-2,2,1);
    cam.lookat   = point3(0,0,-1);
    cam.vup      = vec3(0,1,0);

    cam.render(world);
}

Listing 83: [main.cc] Scene with alternate viewpoint

to get:


Image 20: A distant view


And we can change the field of view:

 
cam.vfov = 20;

Listing 84: [main.cc] Change field of view

to get:


Image 21: Zooming in

   

Defocus Blur


Now our final feature: defocus blur. Note, photographers call this depth of field, so be sure to only use the term defocus blur among your raytracing friends.


The reason we have defocus blur in real cameras is that they need a big hole (rather than just a pinhole) through which to gather light. A large hole would defocus everything, but if we stick a lens in front of the film/sensor, there will be a certain distance at which everything is in focus. Objects placed at that distance will appear in focus and will linearly appear blurrier the further they are from that distance. You can think of a lens this way: all light rays coming from a specific point at the focus distance, and that hit the lens, will be bent back to a single point on the image sensor.


We call the distance between the camera center and the plane where everything is in perfect focus the focus distance. Be aware that the focus distance is not usually the same as the focal length: the focal length is the distance between the camera center and the image plane. For our model, however, these two will have the same value, as we will put our pixel grid right on the focus plane, which is focus_distance away from the camera center.


In a physical camera, the focus distance is controlled by the distance between the lens and the film/sensor. That is why you see the lens move relative to the camera when you change what is in focus (that may happen in your phone camera too, but the sensor moves). The "aperture" is a hole that effectively controls how big the lens is. For a real camera, if you need more light you make the aperture bigger, and will get more defocus blur for objects away from the focus distance. For our virtual camera, we can have a perfect sensor and never need more light, so we only use an aperture when we want defocus blur.

   


A Thin Lens Approximation


A real camera has a complicated compound lens. For our code, we could simulate the order: sensor, then lens, then aperture. Then we could figure out where to send the rays, and flip the image after it's computed (the image is projected upside down on the film). Graphics people, however, usually use a thin lens approximation:

 

Figure 21: Camera lens model


We don't need to simulate any of the inside of the camera, because rendering an image outside of the camera adds unneeded complexity. Instead, I usually start rays from an infinitely thin circular "lens", and send them toward the pixel of interest on the focus plane (focal_length away from the lens), where everything on that plane in the 3D world is in perfect focus.


In practice, we accomplish this by placing the viewport in this plane. Putting everything together:


  1. The focus plane is orthogonal to the camera view direction.

  2. The focus distance is the distance between the camera center and the focus plane.

  3. The viewport lies on the focus plane, centered on the camera view direction vector.

  4. The grid of pixel locations lies inside the viewport (located in the 3D world).

  5. Random image sample locations are chosen from the region around the current pixel location.

  6. The camera fires rays from random points on the lens through the current image sample location.

 

Figure 22: Camera focus plane

   

Generating Sample Rays


Without defocus blur, all scene rays originate from the camera center (or lookfrom). In order to accomplish defocus blur, we construct a disk centered at the camera center. The larger the radius, the greater the defocus blur. You can think of our original camera as having a defocus disk of radius zero (no blur at all), so all rays originated at the disk center (lookfrom).


So, how big should the defocus disk be? Since the size of this disk controls how much defocus blur we get, it should be a parameter of the camera class. We could just take the radius of the disk as a camera parameter, but then the blur would vary depending on the projection distance. A slightly easier parameter is to specify the angle of the cone with apex at the viewport center and base (the defocus disk) at the camera center. This should give you more consistent results as you vary the focus distance for a given shot.


Since we'll be choosing random points from the defocus disk, we'll need a function to do that: random_in_unit_disk(). This function works using the same kind of method we used in random_in_unit_sphere(), just for two dimensions.

 
...

inline vec3 unit_vector(const vec3& v) {
    return v / v.length();
}

inline vec3 random_in_unit_disk() {
    while (true) {
        auto p = vec3(random_double(-1,1), random_double(-1,1), 0);
        if (p.length_squared() < 1)
            return p;
    }
}
...

Listing 85: [vec3.h] Generating random points inside the unit disk


Now let's update the camera to originate rays from the defocus disk:

 
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel
    int    max_depth         = 10;   // Maximum number of ray bounces into scene

    double vfov     = 90;              // Vertical view angle (field of view)
    point3 lookfrom = point3(0,0,0);   // Point camera is looking from
    point3 lookat   = point3(0,0,-1);  // Point camera is looking at
    vec3   vup      = vec3(0,1,0);     // Camera-relative "up" direction

    double defocus_angle = 0;  // Variation angle of rays through each pixel
    double focus_dist = 10;    // Distance from camera lookfrom point to plane of perfect focus

    ...

  private:
    int    image_height;         // Rendered image height
    double pixel_samples_scale;  // Color scale factor for a sum of pixel samples
    point3 center;               // Camera center
    point3 pixel00_loc;          // Location of pixel 0, 0
    vec3   pixel_delta_u;        // Offset to pixel to the right
    vec3   pixel_delta_v;        // Offset to pixel below
    vec3   u, v, w;              // Camera frame basis vectors
    vec3   defocus_disk_u;       // Defocus disk horizontal radius
    vec3   defocus_disk_v;       // Defocus disk vertical radius

    void initialize() {
        image_height = int(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

        pixel_samples_scale = 1.0 / samples_per_pixel;

        center = lookfrom;

        // Determine viewport dimensions.
        auto focal_length = (lookfrom - lookat).length();
        auto theta = degrees_to_radians(vfov);
        auto h = std::tan(theta/2);
        auto viewport_height = 2 * h * focus_dist;
        auto viewport_width = viewport_height * (double(image_width)/image_height);

        // Calculate the u,v,w unit basis vectors for the camera coordinate frame.
        w = unit_vector(lookfrom - lookat);
        u = unit_vector(cross(vup, w));
        v = cross(w, u);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        vec3 viewport_u = viewport_width * u;    // Vector across viewport horizontal edge
        vec3 viewport_v = viewport_height * -v;  // Vector down viewport vertical edge

        // Calculate the horizontal and vertical delta vectors to the next pixel.
        pixel_delta_u = viewport_u / image_width;
        pixel_delta_v = viewport_v / image_height;

        // Calculate the location of the upper left pixel.
        auto viewport_upper_left = center - (focus_dist * w) - viewport_u/2 - viewport_v/2;
        pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);

        // Calculate the camera defocus disk basis vectors.
        auto defocus_radius = focus_dist * std::tan(degrees_to_radians(defocus_angle / 2));
        defocus_disk_u = u * defocus_radius;
        defocus_disk_v = v * defocus_radius;
    }

    ray get_ray(int i, int j) const {
        // Construct a camera ray originating from the defocus disk and directed at a randomly
        // sampled point around the pixel location i, j.

        auto offset = sample_square();
        auto pixel_sample = pixel00_loc
                          + ((i + offset.x()) * pixel_delta_u)
                          + ((j + offset.y()) * pixel_delta_v);

        auto ray_origin = (defocus_angle <= 0) ? center : defocus_disk_sample();
        auto ray_direction = pixel_sample - ray_origin;

        return ray(ray_origin, ray_direction);
    }

    vec3 sample_square() const {
        ...
    }

    point3 defocus_disk_sample() const {
        // Returns a random point in the camera defocus disk.
        auto p = random_in_unit_disk();
        return center + (p[0] * defocus_disk_u) + (p[1] * defocus_disk_v);
    }

    color ray_color(const ray& r, int depth, const hittable& world) const {
    ...
    }
};

Listing 86: [camera.h] Camera with adjustable depth-of-field (dof)

Using a big aperture:

 
int main() {
    ...

    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;

    cam.vfov     = 20;
    cam.lookfrom = point3(-2,2,1);
    cam.lookat   = point3(0,0,-1);
    cam.vup      = vec3(0,1,0);

    cam.defocus_angle = 10.0;
    cam.focus_dist    = 3.4;

    cam.render(world);
}

Listing 87: [main.cc] Scene camera with depth-of-field

We get:


Image 22: Spheres with depth-of-field

   

Where Next?

   

A Final Render


Let's make the image on the cover of this book: lots of random spheres.

 
int main() {
    hittable_list world;

    auto ground_material = make_shared<lambertian>(color(0.5, 0.5, 0.5));
    world.add(make_shared<sphere>(point3(0,-1000,0), 1000, ground_material));

    for (int a = -11; a < 11; a++) {
        for (int b = -11; b < 11; b++) {
            auto choose_mat = random_double();
            point3 center(a + 0.9*random_double(), 0.2, b + 0.9*random_double());

            if ((center - point3(4, 0.2, 0)).length() > 0.9) {
                shared_ptr<material> sphere_material;

                if (choose_mat < 0.8) {
                    // diffuse
                    auto albedo = color::random() * color::random();
                    sphere_material = make_shared<lambertian>(albedo);
                    world.add(make_shared<sphere>(center, 0.2, sphere_material));
                } else if (choose_mat < 0.95) {
                    // metal
                    auto albedo = color::random(0.5, 1);
                    auto fuzz = random_double(0, 0.5);
                    sphere_material = make_shared<metal>(albedo, fuzz);
                    world.add(make_shared<sphere>(center, 0.2, sphere_material));
                } else {
                    // glass
                    sphere_material = make_shared<dielectric>(1.5);
                    world.add(make_shared<sphere>(center, 0.2, sphere_material));
                }
            }
        }
    }

    auto material1 = make_shared<dielectric>(1.5);
    world.add(make_shared<sphere>(point3(0, 1, 0), 1.0, material1));

    auto material2 = make_shared<lambertian>(color(0.4, 0.2, 0.1));
    world.add(make_shared<sphere>(point3(-4, 1, 0), 1.0, material2));

    auto material3 = make_shared<metal>(color(0.7, 0.6, 0.5), 0.0);
    world.add(make_shared<sphere>(point3(4, 1, 0), 1.0, material3));

    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 1200;
    cam.samples_per_pixel = 500;
    cam.max_depth         = 50;

    cam.vfov     = 20;
    cam.lookfrom = point3(13,2,3);
    cam.lookat   = point3(0,0,0);
    cam.vup      = vec3(0,1,0);

    cam.defocus_angle = 0.6;
    cam.focus_dist    = 10.0;

    cam.render(world);
}

Listing 88: [main.cc] Final scene


(Note that the code above differs slightly from the project sample code: samples_per_pixel is set to 500 above for a high-quality image that will take quite a while to render. The project source code uses a value of 10 in the interest of reasonable run times while developing and validating.)

This gives:


Image 23: Final scene


An interesting thing you might note is that the glass balls don't really have shadows, which makes them look like they are floating. This is not a bug; you don't see glass balls much in real life, where they also look a bit strange, and indeed seem to float on cloudy days. A point on the big sphere under a glass ball still has lots of light hitting it, because the sky is reordered rather than blocked.

   

Next Steps


You now have a cool ray tracer! What next?

   

Book 2: Ray Tracing: The Next Week


The second book in this series builds on the ray tracer you've developed here, adding a number of new features.

   


Book 3: Ray Tracing: The Rest of Your Life


This book again expands on the content of the second book. A lot of this book is about improving both the rendered image quality and the renderer performance, and it focuses on generating the right rays and accumulating them appropriately.


This book is for readers seriously interested in writing professional-level ray tracers, and/or interested in the foundations needed to implement advanced effects like subsurface scattering or nested dielectrics.

   

Other Directions


There are many additional directions you can take from here, including techniques we haven't covered in this series. These include:


Triangles: Most cool models are in triangle form. The model I/O is the worst part, and almost everybody tries to get someone else's code to do this. This also includes efficiently handling large meshes of triangles, which present their own challenges.


Parallelism: Run N copies of your code on N cores with different random seeds. Average the N runs. This averaging can also be done hierarchically, where N/2 pairs can be averaged to get N/4 images, and pairs of those can be averaged. That method of parallelism should extend well to thousands of cores with very little coding.
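As a hedged sketch of that averaging step (not part of the book's code, and assuming all inputs share the same dimensions), a small standalone program like the following could average several P3 PPM renders produced with different random seeds:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    std::vector<double> sum;
    int width = 0, height = 0;

    // Accumulate every color channel of every input image named on the command line.
    for (int f = 1; f < argc; f++) {
        std::ifstream in(argv[f]);
        std::string magic;
        int w, h, maxval;
        in >> magic >> w >> h >> maxval;
        if (!in || magic != "P3") {
            std::cerr << "Could not read " << argv[f] << " as a P3 PPM\n";
            return 1;
        }
        if (sum.empty()) { width = w; height = h; sum.assign(3*w*h, 0.0); }

        for (auto& channel : sum) {
            int value;
            in >> value;
            channel += value;
        }
    }

    // Write the per-channel average as a new P3 PPM.
    int count = argc - 1;
    std::cout << "P3\n" << width << ' ' << height << "\n255\n";
    for (std::size_t i = 0; i < sum.size(); i += 3)
        std::cout << int(sum[i]/count) << ' ' << int(sum[i+1]/count) << ' ' << int(sum[i+2]/count) << '\n';
}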


Shadow rays: When firing rays toward light sources, you can determine exactly how a particular point is shadowed. With this, you can render crisp or soft shadows, adding another dimension of realism to your scenes.


Have fun, and please send me your cool images!

   

Acknowledgments

Original Manuscript Help

Web Release


Corrections & Improvements

Special Thanks


Thanks to the team at Limnu for help on the figures.


These books are entirely written in Morgan McGuire's fantastic and free Markdeep library. To see what this looks like, view the page source from your browser.


Thanks to Helen Hu for graciously donating her https://github.com/RayTracing/ GitHub organization to this project.

   

Citing This Book

Consistent citations make it easier to identify the source, location and versions of this work. If you are citing this book, we ask that you try to use one of the following forms if possible.

   

Basic Data

   

Snippets

   

Markdown

[_Ray Tracing in One Weekend_](https://raytracing.github.io/books/RayTracingInOneWeekend.html)
   

HTML

<a href="https://raytracing.github.io/books/RayTracingInOneWeekend.html">
    <cite>Ray Tracing in One Weekend</cite>
</a>
   

LaTeX and BibTex

~\cite{Shirley2024RTW1}

@misc{Shirley2024RTW1,
   title = {Ray Tracing in One Weekend},
   author = {Peter Shirley, Trevor David Black, Steve Hollasch},
   year = {2024},
   month = {August},
   note = {\small \texttt{https://raytracing.github.io/books/RayTracingInOneWeekend.html}},
   url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}
   

BibLaTeX

\usepackage{biblatex}

~\cite{Shirley2024RTW1}

@online{Shirley2024RTW1,
   title = {Ray Tracing in One Weekend},
   author = {Peter Shirley, Trevor David Black, Steve Hollasch},
   year = {2024},
   month = {August},
   url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}
   

IEEE

“Ray Tracing in One Weekend.” raytracing.github.io/books/RayTracingInOneWeekend.html
(accessed MMM. DD, YYYY)
   

MLA:

Ray Tracing in One Weekend. raytracing.github.io/books/RayTracingInOneWeekend.html
Accessed DD MMM. YYYY.
