Catlike Coding
published 2016-03-29

Rendering 2

Shader Fundamentals

  • Transform vertices.
  • Color pixels.
  • Use shader properties.
  • Pass data from vertices to fragments.
  • Inspect compiled shader code.
  • Sample a texture, with tiling and offset.

This is the second part of a tutorial series about rendering. The first part was about matrices. This time we'll write our first shader and import a texture.

This tutorial was made using Unity 5.4.0b10.

Texturing a sphere.

Default Scene

When you create a new scene in Unity, you start with a default camera and directional light. Create a simple sphere via GameObject / 3D Object / Sphere, put it at the origin, and place the camera just in front of it.

Default sphere in default scene.

This is a very simple scene, yet there is already a lot of complex rendering going on. To get a good grip on the rendering process, it helps to get rid of all the fancy stuff and first concern ourselves with the fundamentals only.

Stripping It Down

Have a look at the lighting settings for the scene, via Window / Lighting. This will summon a lighting window with three tabs. We're only interested in the Scene tab, which is active by default.

Default lighting settings.

There is a section about environmental lighting, where you can select a skybox. This skybox is currently used for the scene background, for ambient lighting, and for reflections. Set it to none so it is switched off.

While you're at it, you can also switch off the precomputed and real-time global illumination panels. We're not going to use those anytime soon.

No more skybox.

Without a skybox, the ambient source automatically switches to a solid color. The default color is dark gray with a very slight blue tint. Reflections become solid black, as indicated by a warning box.

As you might expect, the sphere has become darker and the background is now a solid color. However, the background is dark blue. Where does that color come from?

Simplified lighting.

The background color is defined per camera. It renders the skybox by default, but it too falls back to a solid color.

Default camera settings.

To further simplify the rendering, deactivate the directional light object, or delete it. This will get rid of the direct lighting in the scene, as well as the shadows that would be cast by it. What's left is the solid background, with the silhouette of the sphere in the ambient color.

In the dark.

From Object to Image

Our very simple scene is drawn in two steps. First, the image is filled with the background color of the camera. Then our sphere's silhouette is drawn on top of that.

How does Unity know that it has to draw a sphere? We have a sphere object, and this object has a mesh renderer component. If this object lies inside the camera's view, it should be rendered. Unity verifies this by checking whether the object's bounding box intersects the camera's view frustum.

Default sphere object.

The transform component is used to alter the position, orientation, and size of the mesh and bounding box. Actually, the entire transformation hierarchy is used, as described in part 1, Matrices. If the object ends up in the camera's view, it is scheduled for rendering.

Finally, the GPU is tasked with rendering the object's mesh. The specific rendering instructions are defined by the object's material. The material references a shader – which is a GPU program – plus any settings it might have.

Who controls what.

Our object currently has the default material, which uses Unity's Standard shader. We're going to replace it with our own shader, which we'll build from the ground up.

Your First Shader

Create a new shader via Assets / Create / Shader / Unlit Shader and name it something like My First Shader.

Your first shader.

Open the shader file and delete its contents, so we can start from scratch.

A shader is defined with the Shader keyword. It is followed by a string that describes the shader menu item that you can use to select this shader. It doesn't need to match the file name. After that comes the block with the shader's contents.

Shader "Custom/My First Shader" {

}

Save the file. You will get a warning that the shader is not supported, because it has no sub-shaders or fallbacks. That's because it's empty.

Although the shader is nonfunctional, we can already assign it to a material. So create a new material via Assets / Create / Material and select our shader from the shader menu.

Material with your shader.

Change our sphere object so it uses our own material, instead of the default material. The sphere will become magenta. This happens because Unity will switch to an error shader, which uses this color to draw your attention to the problem.

Sphere with your material.

The shader error mentioned sub-shaders. You can use these to group multiple shader variants together. This allows you to provide different sub-shaders for different build platforms or levels of detail. For example, you could have one sub-shader for desktops and another for mobiles. We need just one sub-shader block.

Shader "Custom/My First Shader" {

	SubShader {
		
	}
}

The sub-shader has to contain at least one pass. A shader pass is where an object actually gets rendered. We'll use one pass, but it's possible to have more. Having more than one pass means that the object gets rendered multiple times, which is required for a lot of effects.

Shader "Custom/My First Shader" {

	SubShader {

		Pass {

		}
	}
}

Our sphere might now become white, as we're using the default behavior of an empty pass. If that happens, it means that we no longer have any shader errors. However, you might still see old errors in the console. They tend to stick around, not getting cleared when a shader recompiles without errors.

A white sphere.

Shader Programs

It is now time to write our own shader program. We do so with Unity's shading language, which is a variant of the HLSL and CG shading languages. We have to indicate the start of our code with the CGPROGRAM keyword. And we have to terminate with the ENDCG keyword.

		Pass {
			CGPROGRAM

			ENDCG
		}

The shader compiler is now complaining that our shader doesn't have vertex and fragment programs. Shaders consist of two programs each. The vertex program is responsible for processing the vertex data of a mesh. This includes the conversion from object space to display space, just like we did in part 1, Matrices. The fragment program is responsible for coloring individual pixels that lie inside the mesh's triangles.

Vertex and fragment program.

We have to tell the compiler which programs to use, via pragma directives.

			CGPROGRAM

			#pragma vertex MyVertexProgram
			#pragma fragment MyFragmentProgram

			ENDCG

The compiler again complains, this time because it cannot find the programs that we specified. That's because we haven't defined them yet.

The vertex and fragment programs are written as methods, quite like in C#, though they're typically referred to as functions. Let's simply create two empty void methods with the appropriate names.

			CGPROGRAM

			#pragma vertex MyVertexProgram
			#pragma fragment MyFragmentProgram

			void MyVertexProgram () {

			}

			void MyFragmentProgram () {

			}

			ENDCG

At this point the shader will compile, and the sphere will disappear. Or you will still get errors. It depends on which rendering platform your editor is using. If you're using Direct3D 9, you'll probably get errors.

Shader Compilation

Unity's shader compiler takes our code and transforms it into a different program, depending on the target platform. Different platforms require different solutions. For example, Direct3D for Windows, OpenGL for Macs, OpenGL ES for mobiles, and so on. We're not dealing with a single compiler here, but multiple.

Which compiler you end up using depends on what you're targeting. And as these compilers are not identical, you can end up with different results per platform. For example, our empty programs work fine with OpenGL and Direct3D 11, but fail when targeting Direct3D 9.

Select the shader in the editor and look at the inspector window. It displays some information about the shader, including the current compiler errors. There is also a Compiled code entry with a Compile and show code button and a dropdown menu. If you click the button, Unity will compile the shader and open its output in your editor, so you can inspect the generated code.

Shader inspector, with errors for all platforms.

You can select which platforms you manually compile the shader for, via the dropdown menu. The default is to compile for the graphics device that's used by your editor. You can manually compile for other platforms as well, either your current build platform, all platforms you have licenses for, or a custom selection. This enables you to quickly make sure that your shader compiles on multiple platforms, without having to make complete builds.

Selecting OpenGLCore.

To compile the selected programs, close the pop-up and click the Compile and show code button. Clicking the little Show button inside the pop-up will show you the used shader variants, which is not useful right now.

For example, here is the resulting code when our shader is compiled for OpenGLCore.

// Compiled shader for custom platforms, uncompressed size: 0.5KB

// Skipping shader variants that would not be included into build of current scene.

Shader "Custom/My First Shader" {
SubShader { 
 Pass {
  GpuProgramID 16807
Program "vp" {
SubProgram "glcore " {
"#ifdef VERTEX
#version 150
#extension GL_ARB_explicit_attrib_location : require
#extension GL_ARB_shader_bit_encoding : enable
void main()
{
    return;
}
#endif
#ifdef FRAGMENT
#version 150
#extension GL_ARB_explicit_attrib_location : require
#extension GL_ARB_shader_bit_encoding : enable
void main()
{
    return;
}
#endif
"
}
}
Program "fp" {
SubProgram "glcore " {
"// shader disassembly not supported on glcore"
}
}
 }
}
}

The generated code is split into two blocks, vp and fp, for the vertex and fragment programs. However, in the case of OpenGL both programs end up in the vp block. The two main functions correspond to our empty methods. So let's focus on those and ignore the other code.

#ifdef VERTEX
void main()
{
    return;
}
#endif
#ifdef FRAGMENT
void main()
{
    return;
}
#endif

And here is the generated code for Direct3D 11, stripped down to the interesting parts. It looks quite different, but it's obvious that the code doesn't do much.

Program "vp" {
SubProgram "d3d11 " {
      vs_4_0
   0: ret 
}
}
Program "fp" {
SubProgram "d3d11 " {
      ps_4_0
   0: ret 
}
}

As we work on our programs, I will often show the compiled code for OpenGLCore and D3D11, so you can get an idea of what's happening under the hood.

Including Other Files

To produce a functional shader you need a lot of boilerplate code. Code that defines common variables, functions, and other things. Were this a C# program, we'd put that code in other classes. But shaders don't have classes. They're just one big file with all the code, without the grouping provided by classes or namespaces.

Fortunately, we can split the code into multiple files. You can use the #include directive to load a different file's contents into the current file. A typical file to include is UnityCG.cginc, so let's do that.

			CGPROGRAM

			#pragma vertex MyVertexProgram
			#pragma fragment MyFragmentProgram

			#include "UnityCG.cginc"

			void MyVertexProgram () {

			}

			void MyFragmentProgram () {

			}

			ENDCG

UnityCG.cginc is one of the shader include files that are bundled with Unity. It includes a few other essential files, and contains some generic functionality.

Include file hierarchy, starting at UnityCG.

UnityShaderVariables.cginc defines a whole bunch of shader variables that are necessary for rendering, like transformation, camera, and light data. These are all set by Unity when needed.

HLSLSupport.cginc sets things up so you can use the same code no matter which platform you're targeting. So you don't need to worry about using platform-specific data types and such.

UnityInstancing.cginc is specifically for instancing support, which is a specific rendering technique to reduce draw calls. Although it doesn't include the file directly, it depends on UnityShaderVariables.

Note that the contents of these files are effectively copied into your own file, replacing the including directive. This happens during a pre-processing step, which carries out all the pre-processing directives. Those directives are all statements that start with a hash, like #include and #pragma. After that step is finished, the code is processed again, and it is actually compiled.
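
As a minimal sketch of what that means, suppose we had our own include file containing a single variable declaration. The file name MyInclude.cginc is purely hypothetical.

			// Contents of the hypothetical MyInclude.cginc file.
			float4 _SomeVariable;

			// Writing this in a shader ...
			#include "MyInclude.cginc"

			// ... is, after pre-processing, the same as writing this.
			float4 _SomeVariable;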

Producing Output

To render something, our shader programs have to produce results. The vertex program has to return the final coordinates of a vertex. How many coordinates? Four, because we're using 4 by 4 transformation matrices, as described in part 1, Matrices.

Change the function's type from void to float4. A float4 is simply a collection of four floating-point numbers. Just return 0 for now.

			float4 MyVertexProgram () {
				return 0;
			}

We're now getting an error about missing semantics. The compiler sees that we're returning a collection of four floats, but it doesn't know what that data represents. So it doesn't know what the GPU should do with it. We have to be very specific about the output of our program.

In this case, we're trying to output the position of the vertex. We have to indicate this by attaching the SV_POSITION semantic to our method. SV stands for system value, and POSITION for the final vertex position.

			float4 MyVertexProgram () : SV_POSITION {
				return 0;
			}

The fragment program is supposed to output an RGBA color value for one pixel. We can use a float4 for that as well. Returning 0 will produce solid black.

			float4 MyFragmentProgram () {
				return 0;
			}

The fragment program requires semantics as well. In this case, we have to indicate where the final color should be written to. We use SV_TARGET, which is the default shader target. This is the frame buffer, which contains the image that we are generating.

			float4 MyFragmentProgram () : SV_TARGET {
				return 0;
			}

But wait, the output of the vertex program is used as input for the fragment program. This suggests that the fragment program should get a parameter that matches the vertex program's output.

			float4 MyFragmentProgram (float4 position) : SV_TARGET {
				return 0;
			}

It doesn't matter what name we give to the parameter, but we have to make sure to use the correct semantic.

			float4 MyFragmentProgram (
				float4 position : SV_POSITION
			) : SV_TARGET {
				return 0;
			}

Our shader once again compiles without errors, but the sphere has disappeared. This shouldn't be surprising, because we collapse all its vertices to a single point.

If you look at the compiled OpenGLCore programs, you'll see that they now write to output values. And our single values have indeed been replaced with four-component vectors.

#ifdef VERTEX
void main()
{
    gl_Position = vec4(0.0, 0.0, 0.0, 0.0);
    return;
}
#endif
#ifdef FRAGMENT
layout(location = 0) out vec4 SV_TARGET0;
void main()
{
    SV_TARGET0 = vec4(0.0, 0.0, 0.0, 0.0);
    return;
}
#endif

The same is true for the D3D11 programs, although the syntax is different.

Program "vp" {
SubProgram "d3d11 " {
      vs_4_0
      dcl_output_siv o0.xyzw, position
   0: mov o0.xyzw, l(0,0,0,0)
   1: ret 
}
}
Program "fp" {
SubProgram "d3d11 " {
      ps_4_0
      dcl_output o0.xyzw
   0: mov o0.xyzw, l(0,0,0,0)
   1: ret 
}
}

Transforming Vertices

To get our sphere back, our vertex program has to produce a correct vertex position. To do so, we need to know the object-space position of the vertex. We can access it by adding a variable with the POSITION semantic to our function. The position will then be provided as homogeneous coordinates of the form (x, y, z, 1), so its type is float4.

			float4 MyVertexProgram (float4 position : POSITION) : SV_POSITION {
				return 0;
			}

Let's start by directly returning this position.

			float4 MyVertexProgram (float4 position : POSITION) : SV_POSITION {
				return position;
			}

The compiled vertex programs will now have a vertex input and copy it to their output.

in  vec4 in_POSITION0;
void main()
{
    gl_Position = in_POSITION0;
    return;
}
Bind "vertex" Vertex
      vs_4_0
      dcl_input v0.xyzw
      dcl_output_siv o0.xyzw, position
   0: mov o0.xyzw, v0.xyzw
   1: ret
Raw vertex positions.

A black sphere will become visible, but it will be distorted. That's because we're using the object-space positions as if they were display positions. As such, moving the sphere around will make no difference, visually.

We have to multiply the raw vertex position with the model-view-projection matrix. This matrix combines the object's transform hierarchy with the camera transformation and projection, like we did in part 1, Matrices.

The 4 by 4 MVP matrix is defined in UnityShaderVariables as UNITY_MATRIX_MVP. We can use the mul function to multiply it with the vertex position. This will correctly project our sphere onto the display. You can also move, rotate, and scale it and the image will change as expected.

			float4 MyVertexProgram (float4 position : POSITION) : SV_POSITION {
				return mul(UNITY_MATRIX_MVP, position);
			}
Correctly positioned.

If you check the OpenGLCore vertex program, you will notice that a lot of uniform variables have suddenly appeared. Even though they aren't used, and will be ignored, accessing the matrix triggered the compiler to include the whole bunch.

You will also see the matrix multiplication, encoded as a bunch of multiplications and additions.

uniform 	vec4 _Time;
uniform 	vec4 _SinTime;
uniform 	vec4 _CosTime;
uniform 	vec4 unity_DeltaTime;
uniform 	vec3 _WorldSpaceCameraPos;
…
in  vec4 in_POSITION0;
vec4 t0;
void main()
{
    t0 = in_POSITION0.yyyy * glstate_matrix_mvp[1];
    t0 = glstate_matrix_mvp[0] * in_POSITION0.xxxx + t0;
    t0 = glstate_matrix_mvp[2] * in_POSITION0.zzzz + t0;
    gl_Position = glstate_matrix_mvp[3] * in_POSITION0.wwww + t0;
    return;
}

The D3D11 compiler doesn't bother with including unused variables. It encodes the matrix multiplication with a mul and three mad instructions. The mad instruction represents a multiplication followed by an addition.

Bind "vertex" Vertex
ConstBuffer "UnityPerDraw" 352
Matrix 0 [glstate_matrix_mvp]
BindCB  "UnityPerDraw" 0
      vs_4_0
      dcl_constantbuffer cb0[4], immediateIndexed
      dcl_input v0.xyzw
      dcl_output_siv o0.xyzw, position
      dcl_temps 1
   0: mul r0.xyzw, v0.yyyy, cb0[1].xyzw
   1: mad r0.xyzw, cb0[0].xyzw, v0.xxxx, r0.xyzw
   2: mad r0.xyzw, cb0[2].xyzw, v0.zzzz, r0.xyzw
   3: mad o0.xyzw, cb0[3].xyzw, v0.wwww, r0.xyzw
   4: ret

Coloring Pixels

Now that we got the shape right, let's add some color. The simplest is to use a constant color, for example yellow.

			float4 MyFragmentProgram (
				float4 position : SV_POSITION
			) : SV_TARGET {
				return float4(1, 1, 0, 1);
			}
Yellow sphere.

Of course you don't always want yellow objects. Ideally, our shader would support any color. Then you could use the material to configure which color to apply. This is done via shader properties.

Shader Properties

Shader properties are declared in a separate block. Add it at the top of the shader.

Shader "Custom/My First Shader" {

	Properties {
	}

	SubShader {
		…
	}
}

Put a property named _Tint inside the new block. You could give it any name, but the convention is to start with an underscore followed by a capital letter, and lowercase after that. The idea is that nothing else uses this convention, which prevents accidental duplicate names.

	Properties {
		_Tint
	}

The property name must be followed by a string and a type, in parentheses, as if you're invoking a method. The string is used to label the property in the material inspector. In this case, the type is Color.

	Properties {
		_Tint ("Tint", Color)
	}

The last part of the property declaration is the assignment of a default value. Let's set it to white.

	Properties {
		_Tint ("Tint", Color) = (1, 1, 1, 1)
	}

Our tint property should now show up in the properties section of our shader's inspector.

Shader Properties.

When you select your material, you will see the new Tint property, set to white. You can change it to any color you like, for example green.

Material Properties.

Accessing Properties

To actually use the property, we have to add a variable to the shader code. Its name has to exactly match the property name, so it'll be _Tint. We can then simply return that variable in our fragment program.

			#include "UnityCG.cginc"

			float4 _Tint;

			float4 MyVertexProgram (float4 position : POSITION) : SV_POSITION {
				return mul(UNITY_MATRIX_MVP, position);
			}

			float4 MyFragmentProgram (
				float4 position : SV_POSITION
			) : SV_TARGET {
				return _Tint;
			}

Note that the variable has to be defined before it can be used. While you could change the order of fields and methods in a C# class without issues, this is not true for shaders. The compiler works from top to bottom. It will not look ahead.
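
For example, this sketch of the opposite ordering would fail to compile, because _Tint is used before the compiler has seen its definition.

			float4 MyFragmentProgram (
				float4 position : SV_POSITION
			) : SV_TARGET {
				return _Tint; // error, _Tint hasn't been declared yet
			}

			float4 _Tint; // too late, the compiler doesn't look ahead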

The compiled fragment programs now include the tint variable.

uniform 	vec4 _Time;
uniform 	vec4 _SinTime;
uniform 	vec4 _CosTime;
uniform 	vec4 unity_DeltaTime;
uniform 	vec3 _WorldSpaceCameraPos;
…
uniform 	vec4 _Tint;
layout(location = 0) out vec4 SV_TARGET0;
void main()
{
    SV_TARGET0 = _Tint;
    return;
}
ConstBuffer "$Globals" 112
Vector 96 [_Tint]
BindCB  "$Globals" 0
      ps_4_0
      dcl_constantbuffer cb0[7], immediateIndexed
      dcl_output o0.xyzw
   0: mov o0.xyzw, cb0[6].xyzw
   1: ret
Green sphere.

From Vertex To Fragment

So far we've given all pixels the same color, but that is quite limiting. Usually, vertex data plays a big role. For example, we could interpret the position as a color. However, the transformed position isn't very useful. So let's instead use the local position in the mesh as a color. How do we pass that extra data from the vertex program to the fragment program?

The GPU creates images by rasterizing triangles. It takes three processed vertices and interpolates between them. For every pixel covered by the triangle, it invokes the fragment program, passing along the interpolated data.

Interpolating vertex data.
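
Conceptually, each interpolated value is a weighted blend of the three vertex outputs. Here is a rough sketch of that idea, ignoring perspective correction. The weights w0, w1, and w2 are hypothetical values that depend on where the fragment lies inside the triangle and always sum to 1.

			// For a fragment inside a triangle, the GPU blends the three
			// vertex outputs with barycentric weights, w0 + w1 + w2 = 1.
			float3 interpolated =
				w0 * vertex0Data + w1 * vertex1Data + w2 * vertex2Data;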

So the output of the vertex program isn't directly used as input for the fragment program at all. The interpolation process sits in between. Here the SV_POSITION data gets interpolated, but other things can be interpolated as well.

To access the interpolated local position, add a parameter to the fragment program. As we only need the X, Y, and Z components, we can suffice with a float3. We can then output the position as if it were a color. We do have to provide the fourth color component, which can simply remain 1.

			float4 MyFragmentProgram (
				float4 position : SV_POSITION,
				float3 localPosition
			) : SV_TARGET {
				return float4(localPosition, 1);
			}

Once again we have to use semantics to tell the compiler how to interpret this data. We'll use TEXCOORD0.

			float4 MyFragmentProgram (
				float4 position : SV_POSITION,
				float3 localPosition : TEXCOORD0
			) : SV_TARGET {
				return float4(localPosition, 1);
			}

The compiled fragment shaders will now use the interpolated data instead of the uniform tint.

in  vec3 vs_TEXCOORD0;
layout(location = 0) out vec4 SV_TARGET0;
void main()
{
    SV_TARGET0.xyz = vs_TEXCOORD0.xyz;
    SV_TARGET0.w = 1.0;
    return;
}
     ps_4_0
      dcl_input_ps linear v0.xyz
      dcl_output o0.xyzw
   0: mov o0.xyz, v0.xyzx
   1: mov o0.w, l(1.000000)
   2: ret

Of course the vertex program has to output the local position for this to work. We can do that by adding an output parameter to it, with the same TEXCOORD0 semantic. The parameter names of the vertex and fragment functions do not need to match. It's all about the semantics.

			float4 MyVertexProgram (
				float4 position : POSITION,
				out float3 localPosition : TEXCOORD0
			) : SV_POSITION {
				return mul(UNITY_MATRIX_MVP, position);
			}

To pass the data through the vertex program, copy the X, Y, and Z components from position to localPosition.

			float4 MyVertexProgram (
				float4 position : POSITION,
				out float3 localPosition : TEXCOORD0
			) : SV_POSITION {
				localPosition = position.xyz;
				return mul(UNITY_MATRIX_MVP, position);
			}

The extra vertex program output gets included in the compiled shaders, and we'll see our sphere get colorized.

in  vec4 in_POSITION0;
out vec3 vs_TEXCOORD0;
vec4 t0;
void main()
{
    t0 = in_POSITION0.yyyy * glstate_matrix_mvp[1];
    t0 = glstate_matrix_mvp[0] * in_POSITION0.xxxx + t0;
    t0 = glstate_matrix_mvp[2] * in_POSITION0.zzzz + t0;
    gl_Position = glstate_matrix_mvp[3] * in_POSITION0.wwww + t0;
    vs_TEXCOORD0.xyz = in_POSITION0.xyz;
    return;
}
Bind "vertex" Vertex
ConstBuffer "UnityPerDraw" 352
Matrix 0 [glstate_matrix_mvp]
BindCB  "UnityPerDraw" 0
      vs_4_0
      dcl_constantbuffer cb0[4], immediateIndexed
      dcl_input v0.xyzw
      dcl_output_siv o0.xyzw, position
      dcl_output o1.xyz
      dcl_temps 1
   0: mul r0.xyzw, v0.yyyy, cb0[1].xyzw
   1: mad r0.xyzw, cb0[0].xyzw, v0.xxxx, r0.xyzw
   2: mad r0.xyzw, cb0[2].xyzw, v0.zzzz, r0.xyzw
   3: mad o0.xyzw, cb0[3].xyzw, v0.wwww, r0.xyzw
   4: mov o1.xyz, v0.xyzx
   5: ret
Interpreting local positions as colors.

Using Structures

Do you think that the parameter lists of our programs look messy? It will only get worse as we pass more and more data between them. As the vertex output should match the fragment input, it would be convenient if we could define the parameter list in one place. Fortunately, we can do so.

We can define data structures, which are simply a collection of variables. They are akin to structs in C#, except that the syntax is a little different. Here is a struct that defines the data that we're interpolating. Note the usage of a semicolon after its definition.

			struct Interpolators {
				float4 position : SV_POSITION;
				float3 localPosition : TEXCOORD0;
			};

Using this structure makes our code a lot tidier.

			float4 _Tint;
			
			struct Interpolators {
				float4 position : SV_POSITION;
				float3 localPosition : TEXCOORD0;
			};

			Interpolators MyVertexProgram (float4 position : POSITION) {
				Interpolators i;
				i.localPosition = position.xyz;
				i.position = mul(UNITY_MATRIX_MVP, position);
				return i;
			}

			float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
				return float4(i.localPosition, 1);
			}

Tweaking Colors

Because negative colors get clamped to zero, our sphere ends up rather dark. As the default sphere has an object-space radius of ½, the color channels end up somewhere between −½ and ½. We want to move them into the 0–1 range, which we can do by adding ½ to all channels.

				return float4(i.localPosition + 0.5, 1);
Local position recolored.

We can also apply our tint by factoring it into the result.

				return float4(i.localPosition + 0.5, 1) * _Tint;
uniform 	vec4 _Tint;
in  vec3 vs_TEXCOORD0;
layout(location = 0) out vec4 SV_TARGET0;
vec4 t0;
void main()
{
    t0.xyz = vs_TEXCOORD0.xyz + vec3(0.5, 0.5, 0.5);
    t0.w = 1.0;
    SV_TARGET0 = t0 * _Tint;
    return;
}
ConstBuffer "$Globals" 128
Vector 96 [_Tint]
BindCB  "$Globals" 0
      ps_4_0
      dcl_constantbuffer cb0[7], immediateIndexed
      dcl_input_ps linear v0.xyz
      dcl_output o0.xyzw
      dcl_temps 1
   0: add r0.xyz, v0.xyzx, l(0.500000, 0.500000, 0.500000, 0.000000)
   1: mov r0.w, l(1.000000)
   2: mul o0.xyzw, r0.xyzw, cb0[6].xyzw
   3: ret
Local position with a red tint, so only X remains.

Texturing

If you want to add more apparent details and variety to a mesh, without adding more triangles, you can use a texture. You're then projecting an image onto the mesh triangles.

Texture coordinates are used to control the projection. These are 2D coordinate pairs that cover the entire image in a one-unit square area, regardless of the actual aspect ratio of the texture. The horizontal coordinate is known as U and the vertical coordinate as V. Hence, they're usually referred to as UV coordinates.

UV coordinates covering an image.

The U coordinate increases from left to right. So it is 0 at the left side of the image, ½ halfway, and 1 at the right side. The V coordinate works the same way, vertically. It increases from bottom to top, except for Direct3D, where it goes from top to bottom. You almost never need to worry about this difference.

Using UV Coordinates

Unity's default meshes have UV coordinates suitable for texture mapping. The vertex program can access them via a parameter with the TEXCOORD0 semantic.

			Interpolators MyVertexProgram (
				float4 position : POSITION,
				float2 uv : TEXCOORD0
			) {
				Interpolators i;
				i.localPosition = position.xyz;
				i.position = mul(UNITY_MATRIX_MVP, position);
				return i;
			}

Our vertex program now uses more than one input parameter. Once again, we can use a struct to group them.

			struct VertexData {
				float4 position : POSITION;
				float2 uv : TEXCOORD0;
			};
			
			Interpolators MyVertexProgram (VertexData v) {
				Interpolators i;
				i.localPosition = v.position.xyz;
				i.position = mul(UNITY_MATRIX_MVP, v.position);
				return i;
			}
			

Let's just pass the UV coordinates straight to the fragment program, replacing the local position.

			struct Interpolators {
				float4 position : SV_POSITION;
				float2 uv : TEXCOORD0;
//				float3 localPosition : TEXCOORD0;
			};

			Interpolators MyVertexProgram (VertexData v) {
				Interpolators i;
//				i.localPosition = v.position.xyz;
				i.position = mul(UNITY_MATRIX_MVP, v.position);
				i.uv = v.uv;
				return i;
			}

We can make the UV coordinates visible, just like the local position, by interpreting them as color channels. For example, U becomes red, V becomes green, while blue is always 1.

			float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
				return float4(i.uv, 1, 1);
			}

You'll see that the compiled vertex programs now copy the UV coordinates from the vertex data to the interpolator output.

in  vec4 in_POSITION0;
in  vec2 in_TEXCOORD0;
out vec2 vs_TEXCOORD0;
vec4 t0;
void main()
{
    t0 = in_POSITION0.yyyy * glstate_matrix_mvp[1];
    t0 = glstate_matrix_mvp[0] * in_POSITION0.xxxx + t0;
    t0 = glstate_matrix_mvp[2] * in_POSITION0.zzzz + t0;
    gl_Position = glstate_matrix_mvp[3] * in_POSITION0.wwww + t0;
    vs_TEXCOORD0.xy = in_TEXCOORD0.xy;
    return;
}
Bind "vertex" Vertex
Bind "texcoord" TexCoord0
ConstBuffer "UnityPerDraw" 352
Matrix 0 [glstate_matrix_mvp]
BindCB  "UnityPerDraw" 0
      vs_4_0
      dcl_constantbuffer cb0[4], immediateIndexed
      dcl_input v0.xyzw
      dcl_input v1.xy
      dcl_output_siv o0.xyzw, position
      dcl_output o1.xy
      dcl_temps 1
   0: mul r0.xyzw, v0.yyyy, cb0[1].xyzw
   1: mad r0.xyzw, cb0[0].xyzw, v0.xxxx, r0.xyzw
   2: mad r0.xyzw, cb0[2].xyzw, v0.zzzz, r0.xyzw
   3: mad o0.xyzw, cb0[3].xyzw, v0.wwww, r0.xyzw
   4: mov o1.xy, v1.xyxx
   5: ret

Unity wraps the UV coordinates around its sphere, collapsing the top and bottom of the image at the poles. You'll see a seam run from the north to the south pole where the left and right sides of the image are joined. So along that seam you'll have U coordinate values of both 0 and 1. This is done by having duplicate vertices along the seam, being identical except for their U coordinates.

UV as colors, head-on and from above.

Adding a Texture

To add a texture, you need to import an image file. Here is the one I'll use for testing purposes.

Texture for testing.

You can add an image to your project by dragging it onto the project view. You could also do it via the Assets / Import New Asset... menu item. The image will be imported as a 2D texture with the default settings, which are fine.

Imported texture with default settings.

To use the texture, we have to add another shader property. The type of a regular texture property is 2D, as there are also other types of textures. The default value is a string referring to one of Unity's default textures, either white, black, or gray.

The convention is to name the main texture _MainTex, so we'll use that. This also enables you to use the convenient Material.mainTexture property to access it via a script, in case you need to.

	Properties {
		_Tint ("Tint", Color) = (1, 1, 1, 1)
		_MainTex ("Texture", 2D) = "white" {}
	}

Now we can assign the texture to our material, either by dragging or via the Select button.

Texture assigned to our material.

We can access the texture in our shader by using a variable with type sampler2D.

			float4 _Tint;
			sampler2D _MainTex;

Sampling the texture with the UV coordinates is done in the fragment program, by using the tex2D function.

			float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
				return tex2D(_MainTex, i.uv);
			}
uniform  sampler2D _MainTex;
in  vec2 vs_TEXCOORD0;
layout(location = 0) out vec4 SV_TARGET0;
void main()
{
    SV_TARGET0 = texture(_MainTex, vs_TEXCOORD0.xy);
    return;
}
SetTexture 0 [_MainTex] 2D 0
      ps_4_0
      dcl_sampler s0, mode_default
      dcl_resource_texture2d (float,float,float,float) t0
      dcl_input_ps linear v0.xy
      dcl_output o0.xyzw
   0: sample o0.xyzw, v0.xyxx, t0.xyzw, s0
   1: ret
Textured sphere.

Now that the texture is sampled for each fragment, it will appear projected on the sphere. It is wrapped around it, as expected, but it will appear quite wobbly near the poles. Why is this so?

The texture distortion happens because interpolation is linear across triangles. Unity's sphere only has a few triangles near the poles, where the UV coordinates are distorted most. So UV coordinates change nonlinearly from vertex to vertex, but in between vertices their change is linear. As a result, straight lines in the texture suddenly change direction at triangle boundaries.

Linear interpolation across triangles.

Different meshes have different UV coordinates, which produces different mappings. Unity's default sphere uses longitude-latitude texture mapping, while the mesh is a low-resolution cube sphere. It's sufficient for testing, but you're better off using a custom sphere mesh for better results.

Different texture preview shapes.

Finally, we can factor in the tint to adjust the textured appearance of the sphere.

				return tex2D(_MainTex, i.uv) * _Tint;
Textured with yellow tint.

Tiling and Offset

After we added a texture property to our shader, the material inspector didn't just add a texture field. It also added tiling and offset controls. However, changing these 2D vectors currently has no effect.

This extra texture data is stored in the material and can also be accessed by the shader. You do so via a variable that has the same name as the associated texture property, plus the _ST suffix. The type of this variable must be float4.

			sampler2D _MainTex;
			float4 _MainTex_ST;

The tiling vector is used to scale the texture, so it is (1, 1) by default. It is stored in the XY portion of the variable. To use it, simply multiply it with the UV coordinates. This can be done either in the vertex shader or the fragment shader. It makes sense to do it in the vertex shader, so we perform the multiplications only for each vertex instead of for every fragment.

			Interpolators MyVertexProgram (VertexData v) {
				Interpolators i;
				i.position = mul(UNITY_MATRIX_MVP, v.position);
				i.uv = v.uv * _MainTex_ST.xy;
				return i;
			}
Tiling.

The offset portion moves the texture around and is stored in the ZW portion of the variable. It is added to the UV after scaling.

				i.uv = v.uv * _MainTex_ST.xy + _MainTex_ST.zw;
Offset.

UnityCG.cginc contains a handy macro that simplifies this boilerplate for us. We can use it as a convenient shorthand.

				i.uv = TRANSFORM_TEX(v.uv, _MainTex);
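
The macro performs the same scale-and-add that we just wrote by hand. Its definition in UnityCG.cginc is essentially the following. The ## operator glues the property name and the _ST suffix together, so TRANSFORM_TEX(v.uv, _MainTex) expands to v.uv.xy * _MainTex_ST.xy + _MainTex_ST.zw.

			// Transforms 2D UV by a texture property's tiling and offset.
			#define TRANSFORM_TEX(tex,name) (tex.xy * name##_ST.xy + name##_ST.zw)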

Texture Settings

So far we've used the default texture import settings. Let's have a look at a few of the options, to see what they do.

Default import settings.

The Wrap Mode dictates what happens when sampling with UV coordinates that lie outside of the 0–1 range. When the wrap mode is set to clamp, the UV coordinates are constrained to remain inside the 0–1 range. This means that the pixels beyond the edge are the same as those that lie on the edge. When the wrap mode is set to repeat, the UV coordinates wrap around. This means that the pixels beyond the edge are the same as those on the opposite side of the texture. The default mode is to repeat the texture, which causes it to tile.
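
In shader terms, the two modes roughly correspond to taking the fractional part of the coordinates versus clamping them. A sketch, with uv standing in for some sample coordinates:

			float2 repeated = frac(uv);    // repeat, wrapping around past the 0–1 range
			float2 clamped = saturate(uv); // clamp, constraining to the 0–1 range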

If you don't have a tiling texture, you'd want to clamp the UV coordinates instead. This prevents the texture from repeating, instead the texture boundary will be replicated, causing it to look stretched.

Tiling at (2, 2) while clamped.

Mipmaps and Filtering

What happens when the pixels of a texture – texels – don't exactly match the pixels they are projected onto? There is a mismatch, which has to be resolved somehow. How this is done is controlled by the Filter Mode.

The most straightforward filtering mode is Point (no filter). This means that when a texture is sampled at some UV coordinates, the nearest texel is used. This will give the texture a blocky appearance, unless texels map exactly to display pixels. So it is typically used for pixel-perfect rendering, or when a blocky style is desired.

The default is to use bilinear filtering. When a texture is sampled somewhere in between two texels, those two texels are interpolated. As textures are 2D, this happens both along the U and the V axis. Hence bilinear filtering, not just linear filtering.
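
As a sketch of the idea, suppose t00, t10, t01, and t11 are the four texels nearest the sample point, and f holds the fractional position between them. Bilinear filtering then amounts to three linear interpolations.

			float4 bottom = lerp(t00, t10, f.x);      // blend along U ...
			float4 top = lerp(t01, t11, f.x);
			float4 filtered = lerp(bottom, top, f.y); // ... then along V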

This approach works when the texel density is less than the display pixel density, so when you're zooming in to the texture. The result will look blurry. It doesn't work in the opposite case, when you're zooming out of the texture. Adjacent display pixels will end up with samples that are more than one texel apart. This means that parts of the texture will be skipped, which will cause harsh transitions, as if the image was sharpened.

The solution to this problem is to use a smaller texture whenever the texel density becomes too high. The smaller the texture appears on the display, the smaller a version of it should be used. These smaller versions are known as mipmaps and are automatically generated for you. Each successive mipmap has half the width and height of the previous level. So when the original texture size is 512x512, the mipmaps are 256x256, 128x128, 64x64, 32x32, 16x16, 8x8, 4x4, and 2x2.

Mipmap levels.

You can disable mipmaps if you like. First, you have to set the Texture Type to Advanced. Then you can disable the mipmaps and apply the change. A good way to see the difference is to use a flat object like a quad and look at it from an angle.

With and without mipmaps.

So which mipmap level is used where, and how different do they look? We can make the transitions visible by enabling Fadeout Mip Maps in the advanced texture settings. When enabled, a Fade Range slider will show up in the inspector. It defines a mipmap range across which the mipmaps will transition to solid gray. By making this transition a single step, you will get a sharp transition to gray. The further you move the one-step range to the right, the later the transition will occur.

Advanced settings for mipmaps.

To get a good view of this effect, set the texture's Aniso Level to 0 for now.

Successive mipmap levels: mip 3, mip 4, and mip 5.

Once you know where the various mipmap levels are, you should be able to see the sudden change in texture quality between them. As the texture projection gets smaller, the texel density increases, which makes it look sharper. Until suddenly the next mipmap level kicks in, and it becomes blurry again.

So without mipmaps you go from blurry to sharp, to too sharp. With mipmaps you go from blurry to sharp, to suddenly blurry again, to sharp, to suddenly blurry again, and so on.

Those blurry-sharp bands are characteristic for bilinear filtering. You can get rid of them by switching the filter mode to Trilinear. This works the same as bilinear filtering, but it also interpolates between adjacent mipmap levels. Hence trilinear. This makes sampling more expensive, but it smoothes the transitions between mipmap levels.

Trilinear filtering between normal and gray mipmaps.
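
In sketch form, trilinear filtering adds one more interpolation on top of the bilinear ones. Here BilinearSample stands in for a hypothetical helper and mipFraction for the fractional mipmap level.

			float4 fine = BilinearSample(mipLevel, uv);
			float4 coarse = BilinearSample(mipLevel + 1, uv);
			float4 filtered = lerp(fine, coarse, mipFraction); // the third linear step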

Another useful technique is anisotropic filtering. You might have noticed that when you set it to 0, the texture became blurrier. This has to do with the selection of the mipmap level.

When a texture gets projected at an angle, due to perspective, you often end up with one of its dimensions being distorted much more than the other. A good example is a textured ground plane. At a distance, the forward-backward dimension of the texture will appear much smaller than the left-right dimension.

Which mipmap level gets selected is based on the worst dimension. If the difference is large, then you will get a result that is very blurry in one dimension. Anisotropic filtering mitigates this by decoupling the dimensions. Besides uniformly scaling down the texture, it also provides versions that are scaled different amounts in either dimension. So you don't just have a mipmap for 256x256, but also for 256x128, 256x64, and so on.

Without and with anisotropic filtering.

Note that those extra mipmaps aren't pre-generated like the regular mipmaps. Instead, they are simulated by performing extra texture samples. So they don't require more space, but are more expensive to sample.

Anisotropic bilinear filtering, transitioning to gray.

How deep the anisotropic filtering goes is controlled by Aniso Level. At 0, it is disabled. At 1, it becomes enabled and provides the minimum effect. At 16, it is at its maximum. However, these settings are influenced by the project's quality settings.

You can access the quality settings via Edit / Project Settings / Quality. You will find an Anisotropic Textures setting in the Rendering section.

Rendering quality settings.

When anisotropic textures are disabled, no anisotropic filtering will happen, regardless of a texture's settings. When it is set to Per Texture, it is fully controlled by each individual texture. It can also be set to Forced On, which will act as if each texture's Aniso Level is set to at least 9. However, a texture with an Aniso Level set to 0 still won't use anisotropic filtering.

The next tutorial is Combining Textures.
