Rendering 3
Combining Textures
- Sample multiple textures.
- Apply a detail texture.
- Deal with colors in linear space.
- Use a splat map.
This is the third part of a tutorial series about rendering. The previous part introduced shaders and textures. We've seen how you can use a single texture to make a flat surface appear more complex. Now we go beyond that and use multiple textures at the same time.
This tutorial was made using Unity 5.4.0b15.

Detail Textures
Textures are nice, but they have limitations. They have a fixed amount of texels, no matter at what size they are displayed. If they are rendered small, we can use mipmaps to keep them looking good. But when they are rendered large, they become blurry. We cannot invent extra details out of nothing, so there's no way around that. Or is there?
Of course, we could use a larger texture. More texels means more details. But there's a limit to a texture's size. And it is kind of wasteful to store a lot of extra data that is only noticeable up close.
Another way to increase the texel density is to tile a texture. Then you can get as small as you want, but you will obviously get a repeating pattern. This might not be noticeable up close, though. After all, when you're standing with your nose touching a wall, you'll only see a very small section of the entire wall.
So we should be able to add details by combining an untiled texture with a tiled texture. To try this out, let's use a texture with an obvious pattern. Here's a checkered grid. Grab it and put it in your project, with the default import settings. I perturbed the grid lines a bit, to make it more interesting and to make it possible to perceive its tiling.

The slightly perturbed grid texture.
Duplicate My First Shader and rename it to Textured With Detail. We'll use this new shader from now on.
Shader "Custom/Textured With Detail" {

	Properties {
		_Tint ("Tint", Color) = (1, 1, 1, 1)
		_MainTex ("Texture", 2D) = "white" {}
	}

	SubShader {
		…
	}
}
Create a new material with this shader, then assign the grid texture to it.


Detail material with the grid texture.
Assign the material to a quad and have a look at it. From a distance, it will look fine. But get too close, and it will become blurry and fuzzy. Besides the lack of details, artifacts caused by texture compression will also become obvious.

Close-up of the grid, showing low texel density and DXT1 artifacts.
Multiple Texture Samples
Right now we're taking a single texture sample and using that as the result of our fragment shader. As we're going to change that approach, it's handy to store the sampled color in a temporary variable.
float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
	float4 color = tex2D(_MainTex, i.uv) * _Tint;
	return color;
}
We reasoned that we can increase the texel density by introducing a tiled texture. Let's simply perform a second texture sample which tiles ten times as much as the original sample. Actually replace the original color, don't add to it just yet.
float4 color = tex2D(_MainTex, i.uv) * _Tint;
color = tex2D(_MainTex, i.uv * 10);
return color;
This produces a much smaller grid. You can get much closer to it before it starts to look bad. Of course, because the grid is irregular, it is obviously a repeating pattern.

Notice that at this point we're performing two texture samples, but end up using only one of them. This seems wasteful. Is it? Take a look at the compiled fragment programs. Just like in the previous tutorial, I'll include the relevant compiled code for OpenGLCore and Direct3D 11.
uniform sampler2D _MainTex;
in vec2 vs_TEXCOORD0;
layout(location = 0) out vec4 SV_TARGET0;
vec2 t0;
void main()
{
	t0.xy = vs_TEXCOORD0.xy * vec2(10.0, 10.0);
	SV_TARGET0 = texture(_MainTex, t0.xy);
	return;
}
SetTexture 0 [_MainTex] 2D 0
ps_4_0
dcl_sampler s0, mode_default
dcl_resource_texture2d (float,float,float,float) t0
dcl_input_ps linear v0.xy
dcl_output o0.xyzw
dcl_temps 1
0: mul r0.xy, v0.xyxx, l(10.000000, 10.000000, 0.000000, 0.000000)
1: sample o0.xyzw, r0.xyxx, t0.xyzw, s0
2: ret
Did you notice that there is only one texture sample in the compiled code? That's right, the compiler removed the unnecessary code for us! Basically, it works its way back from the end result, and discards anything that ends up unused.
Of course, we don't want to replace the original sample. We want to combine both samples. Let's do so by multiplying them together. But once again, let's add a twist. Sample the texture twice, with the exact same UV coordinates.
float4 color = tex2D(_MainTex, i.uv) * _Tint;
color *= tex2D(_MainTex, i.uv);
return color;
What does the shader compiler make of that?
uniform sampler2D _MainTex;
in vec2 vs_TEXCOORD0;
layout(location = 0) out vec4 SV_TARGET0;
mediump vec4 t16_0;
lowp vec4 t10_0;
void main()
{
t10_0 = texture(_MainTex, vs_TEXCOORD0.xy);
t16_0 = t10_0 * t10_0;
SV_TARGET0 = t16_0 * _Tint;
return;
}
SetTexture 0 [_MainTex] 2D 0
ConstBuffer "$Globals" 144
Vector 96 [_Tint]
BindCB "$Globals" 0
ps_4_0
dcl_constantbuffer cb0[7], immediateIndexed
dcl_sampler s0, mode_default
dcl_resource_texture2d (float,float,float,float) t0
dcl_input_ps linear v0.xy
dcl_output o0.xyzw
dcl_temps 1
0: sample r0.xyzw, v0.xyxx, t0.xyzw, s0
1: mul r0.xyzw, r0.xyzw, r0.xyzw
2: mul o0.xyzw, r0.xyzw, cb0[6].xyzw
3: ret
Once again, we end up with a single texture sample. The compiler detected the duplicate code and optimized it. So the texture is only sampled once. The result is stored in a register and reused. The compiler is smart enough to detect such code duplications, even when you use intermediary variables and such. It traces everything back to its original input. It then reorganizes everything as efficiently as possible.
Now put back the ×10 UV coordinates for the second sample. We'll finally see the large and small grids combined.
color *= tex2D(_MainTex, i.uv * 10);

Combining two different tilings.
As the texture samples are no longer the same, the compiler will have to use two of them as well.
uniform sampler2D _MainTex;
in vec2 vs_TEXCOORD0;
layout(location = 0) out vec4 SV_TARGET0;
vec4 t0;
lowp vec4 t10_0;
vec2 t1;
lowp vec4 t10_1;
void main()
{
	t10_0 = texture(_MainTex, vs_TEXCOORD0.xy);
	t0 = t10_0 * _Tint;
	t1.xy = vs_TEXCOORD0.xy * vec2(10.0, 10.0);
	t10_1 = texture(_MainTex, t1.xy);
	SV_TARGET0 = t0 * t10_1;
	return;
}
SetTexture 0 [_MainTex] 2D 0
ConstBuffer "$Globals" 144
Vector 96 [_Tint]
BindCB "$Globals" 0
ps_4_0
dcl_constantbuffer cb0[7], immediateIndexed
dcl_sampler s0, mode_default
dcl_resource_texture2d (float,float,float,float) t0
dcl_input_ps linear v0.xy
dcl_output o0.xyzw
dcl_temps 2
0: sample r0.xyzw, v0.xyxx, t0.xyzw, s0
1: mul r0.xyzw, r0.xyzw, cb0[6].xyzw
2: mul r1.xy, v0.xyxx, l(10.000000, 10.000000, 0.000000, 0.000000)
3: sample r1.xyzw, r1.xyxx, t0.xyzw, s0
4: mul o0.xyzw, r0.xyzw, r1.xyzw
5: ret
Separate Detail Texture
When you multiply two textures together, the result will be darker. Unless at least one of the textures is white. That's because each color channel of a texel has a value between 0 and 1. When adding details to a texture, you might want to do so by darkening, but also by brightening.
To brighten the original texture, you need values that are greater than 1. Let's say up to 2, which would double the original color. This can be supported by doubling the detail sample before multiplying it with the original color.
color *= tex2D(_MainTex, i.uv * 10) * 2;

This approach requires that we reinterpret the texture used for the details. Multiplying by 1 does not change anything. But as we double the detail sample, this is now true for ½. This means that a solid gray – not white – texture will produce no change. All values below ½ will darken the result, while anything above ½ will brighten it.
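To make the neutral point concrete, here is a minimal Python sketch of the doubled multiply on a single color channel (a hypothetical helper, not part of the shader code):

```python
# Hypothetical helper illustrating the "multiply by detail, times 2" idea
# on one color channel; 0.5 (solid gray) is the neutral detail value.
def apply_detail(base, detail):
    return base * detail * 2

print(apply_detail(0.8, 0.5))   # gray detail leaves the base unchanged: 0.8
print(apply_detail(0.8, 0.25))  # dark detail darkens: 0.4
print(apply_detail(0.4, 1.0))   # white detail doubles the base: 0.8
```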
So we need a special detail texture, which is centered around gray. Here is such a texture for the grid.

To use this separate detail texture, we have to add a second texture property to our shader. Use gray as its default, as that doesn't change the main texture's appearance.
Properties {
	_Tint ("Tint", Color) = (1, 1, 1, 1)
	_MainTex ("Texture", 2D) = "white" {}
	_DetailTex ("Detail Texture", 2D) = "gray" {}
}
Assign the detail texture to our material and set its tiling to 10.

Of course we have to add variables to access the detail texture and its tiling and offset data.
sampler2D _MainTex, _DetailTex;
float4 _MainTex_ST, _DetailTex_ST;
Using Two UV Pairs
Instead of using a hard-coded multiplication by 10, we should use the tiling and offset data of the detail texture. We can compute the final detail UV like the main UV, in the vertex program. This means that we need to interpolate an additional UV pair.
struct Interpolators {
	float4 position : SV_POSITION;
	float2 uv : TEXCOORD0;
	float2 uvDetail : TEXCOORD1;
};
The new detail UV are created by transforming the original UV with the detail texture's tiling and offset.
Interpolators MyVertexProgram (VertexData v) {
	Interpolators i;
	i.position = mul(UNITY_MATRIX_MVP, v.position);
	i.uv = TRANSFORM_TEX(v.uv, _MainTex);
	i.uvDetail = TRANSFORM_TEX(v.uv, _DetailTex);
	return i;
}
uniform vec4 _Tint;
uniform vec4 _MainTex_ST;
uniform vec4 _DetailTex_ST;
in vec4 in_POSITION0;
in vec2 in_TEXCOORD0;
out vec2 vs_TEXCOORD0;
out vec2 vs_TEXCOORD1;
vec4 t0;
void main()
{
	t0 = in_POSITION0.yyyy * glstate_matrix_mvp[1];
	t0 = glstate_matrix_mvp[0] * in_POSITION0.xxxx + t0;
	t0 = glstate_matrix_mvp[2] * in_POSITION0.zzzz + t0;
	gl_Position = glstate_matrix_mvp[3] * in_POSITION0.wwww + t0;
	vs_TEXCOORD0.xy = in_TEXCOORD0.xy * _MainTex_ST.xy + _MainTex_ST.zw;
	vs_TEXCOORD1.xy = in_TEXCOORD0.xy * _DetailTex_ST.xy + _DetailTex_ST.zw;
	return;
}
Vector 112 [_MainTex_ST]
Vector 128 [_DetailTex_ST]
ConstBuffer "UnityPerDraw" 352
Matrix 0 [glstate_matrix_mvp]
BindCB "$Globals" 0
BindCB "UnityPerDraw" 1
vs_4_0
dcl_constantbuffer cb0[9], immediateIndexed
dcl_constantbuffer cb1[4], immediateIndexed
dcl_input v0.xyzw
dcl_input v1.xy
dcl_output_siv o0.xyzw, position
dcl_output o1.xy
dcl_output o1.zw
dcl_temps 1
0: mul r0.xyzw, v0.yyyy, cb1[1].xyzw
1: mad r0.xyzw, cb1[0].xyzw, v0.xxxx, r0.xyzw
2: mad r0.xyzw, cb1[2].xyzw, v0.zzzz, r0.xyzw
3: mad o0.xyzw, cb1[3].xyzw, v0.wwww, r0.xyzw
4: mad o1.xy, v1.xyxx, cb0[7].xyxx, cb0[7].zwzz
5: mad o1.zw, v1.xxxy, cb0[8].xxxy, cb0[8].zzzw
6: ret
Note how the two UV outputs are defined in both compiled vertex programs. OpenGLCore uses two outputs, vs_TEXCOORD0 and vs_TEXCOORD1, as you would expect. In contrast, Direct3D 11 uses only a single output, o1. How this works is explained by the output comment section that I usually omit from these code snippets.
// Output signature:
//
// Name                 Index   Mask Register SysValue  Format   Used
// -------------------- ----- ------ -------- -------- ------- ------
// SV_POSITION              0   xyzw        0      POS   float   xyzw
// TEXCOORD                 0   xy          1     NONE   float   xy
// TEXCOORD                 1     zw        1     NONE   float     zw
What this means is that both UV pairs get packed into a single output register. The first ends up in the X and Y channels, and the second in the Z and W channels. This is possible because the registers are always groups of four numbers. The Direct3D 11 compiler took advantage of that.
Now we can use the extra UV pair in the fragment program.
float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
float4 color = tex2D(_MainTex, i.uv) * _Tint;
color *= tex2D(_DetailTex, i.uvDetail) * 2;
return color;
}
uniform vec4 _Tint;
uniform vec4 _MainTex_ST;
uniform vec4 _DetailTex_ST;
uniform sampler2D _MainTex;
uniform sampler2D _DetailTex;
in vec2 vs_TEXCOORD0;
in vec2 vs_TEXCOORD1;
layout(location = 0) out vec4 SV_TARGET0;
vec4 t0;
lowp vec4 t10_0;
lowp vec4 t10_1;
void main()
{
t10_0 = texture(_MainTex, vs_TEXCOORD0.xy);
t0 = t10_0 * _Tint;
t10_1 = texture(_DetailTex, vs_TEXCOORD1.xy);
t0 = t0 * t10_1;
SV_TARGET0 = t0 + t0;
return;
}
SetTexture 0 [_MainTex] 2D 0
SetTexture 1 [_DetailTex] 2D 1
ConstBuffer "$Globals" 144
Vector 96 [_Tint]
BindCB "$Globals" 0
ps_4_0
dcl_constantbuffer cb0[7], immediateIndexed
dcl_sampler s0, mode_default
dcl_sampler s1, mode_default
dcl_resource_texture2d (float,float,float,float) t0
dcl_resource_texture2d (float,float,float,float) t1
dcl_input_ps linear v0.xy
dcl_input_ps linear v0.zw
dcl_output o0.xyzw
dcl_temps 2
0: sample r0.xyzw, v0.xyxx, t0.xyzw, s0
1: mul r0.xyzw, r0.xyzw, cb0[6].xyzw
2: sample r1.xyzw, v0.zwzz, t1.xyzw, s1
3: mul r0.xyzw, r0.xyzw, r1.xyzw
4: add o0.xyzw, r0.xyzw, r0.xyzw
5: ret
Our shader is now fully functional. The main texture becomes both brighter and dimmer based on the detail texture.


Fading Details
The idea of adding details was that they improve the material's appearance up close or zoomed in. They're not supposed to be visible far away or zoomed out, because that makes the tiling obvious. So we need a way to fade the details away as the display size of the texture decreases. We can do so by fading the detail texture to gray, as that results in no color change.
We have done this before! All we need to do is enable Fadeout Mip Maps in the detail texture's import settings. Note that this also automatically switches the filter mode to trilinear, so that the fade to gray is gradual.


The grid makes the transition from detailed to not detailed very obvious, but you normally wouldn't notice it. For example, here is a main and a detail texture for a marble material. Grab them and use the same texture import settings we used for the grid textures.


Once our material uses these textures, the fading of the detail texture is no longer noticeable.


However, thanks to the detail texture, the marble looks much better up close.


Close-ups without and with details.
Linear Color Space
Our shader works fine while we're rendering our scene in gamma color space, but it will go wrong if we switch to linear color space. Which color space you use is a project-wide setting. It is configured in the Other Settings panel of the player settings, which you can access via Edit / Project Settings / Player.

Choosing a color space.
Unity assumes that textures and colors are stored as sRGB. When rendering in gamma space, shaders directly access the raw color and texture data. This is what we assumed up to this point.
When rendering in linear space, this is no longer true. The GPU will convert texture samples to linear space. Also, Unity will convert material color properties to linear space as well. The shader then operates with these linear colors. After that, the output of the fragment program will be converted back to gamma space.
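These conversions can be sketched in Python, using the common gamma-2.2 approximation of the sRGB curve (the real transfer function is piecewise, with a small linear toe, but 2.2 matches the numbers used below):

```python
# Approximate sRGB conversions using a plain 2.2 gamma curve.
def srgb_to_linear(c):
    return c ** 2.2

def linear_to_srgb(c):
    return c ** (1 / 2.2)

stored = 0.5                     # texel value as stored in the sRGB texture
linear = srgb_to_linear(stored)  # what the shader receives, roughly 0.22
output = linear_to_srgb(linear)  # the fragment output converted back: 0.5
```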
One of the advantages of using linear colors is that it enables more realistic lighting calculations. That's because light interactions are linear in real life, not exponential. Unfortunately, it screws up our detail material. After switching to linear space, it becomes much darker. Why does this happen?


Gamma vs. linear space.
Because we double the detail texture sample, a value of ½ results in no change to the main texture. However, the conversion to linear space changes this to roughly ½^2.2 ≈ 0.22. Doubling that gives roughly 0.44, which is much less than 1. That explains the darkening.
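Checking the arithmetic, again with the gamma-2.2 approximation:

```python
# The numbers behind the darkening.
half_linear = 0.5 ** 2.2   # gray after conversion to linear space, ~0.22
doubled = 2 * half_linear  # what the x2 detail multiply yields, ~0.44
boost = 1 / half_linear    # the factor that restores neutrality, ~4.59

print(round(half_linear, 2))  # 0.22
print(round(doubled, 2))      # 0.44
print(round(boost, 2))        # 4.59
```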
We could solve this error by enabling Bypass sRGB Sampling in the detail texture's import settings. This prevents the conversion from gamma to linear space, so the shader will always access the raw image data. However, the detail texture is an sRGB image, so the result would still be wrong.
The best solution is to realign the detail colors so they're centered around 1 again. We can do that by multiplying by 1 / ½^2.2 ≈ 4.59, instead of by 2. But we must only do this when we are rendering in linear space.
Fortunately, UnityCG defines a uniform variable that contains the correct numbers to multiply with. It is a float4 which has either 2 or roughly 4.59 in its rgb components, as appropriate. As gamma correction is not applied to the alpha channel, its fourth component is always 2.
color *= tex2D(_DetailTex, i.uvDetail) * unity_ColorSpaceDouble;
With that change, our detail material will look the same no matter which color space we're rendering in.
Texture Splatting
A limitation of detail textures is that the same details are used for the entire surface. This works fine for a uniform surface, like a slab of marble. However, if your material does not have a uniform appearance, you don't want to use the same details everywhere.
Consider a large terrain. It can have grass, sand, rocks, snow, and so on. You want those terrain types to be fairly detailed up close. But a texture that covers the entire terrain will never have enough texels for that. You can solve that by using a separate texture for each surface type, and tile those. But how do you know which texture to use where?
Let's assume that we have a terrain with two different surface types. At every point, we have to decide which surface texture to use. Either the first, or the second. We could represent that with a boolean value. If it is set to true, we use the first texture, otherwise the second. We can use a grayscale texture to store this choice. A value of 1 represents the first texture, while a value of 0 represents the second texture. In fact, we can use these values to linearly interpolate between both textures. Then values in between 0 and 1 represent a blend between both textures. This makes smooth transitions possible.
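The interpolation described above amounts to a per-channel lerp; here is a minimal Python sketch with hypothetical single-channel colors:

```python
# Per-channel linear interpolation driven by one splat-map channel.
def splat_blend(t1, t2, weight):
    return t1 * weight + t2 * (1 - weight)

print(splat_blend(0.9, 0.1, 1.0))  # weight 1: first texture only, 0.9
print(splat_blend(0.9, 0.1, 0.0))  # weight 0: second texture only, 0.1
print(splat_blend(0.9, 0.1, 0.5))  # in between: a smooth mix of both
```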
Such a texture is known as a splat map. It's like you splatter multiple terrain features onto a canvas. Because of the interpolation, this map doesn't even require a high resolution. Here's a small example map.

After adding it to your project, switch its import type to advanced. Enable Bypass sRGB Sampling and indicate that its mipmaps should be generated In Linear Space. This is required because the texture doesn't represent sRGB colors, but choices. So it should not be converted when rendering in linear space. Also, set its Wrap Mode to clamp, as we're not going to tile this map.

Create a new Texture Splatting shader by duplicating My First Shader and changing its name. Because terrains are typically not uniformly tinted, let's get rid of that functionality.
Shader "Custom/Texture Splatting" {

	Properties {
//		_Tint ("Tint", Color) = (1, 1, 1, 1)
		_MainTex ("Splat Map", 2D) = "white" {}
	}

	SubShader {

		Pass {
			CGPROGRAM

			#pragma vertex MyVertexProgram
			#pragma fragment MyFragmentProgram

			#include "UnityCG.cginc"

//			float4 _Tint;
			sampler2D _MainTex;
			float4 _MainTex_ST;

			struct VertexData {
				float4 position : POSITION;
				float2 uv : TEXCOORD0;
			};

			struct Interpolators {
				float4 position : SV_POSITION;
				float2 uv : TEXCOORD0;
			};

			Interpolators MyVertexProgram (VertexData v) {
				Interpolators i;
				i.position = mul(UNITY_MATRIX_MVP, v.position);
				i.uv = TRANSFORM_TEX(v.uv, _MainTex);
				return i;
			}

			float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
				return tex2D(_MainTex, i.uv); // * _Tint;
			}

			ENDCG
		}
	}
}
Make a new material that uses this shader, and assign the splat map as its main texture. Because we haven't changed the shader yet, it will just show the map.


Showing the splat map.
Adding Textures
To be able to choose between two textures, we have to add them as properties to our shader. Let's just name them Texture1 and Texture2.
Properties {
	_MainTex ("Splat Map", 2D) = "white" {}
	_Texture1 ("Texture 1", 2D) = "white" {}
	_Texture2 ("Texture 2", 2D) = "white" {}
}
You can use any texture you want for them. I simply picked the grid and marble textures that we already have.

Two additional textures.
Of course we get tiling and offset controls for each texture that we add to the shader. We could indeed support separate tiling and offset for every texture individually. But that would require us to pass more data from the vertex to the fragment shader, or to calculate the UV adjustments in the pixel shader. This is fine, but typically all textures of a terrain are tiled the same. And a splat map is not tiled at all. So we need only one instance of tiling and offset controls.
You can add attributes to shader properties, just like in C# code. The NoScaleOffset attribute will do as its name suggests. Yes, it does refer to tiling and offset as scale and offset. It's not very consistent naming.
Let's add this attribute to our extra textures, and keep the tiling and offset inputs for the main texture.
Properties {
	_MainTex ("Splat Map", 2D) = "white" {}
	[NoScaleOffset] _Texture1 ("Texture 1", 2D) = "white" {}
	[NoScaleOffset] _Texture2 ("Texture 2", 2D) = "white" {}
}
The idea is that the tiling and offset controls appear at the top of our shader inspector. While they're next to the splat map, we'll actually apply them to the other textures. Put in some tiling, like 4.

No extra tiling and offset controls.
Now we have to add the sampler variables to our shader code. But we don't have to add their corresponding _ST
variables.
现在我们必须将采样器变量添加到我们的着色器代码中。但我们不必添加它们相应的 _ST
变量。
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _Texture1, _Texture2;
To check that we can indeed sample both textures this way, change the fragment shader so it adds them together.
float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
	return tex2D(_Texture1, i.uv) + tex2D(_Texture2, i.uv);
}

Using the Splat Map
To sample the splat map, we have to also pass the unmodified UV from the vertex program to the fragment program.
struct Interpolators {
	float4 position : SV_POSITION;
	float2 uv : TEXCOORD0;
	float2 uvSplat : TEXCOORD1;
};

Interpolators MyVertexProgram (VertexData v) {
	Interpolators i;
	i.position = mul(UNITY_MATRIX_MVP, v.position);
	i.uv = TRANSFORM_TEX(v.uv, _MainTex);
	i.uvSplat = v.uv;
	return i;
}
We can then sample the splat map before sampling the other textures.
float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
	float4 splat = tex2D(_MainTex, i.uvSplat);
	return tex2D(_Texture1, i.uv) + tex2D(_Texture2, i.uv);
}
We decided that a value of 1 represents the first texture. As our splat map is monochrome, we can use any of the RGB channels to retrieve this value. Let's use the R channel and multiply it with the texture.
return tex2D(_Texture1, i.uv) * splat.r + tex2D(_Texture2, i.uv);

Modulating the first texture.
The first texture is now modulated by the splat map. To complete the interpolation, we have to multiply the other texture with 1 - R.
return
tex2D(_Texture1, i.uv) * splat.r +
tex2D(_Texture2, i.uv) * (1 - splat.r);

Modulating both textures.
RGB Splat Map
We have a functional splat material, but it only supports two textures. Can we support more? We're only using the R channel, so how about we add the G and B channels as well? Then (1,0,0) represents the first texture, (0,1,0) represents the second texture, and (0,0,1) represents a third texture. To get a correct interpolation between those three, we just have to make sure that the RGB channels always add up to 1.
But wait, when we used only one channel, we could support two textures. That's because the weight of the second texture was derived via 1 - R. This same trick works for any number of channels. So it is possible to support yet another texture, via 1 - R - G - B.
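A quick Python sketch of deriving the four blend weights from one RGB splat sample (the sample values are hypothetical):

```python
# Four blend weights from a single RGB splat sample; the fourth
# weight is implicit, so the weights always sum to 1.
def splat_weights(r, g, b):
    return (r, g, b, 1.0 - r - g - b)

w = splat_weights(0.5, 0.25, 0.125)
print(w)       # (0.5, 0.25, 0.125, 0.125)
print(sum(w))  # 1.0
```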
This leads to a splat map with three colors, and black. As long as the three channels added together don't exceed 1, it is a valid map. Here is such a map, grab it and use the same import settings as before.

To support RGB splat maps, we have to add two additional textures to our shader. I assigned the marble detail and the test texture to them.
Properties {
	_MainTex ("Splat Map", 2D) = "white" {}
	[NoScaleOffset] _Texture1 ("Texture 1", 2D) = "white" {}
	[NoScaleOffset] _Texture2 ("Texture 2", 2D) = "white" {}
	[NoScaleOffset] _Texture3 ("Texture 3", 2D) = "white" {}
	[NoScaleOffset] _Texture4 ("Texture 4", 2D) = "white" {}
}

Add the required variables to the shader. Once again, no extra _ST variables are needed.
sampler2D _Texture1, _Texture2, _Texture3, _Texture4;
Inside the fragment program, add the extra texture samples. The second sample now uses the G channel and the third uses the B channel. The final sample is modulated with (1 - R - G - B).
return
tex2D(_Texture1, i.uv) * splat.r +
tex2D(_Texture2, i.uv) * splat.g +
tex2D(_Texture3, i.uv) * splat.b +
tex2D(_Texture4, i.uv) * (1 - splat.r - splat.g - splat.b);

Now you know how to apply detail textures and how to blend multiple textures with a splat map. It is also possible to combine these approaches.
You could add four detail textures to the splat shader and use the map to blend between them. Of course this requires four additional texture samples, so it doesn't come for free.
You could also use a map to control where a detail texture is applied, and where it is omitted. In that case, you need a monochrome map that functions as a mask. This is useful when a single texture contains regions that represent different materials, but not on as large a scale as a terrain. For example, if our marble texture also contained pieces of metal, you wouldn't want the marble details to be applied there.
The next tutorial is The First Light.