Catlike Coding
published 2017-04-30

Rendering 15

Deferred Lights


  • Use a custom light shader.
  • Decode LDR colors.
  • Add lighting in a separate pass.
  • Support directional, spotlight, and point lights.
  • Manually sample shadow maps.

This is part 15 of a tutorial series about rendering. In the previous installment, we added fog. Now we'll create our own deferred lights.

From now on, the Rendering tutorials are made with Unity 5.6.0. This Unity version changes a few things in both the editor and shaders, but you should still be able to find your way.

Playing with our own deferred lights.

Light Shader

We added support for the deferred rendering path in Rendering 13, Deferred Shading. All we had to do was fill the G-buffers. The lights were rendered later. The tutorial briefly explained how those lights were added by Unity. This time, we'll render these lights ourselves.

To test the lights, I'll use a simple scene with its ambient intensity set to zero. It is rendered with a deferred HDR camera.

Test scene, with and without directional light.

All objects in the scene are rendered to the G-buffers with our own shader. But the lights are rendered with Unity's default deferred shader, which is named Hidden / Internal-DeferredShading. You can verify this by going to the graphics settings via Edit / Project Settings / Graphics and switching the Deferred shader mode to Custom shader.

Default deferred light shader.

Using a Custom Shader

Each deferred light is rendered in a separate pass, modifying the colors of the image. Effectively, they're image effects, like our deferred fog shader from the previous tutorial. Let's start with a simple shader that overwrites everything with black.

Shader "Custom/DeferredShading" {
	
	Properties {
	}

	SubShader {

		Pass {
			Cull Off
			ZTest Always
			ZWrite Off
			
			CGPROGRAM

			#pragma target 3.0
			#pragma vertex VertexProgram
			#pragma fragment FragmentProgram
			
			#pragma exclude_renderers nomrt
			
			#include "UnityCG.cginc"

			struct VertexData {
				float4 vertex : POSITION;
			};

			struct Interpolators {
				float4 pos : SV_POSITION;
			};

			Interpolators VertexProgram (VertexData v) {
				Interpolators i;
				i.pos = UnityObjectToClipPos(v.vertex);
				return i;
			}

			float4 FragmentProgram (Interpolators i) : SV_Target {
				return 0;
			}

			ENDCG
		}
	}
}

Instruct Unity to use this shader when rendering deferred lights.

Using our custom shader.

A Second Pass

After switching to our shader, Unity complains that it doesn't have enough passes. Apparently, a second pass is needed. Let's just duplicate the pass that we already have and see what happens.

		Pass {
			…
		}

		Pass {
			
		}

Unity now accepts our shader and uses it to render the directional light. As a result, everything becomes black. The only exception is the sky. The stencil buffer is used as a mask to avoid rendering there, because the directional light doesn't affect the background.

Custom shader, lit and unlit.

But what about that second pass? Remember that when HDR is disabled, light data is logarithmically encoded. A final pass is needed to reverse this encoding. That's what the second pass is for. So if you disabled HDR for the camera, the second pass of our shader will also be used, once.

Avoiding the Sky

When rendering in LDR mode, you might see the sky turn black too. This can happen in the scene view or the game view. If the sky turns black, the conversion pass doesn't correctly use the stencil buffer as a mask. To fix this, explicitly configure the stencil settings of the second pass. We should only render when we're dealing with a fragment that's not part of the background. The appropriate stencil value is provided via _StencilNonBackground.

		Pass {
			Cull Off
			ZTest Always
			ZWrite Off

			Stencil {
				Ref [_StencilNonBackground]
				ReadMask [_StencilNonBackground]
				CompBack Equal
				CompFront Equal
			}
			
			…
		}

Converting Colors

To make the second pass work, we have to convert the data in the light buffer. Like our fog shader, a full-screen quad is drawn with UV coordinates that we can use to sample the buffer.

			struct VertexData {
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
			};

			struct Interpolators {
				float4 pos : SV_POSITION;
				float2 uv : TEXCOORD0;
			};

			Interpolators VertexProgram (VertexData v) {
				Interpolators i;
				i.pos = UnityObjectToClipPos(v.vertex);
				i.uv = v.uv;
				return i;
			}

The light buffer itself is made available to the shader via the _LightBuffer variable.

			sampler2D _LightBuffer;

			float4 FragmentProgram (Interpolators i) : SV_Target {
				return tex2D(_LightBuffer, i.uv);
			}
Raw LDR data, when unlit.

LDR colors are logarithmically encoded, using the formula 2^(-C). To decode this, we have to use the formula -log2(C).
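To see how the round trip works, consider a single channel with a value of C = 0.5. Encoding stores 2^(-0.5) ≈ 0.707 in the buffer, and -log2(0.707) gives back 0.5. Written as shader code:

	// Worked example of the round trip for a single channel value C = 0.5.
	float encoded = exp2(-0.5);     // ≈ 0.707, what the LDR light buffer stores
	float decoded = -log2(encoded); // 0.5, the original light contribution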

				return -log2(tex2D(_LightBuffer, i.uv));
Decoded unlit LDR image.

Now that we know that it works, enable HDR again.


Directional Lights

The first pass takes care of rendering the lights, so it's going to be fairly complicated. Let's create an include file for it, named MyDeferredShading.cginc. Copy all code from the pass to this file.

#if !defined(MY_DEFERRED_SHADING)
#define MY_DEFERRED_SHADING

#include "UnityCG.cginc"



#endif

Then include MyDeferredShading in the first pass.

		Pass {
			Cull Off
			ZTest Always
			ZWrite Off

			CGPROGRAM

			#pragma vertex VertexProgram
			#pragma fragment FragmentProgram

			#pragma exclude_renderers nomrt

			#include "MyDeferredShading.cginc"

			ENDCG
		}

Because we're supposed to add light to the image, we have to make sure that we don't erase what's already been rendered. We can do so by changing the blend mode to combine the full source and destination colors.

			Blend One One
			Cull Off
			ZTest Always
			ZWrite Off

We need shader variants for all possible light configurations. The multi_compile_lightpass compiler directive creates all keyword combinations that we need. The only exception is HDR mode. We have to add a separate multi-compile directive for that.

			#pragma exclude_renderers nomrt

			#pragma multi_compile_lightpass
			#pragma multi_compile _ UNITY_HDR_ON

Although this shader is used for all three light types, we'll first limit ourselves to directional lights only.

G-Buffer UV Coordinates

We need UV coordinates to sample from the G-buffers. Unfortunately, Unity doesn't supply light passes with convenient texture coordinates. Instead, we have to derive them from the clip-space position. To do so, we can use the ComputeScreenPos function, which is defined in UnityCG. This function produces homogeneous coordinates, just like the clip-space coordinates, so we have to use a float4 to store them.
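In case you're curious, the essence of that function is quite simple. Here's a rough sketch of what it does, leaving out the platform details that the real UnityCG code handles, like flipped projections:

// Rough sketch of ComputeScreenPos: move clip-space XY from the -W…W range
// into the 0…W range, postponing the division by W until after interpolation.
float4 ComputeScreenPosSketch (float4 clipPos) {
	float4 o = clipPos * 0.5;
	o.xy += o.w;
	o.zw = clipPos.zw;
	return o;
}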

struct Interpolators {
	float4 pos : SV_POSITION;
	float4 uv : TEXCOORD0;
};

Interpolators VertexProgram (VertexData v) {
	Interpolators i;
	i.pos = UnityObjectToClipPos(v.vertex);
	i.uv = ComputeScreenPos(i.pos);
	return i;
}

In the fragment program, we can compute the final 2D coordinates. As explained in Rendering 7, Shadows, this has to happen after interpolation.

float4 FragmentProgram (Interpolators i) : SV_Target {
	float2 uv = i.uv.xy / i.uv.w;

	return 0;
}

World Position

When we created our deferred fog image effect, we had to figure out the fragment's distance from the camera. We did so by shooting rays from the camera through each fragment to the far plane, then scaling those by the fragment's depth value. We can use the same approach here to reconstruct the fragment's world position.

In the case of directional lights, the rays for the four vertices of the quad are supplied as normal vectors. So we can just pass them through the vertex program and interpolate them.

struct VertexData {
	float4 vertex : POSITION;
	float3 normal : NORMAL;
};

struct Interpolators {
	float4 pos : SV_POSITION;
	float4 uv : TEXCOORD0;
	float3 ray : TEXCOORD1;
};

Interpolators VertexProgram (VertexData v) {
	Interpolators i;
	i.pos = UnityObjectToClipPos(v.vertex);
	i.uv = ComputeScreenPos(i.pos);
	i.ray = v.normal;
	return i;
}

We can find the depth value in the fragment program by sampling the _CameraDepthTexture texture and linearizing it, just like we did for the fog effect.

UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

float4 FragmentProgram (Interpolators i) : SV_Target {
	float2 uv = i.uv.xy / i.uv.w;
	
	float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
	depth = Linear01Depth(depth);

	return 0;
}

However, a big difference is that we supplied rays that reached the far plane to our fog shader. In this case, we are supplied with rays that reach the near plane. We have to scale them so we get rays that reach the far plane. This can be done by scaling the ray so its Z coordinate becomes 1, and multiplying it with the far plane distance.

	depth = Linear01Depth(depth);

	float3 rayToFarPlane = i.ray * _ProjectionParams.z / i.ray.z;

Scaling this ray by the depth value gives us a position. The supplied rays are defined in view space, which is the camera's local space. So we end up with the fragment's position in view space as well.

	float3 rayToFarPlane = i.ray * _ProjectionParams.z / i.ray.z;
	float3 viewPos = rayToFarPlane * depth;

The conversion from this space to world space is done with the unity_CameraToWorld matrix, which is defined in ShaderVariables.

	float3 viewPos = rayToFarPlane * depth;
	float3 worldPos = mul(unity_CameraToWorld, float4(viewPos, 1)).xyz;

Reading G-Buffer Data

Next, we need access to the G-buffers to retrieve the surface properties. The buffers are made available via three _CameraGBufferTexture variables.

sampler2D _CameraGBufferTexture0;
sampler2D _CameraGBufferTexture1;
sampler2D _CameraGBufferTexture2;

We filled these same buffers in the Rendering 13, Deferred Shading tutorial. Now we get to read from them. We need the albedo, specular tint, smoothness, and normal.
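As a reminder of how these buffers were filled back then, this is Unity's standard G-buffer layout:

// _CameraGBufferTexture0: RGB = albedo,        A = occlusion
// _CameraGBufferTexture1: RGB = specular tint, A = smoothness
// _CameraGBufferTexture2: RGB = world-space normal, encoded as (N + 1) / 2
// The fourth buffer holds emission plus the accumulated lighting we're adding to.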

	float3 worldPos = mul(unity_CameraToWorld, float4(viewPos, 1)).xyz;

	float3 albedo = tex2D(_CameraGBufferTexture0, uv).rgb;
	float3 specularTint = tex2D(_CameraGBufferTexture1, uv).rgb;
	float smoothness = tex2D(_CameraGBufferTexture1, uv).a;
	float3 normal = tex2D(_CameraGBufferTexture2, uv).rgb * 2 - 1;

Computing BRDF

The BRDF functions are defined in UnityPBSLighting, so we'll have to include that file.

//#include "UnityCG.cginc"
#include "UnityPBSLighting.cginc"

Now we only need three more bits of data before we can invoke the BRDF function in our fragment program. First is the view direction, which is found as usual.

	float3 worldPos = mul(unity_CameraToWorld, float4(viewPos, 1)).xyz;
	float3 viewDir = normalize(_WorldSpaceCameraPos - worldPos);

Second is the surface reflectivity. We derive that from the specular tint. It's simply the strongest color component. We can use the SpecularStrength function to extract it.
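That function is defined in UnityStandardUtils; on shader target 3.0 it effectively boils down to taking the maximum of the three channels, roughly:

// The gist of SpecularStrength: return the strongest channel of the specular color.
float SpecularStrengthSketch (float3 specular) {
	return max(specular.r, max(specular.g, specular.b));
}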

	float3 albedo = tex2D(_CameraGBufferTexture0, uv).rgb;
	float3 specularTint = tex2D(_CameraGBufferTexture1, uv).rgb;
	float smoothness = tex2D(_CameraGBufferTexture1, uv).a;
	float3 normal = tex2D(_CameraGBufferTexture2, uv).rgb * 2 - 1;
	float oneMinusReflectivity = 1 - SpecularStrength(specularTint);

Third, we need the light data. Let's start with dummy lights.

	float oneMinusReflectivity = 1 - SpecularStrength(specularTint);

	UnityLight light;
	light.color = 0;
	light.dir = 0;
	UnityIndirect indirectLight;
	indirectLight.diffuse = 0;
	indirectLight.specular = 0;

Finally, we can compute the contribution of the light for this fragment, using the BRDF function.

	indirectLight.specular = 0;

	float4 color = UNITY_BRDF_PBS(
		albedo, specularTint, oneMinusReflectivity, smoothness,
		normal, viewDir, light, indirectLight
	);

	return color;

Configuring the Light

Indirect light is not applicable here, so it remains black. But the direct light has to be configured so it matches the light that's currently being rendered. For a directional light, we need a color and a direction. These are made available via the _LightColor and _LightDir variables.

float4 _LightColor, _LightDir;

Let's create a separate function to set up the light. Simply copy the variables into a light structure and return it.

UnityLight CreateLight () {
	UnityLight light;
	light.dir = _LightDir;
	light.color = _LightColor.rgb;
	return light;
}

Use this function in the fragment program.

	UnityLight light = CreateLight();
//	light.color = 0;
//	light.dir = 0;
Light from the wrong direction.

We finally get lighting, but it appears to come from the wrong direction. This happens because _LightDir is set to the direction in which the light is traveling. For our calculations, we need the direction from the surface to the light, so the opposite.

	light.dir = -_LightDir;
Directional light, without shadows.

Shadows

In My Lighting, we relied on the macros from AutoLight to determine the light attenuation caused by shadows. Unfortunately, that file wasn't written with deferred lights in mind. So we'll do the shadow sampling ourselves. The shadow map can be accessed via the _ShadowMapTexture variable.
My Lighting 中,我们依靠 AutoLight 的宏来确定阴影导致的光衰减。不幸的是,该文件在编写时没有考虑到延迟光照。因此,我们将自行进行阴影采样。阴影贴图可以通过 _ShadowMapTexture 变量访问。

sampler2D _ShadowMapTexture;

However, we cannot indiscriminately declare this variable. It is already defined for point and spotlight shadows in UnityShadowLibrary, which we indirectly include. So we should not define it ourselves, except when working with shadows for directional lights.

#if defined (SHADOWS_SCREEN)
	sampler2D _ShadowMapTexture;
#endif

To apply directional shadows, we simply have to sample the shadow texture and use it to attenuate the light color. Doing this in CreateLight means that the UV coordinates have to be added to it as a parameter.

UnityLight CreateLight (float2 uv) {
	UnityLight light;
	light.dir = -_LightDir;
	float shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;
	light.color = _LightColor.rgb * shadowAttenuation;
	return light;
}

Pass the UV coordinates to it in the fragment program.

	UnityLight light = CreateLight(uv);
Directional light with shadows.

Of course this is only valid when the directional light has shadows enabled. If not, the shadow attenuation is always 1.

	float shadowAttenuation = 1;
	#if defined(SHADOWS_SCREEN)
		shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;
	#endif
	light.color = _LightColor.rgb * shadowAttenuation;

Fading Shadows

The shadow map is finite. It cannot cover the entire world. The larger an area it covers, the lower the resolution of the shadows. Unity has a maximum distance up to which shadows are drawn. Beyond that, there are no real-time shadows. This distance can be adjusted via Edit / Project Settings / Quality.

Shadow distance quality setting.

When shadows approach this distance, they fade out. At least, that's what Unity's shaders do. Because we're manually sampling the shadow map, our shadows get truncated when the edge of the map is reached. The result is that shadows get sharply cut off or are missing beyond the fade distance.

Large and small shadow distance.

To fade the shadows, we must first know the distance at which they should be completely gone. This distance depends on how the directional shadows are projected. In Stable Fit mode the fading is spherical, centered on the middle of the map. In Close Fit mode it's based on the view depth.

The UnityComputeShadowFadeDistance function can figure out the correct metric for us. It has the world position and view depth as parameters. It will either return the distance from the shadow center, or the unmodified view depth.

UnityLight CreateLight (float2 uv, float3 worldPos, float viewZ) {
	UnityLight light;
	light.dir = -_LightDir;
	float shadowAttenuation = 1;
	#if defined(SHADOWS_SCREEN)
		shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;

		float shadowFadeDistance =
			UnityComputeShadowFadeDistance(worldPos, viewZ);
	#endif
	light.color = _LightColor.rgb * shadowAttenuation;
	return light;
}

The shadows should begin to fade as they approach the fade distance, completely disappearing once they reach it. The UnityComputeShadowFade function calculates the appropriate fade factor.

		float shadowFadeDistance =
			UnityComputeShadowFadeDistance(worldPos, viewZ);
		float shadowFade = UnityComputeShadowFade(shadowFadeDistance);

The shadow fade factor is a value from 0 to 1, which indicates how much the shadows should fade away. The actual fading can be done by simply adding this value to the shadow attenuation, and clamping to 0–1.

		float shadowFade = UnityComputeShadowFade(shadowFadeDistance);
		shadowAttenuation = saturate(shadowAttenuation + shadowFade);

To make this work, supply the world position and view depth to CreateLight in our fragment program. The view depth is the Z component of the fragment's position in view space.

	UnityLight light = CreateLight(uv, worldPos, viewPos.z);
Fading shadows.

Light Cookies

Another thing that we have to support is light cookies. The cookie texture is made available via _LightTexture0. Besides that, we also have to convert from world to light space, so we can sample the texture. The transformation for that is made available via the unity_WorldToLight matrix variable.

sampler2D _LightTexture0;
float4x4 unity_WorldToLight;

In CreateLight, use the matrix to convert the world position to light-space coordinates. Then use those to sample the cookie texture. Let's use a separate attenuation variable to keep track of the cookie's attenuation.
CreateLight 中,使用矩阵将世界坐标转换为光照空间坐标。然后使用这些坐标对光照贴图纹理进行采样。让我们使用一个单独的 attenuation 变量来跟踪光照贴图的衰减。

	light.dir = -_LightDir;
	float attenuation = 1;
	float shadowAttenuation = 1;
	
	#if defined(DIRECTIONAL_COOKIE)
		float2 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1)).xy;
		attenuation *= tex2D(_LightTexture0, uvCookie).w;
	#endif

	…
	
	light.color = _LightColor.rgb * (attenuation * shadowAttenuation);
Directional light with cookie.

The results appear good, except when you pay close attention to geometry edges.

Artifacts along edges.

These artifacts appear when there is a large difference between the cookie coordinates of adjacent fragments. In those cases, the GPU chooses a mipmap level that is too low for the closest surface. Aras Pranckevičius figured this one out for Unity. The solution Unity uses is to apply a bias when sampling mip maps, so we'll do that too.

		attenuation *= tex2Dbias(_LightTexture0, float4(uvCookie, 0, -8)).w;
Biased cookie sampling.

Supporting LDR

By now we can correctly render directional lights, but only in HDR mode. It goes wrong for LDR.

Incorrect LDR colors.

First, the encoded LDR colors have to be multiplied into the light buffer, instead of added. We can do so by changing the blend mode of our shader to Blend DstColor Zero. However, if we do that then HDR rendering will go wrong. Instead, we'll have to make the blend mode variable. Unity uses _SrcBlend and _DstBlend for this.
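To see why multiplication is the correct operation, remember that the buffer stores values of the form 2^(-C). Multiplying two encoded values therefore adds the underlying light contributions, which the final conversion pass then recovers with -log2:

2^(-C1) * 2^(-C2) = 2^(-(C1 + C2))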

			Blend [_SrcBlend] [_DstBlend]
Different, but still incorrect.

We also have to apply the 2^(-C) conversion at the end of our fragment program, when UNITY_HDR_ON is not defined.

	float4 color = UNITY_BRDF_PBS(
		albedo, specularTint, oneMinusReflectivity, smoothness,
		normal, viewDir, light, indirectLight
	);
	#if !defined(UNITY_HDR_ON)
		color = exp2(-color);
	#endif
	return color;

Spotlights

Because directional lights affect everything, they are drawn as full-screen quads. In contrast, spotlights affect only the part of the scene that lies inside their cone. It is usually unnecessary to calculate spotlight lighting for the entire image. Instead, a pyramid is rendered that matches the spotlight's area of influence.

Drawing a Pyramid

Disable the directional light and use a spotlight instead. Because our shader only works correctly for directional lights, the result will be wrong. But it allows you to see which parts of the pyramid get rendered.

Parts of a pyramid.

It turns out that the pyramid is rendered as a regular 3D object. Its back faces are culled, so we see the pyramid's front side. And it's only drawn when there's nothing in front of it. Besides that, a pass is added which sets the stencil buffer to limit the drawing to fragments that lie inside the pyramid volume. You can verify these settings via the frame debugger.

How it is drawn.

This means that the culling and z-test settings of our shader are overruled. So let's just remove them from our shader.

			Blend [_SrcBlend] [_DstBlend]
//			Cull Off
//			ZTest Always
			ZWrite Off

This approach works when the spotlight volume is sufficiently far away from the camera. However, it fails when the light gets too close to the camera. When that happens, the camera could end up inside the volume. It is even possible that part of the near plane lies inside it, while the rest lies outside of it. In these cases, the stencil buffer cannot be used to limit the rendering.

The trick used to still render the light is to draw the inside surface of the pyramid, instead of its outside surface. This is done by rendering its back faces instead of its front faces. Also, these surfaces are only rendered when they end up behind what's already rendered. This approach also covers all fragments that lie inside the spotlight's volume. But it ends up rendering too many fragments, as normally hidden parts of the pyramid now also get rendered. So it's only done when necessary.

Drawing the backside when close to the camera.

If you move the camera or spotlight around near each other, you'll see Unity switch between these two rendering methods as needed. Once our shader works correctly for spotlights, there will be no visual difference between both approaches.

Supporting Multiple Light Types

Currently, CreateLight only works for directional lights. Let's make sure that the code specific to directional lights is only used when appropriate.

UnityLight CreateLight (float2 uv, float3 worldPos, float viewZ) {
	UnityLight light;
//	light.dir = -_LightDir;
	float attenuation = 1;
	float shadowAttenuation = 1;

	#if defined(DIRECTIONAL) || defined(DIRECTIONAL_COOKIE)
		light.dir = -_LightDir;

		#if defined(DIRECTIONAL_COOKIE)
			float2 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1)).xy;
			attenuation *= tex2Dbias(_LightTexture0, float4(uvCookie, 0, -8)).w;
		#endif

		#if defined(SHADOWS_SCREEN)
			shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;

			float shadowFadeDistance =
				UnityComputeShadowFadeDistance(worldPos, viewZ);
			float shadowFade = UnityComputeShadowFade(shadowFadeDistance);
			shadowAttenuation = saturate(shadowAttenuation + shadowFade);
		#endif
	#else
		light.dir = 1;
	#endif

	light.color = _LightColor.rgb * (attenuation * shadowAttenuation);
	return light;
}

Although the shadow fading works based on the directional shadow map, the shadows of the other light types are faded too. This ensures that all shadows fade the same way, instead of only some shadows. Thus, the shadow fading code applies to all lights, as long as there are shadows. So let's move that code outside of the light-specific block.

We can use a boolean to control whether the shadow-fading code is used. As the boolean is a constant value, the code will be eliminated if it remains false.

UnityLight CreateLight (float2 uv, float3 worldPos, float viewZ) {
	UnityLight light;
	float attenuation = 1;
	float shadowAttenuation = 1;
	bool shadowed = false;

	#if defined(DIRECTIONAL) || defined(DIRECTIONAL_COOKIE)
		…

		#if defined(SHADOWS_SCREEN)
			shadowed = true;
			shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;

//			float shadowFadeDistance =
//				UnityComputeShadowFadeDistance(worldPos, viewZ);
//			float shadowFade = UnityComputeShadowFade(shadowFadeDistance);
//			shadowAttenuation = saturate(shadowAttenuation + shadowFade);
		#endif
	#else
		light.dir = 1;
	#endif

	if (shadowed) {
		float shadowFadeDistance =
			UnityComputeShadowFadeDistance(worldPos, viewZ);
		float shadowFade = UnityComputeShadowFade(shadowFadeDistance);
		shadowAttenuation = saturate(shadowAttenuation + shadowFade);
	}

	light.color = _LightColor.rgb * (attenuation * shadowAttenuation);
	return light;
}

Lights that aren't directional have a position. It is made available via _LightPos.

float4 _LightColor, _LightDir, _LightPos;

Now we can determine the light vector and light direction for spotlights.

	#else
		float3 lightVec = _LightPos.xyz - worldPos;
		light.dir = normalize(lightVec);
	#endif

World Position Again

The light direction doesn't appear to be correct; the result is black. This happens because the world position is computed incorrectly for spotlights. As we're rendering a pyramid somewhere in the scene, we don't have a convenient full-screen quad with rays stored in the normal channel. Instead, our vertex program has to derive the rays from the vertex positions. This is done by converting the points to view space, for which we can use the UnityObjectToViewPos function.
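UnityObjectToViewPos is a small helper from UnityCG; in essence it transforms the point to world space and then applies the view matrix, something like:

// Rough equivalent of UnityObjectToViewPos.
float3 ObjectToViewPosSketch (float3 pos) {
	return mul(UNITY_MATRIX_V, mul(unity_ObjectToWorld, float4(pos, 1))).xyz;
}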

	i.ray = UnityObjectToViewPos(v.vertex);

However, this produces rays with the wrong orientation. We have to negate their X and Y coordinates.

	i.ray = UnityObjectToViewPos(v.vertex) * float3(-1, -1, 1);
Correct world position.

This alternative approach works when light geometry is rendered in the scene. When a full-screen quad is used, we should just use the vertex normals. Unity tells us which case we're dealing with via the _LightAsQuad variable.

float _LightAsQuad;

If it's set to 1, we're dealing with a quad and can use the normals. Otherwise, we have to use UnityObjectToViewPos.

	i.ray = lerp(
		UnityObjectToViewPos(v.vertex) * float3(-1, -1, 1),
		v.normal,
		_LightAsQuad
	);

Cookie Attenuation

The spotlight's conic attenuation is created via a cookie texture, whether it's the default circle or a custom cookie. We can begin by copying the cookie code of the directional light.

		float3 lightVec = _LightPos.xyz - worldPos;
		light.dir = normalize(lightVec);

		float2 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1)).xy;
		attenuation *= tex2Dbias(_LightTexture0, float4(uvCookie, 0, -8)).w;

However, spotlight cookies get larger the further away from the light's position you go. This is done with a perspective transformation. So the matrix multiplication produces 4D homogeneous coordinates. To end up with regular 2D coordinates, we have to divide X and Y by W.

		float4 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1));
		uvCookie.xy /= uvCookie.w;
		attenuation *= tex2Dbias(_LightTexture0, float4(uvCookie.xy, 0, -8)).w;
Cookie attenuation.

This actually results in two light cones, one forward and one backward. The backward cone usually ends up outside of the rendered area, but this is not guaranteed. We only want the forward cone, which corresponds with a negative W coordinate.

		attenuation *= tex2Dbias(_LightTexture0, float4(uvCookie.xy, 0, -8)).w;
		attenuation *= uvCookie.w < 0;

Distance Attenuation

The light from a spotlight also attenuates based on distance. This attenuation is stored in a lookup texture, which is made available via _LightTextureB0.

sampler2D _LightTexture0, _LightTextureB0;

The texture is designed so it has to be sampled with the squared light distance, scaled by the light's range. The range is stored in the fourth component of _LightPos. Which of the texture's channels should be used varies per platform and is defined by the UNITY_ATTEN_CHANNEL macro.

		light.dir = normalize(lightVec);

		attenuation *= tex2D(
			_LightTextureB0,
			(dot(lightVec, lightVec) * _LightPos.w).rr
		).UNITY_ATTEN_CHANNEL;

		float4 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1));
Cookie and distance attenuation.

Shadows

When the spotlight has shadows, the SHADOWS_DEPTH keyword is defined.

		float4 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1));
		uvCookie.xy /= uvCookie.w;
		attenuation *= tex2Dbias(_LightTexture0, float4(uvCookie.xy, 0, -8)).w;

		#if defined(SHADOWS_DEPTH)
			shadowed = true;
		#endif

Spotlights and directional lights use the same variable to sample their shadow map. In the case of spotlights, we can use UnitySampleShadowmap to take care of the details of sampling hard or soft shadows. We have to supply it with the fragment position in shadow space. The first matrix in the unity_WorldToShadow array can be used to convert from world to shadow space.

			shadowed = true;
			shadowAttenuation = UnitySampleShadowmap(
				mul(unity_WorldToShadow[0], float4(worldPos, 1))
			);
Spotlight with shadows.

Point Lights

Point lights use the same light vector, direction, and distance attenuation as spotlights. So they can share that code. The rest of the spotlight code should only be used when the SPOT keyword is defined.

	#if defined(DIRECTIONAL) || defined(DIRECTIONAL_COOKIE)
		…
	#else
		float3 lightVec = _LightPos.xyz - worldPos;
		light.dir = normalize(lightVec);

		attenuation *= tex2D(
			_LightTextureB0,
			(dot(lightVec, lightVec) * _LightPos.w).rr
		).UNITY_ATTEN_CHANNEL;

		#if defined(SPOT)
			float4 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1));
			uvCookie.xy /= uvCookie.w;
			attenuation *=
				tex2Dbias(_LightTexture0, float4(uvCookie.xy, 0, -8)).w;
			attenuation *= uvCookie.w < 0;

			#if defined(SHADOWS_DEPTH)
				shadowed = true;
				shadowAttenuation = UnitySampleShadowmap(
					mul(unity_WorldToShadow[0], float4(worldPos, 1))
				);
			#endif
		#endif
	#endif

This is already enough to get point lights working. They are rendered the same as spotlights, except that an icosphere is used instead of a pyramid.

High-intensity point light.

Shadows

The shadows of point lights are stored in a cube map. UnitySampleShadowmap takes care of the sampling for us. In this case, we have to provide it with a vector going from light to surface, to sample the cube map. This is the opposite of the light vector.

		#if defined(SPOT)
			…
		#else
			#if defined(SHADOWS_CUBE)
				shadowed = true;
				shadowAttenuation = UnitySampleShadowmap(-lightVec);
			#endif
		#endif
Point light with shadows.

Cookies

Point light cookies are also made available via _LightTexture0. However, in this case we need a cube map instead of a regular texture.

//sampler2D _LightTexture0, _LightTextureB0;
#if defined(POINT_COOKIE)
	samplerCUBE _LightTexture0;
#else
	sampler2D _LightTexture0;
#endif

sampler2D _LightTextureB0;
float4x4 unity_WorldToLight;

To sample the cookie, convert the fragment's world position to light space and use that to sample the cube map.

		#else
			#if defined(POINT_COOKIE)
				float3 uvCookie =
					mul(unity_WorldToLight, float4(worldPos, 1)).xyz;
				attenuation *=
					texCUBEbias(_LightTexture0, float4(uvCookie, -8)).w;
			#endif
			
			#if defined(SHADOWS_CUBE)
				shadowed = true;
				shadowAttenuation = UnitySampleShadowmap(-lightVec);
			#endif
		#endif
Point light with cookie.

Skipping Shadows

We are now able to render all dynamic lights with our own shader. While we don't pay much attention to optimizations at this point, there is one potentially large optimization worth considering.

Fragments that end up beyond the shadow fade distance won't be shadowed. However, we're still sampling their shadows, which can be expensive. We can avoid this by branching based on the shadow fade factor. If it approaches 1, then we can skip the shadow attenuation completely.

	if (shadowed) {
		float shadowFadeDistance =
			UnityComputeShadowFadeDistance(worldPos, viewZ);
		float shadowFade = UnityComputeShadowFade(shadowFadeDistance);
		shadowAttenuation = saturate(shadowAttenuation + shadowFade);

		UNITY_BRANCH
		if (shadowFade > 0.99) {
			shadowAttenuation = 1;
		}
	}

However, branches are potentially expensive themselves. It's only an improvement because this is a coherent branch. Except near the edge of the shadow region, all fragments either fall inside or outside of it. But this only matters if the GPU can take advantage of this. HLSLSupport defines the UNITY_FAST_COHERENT_DYNAMIC_BRANCHING macro when this should be the case.

		#if defined(UNITY_FAST_COHERENT_DYNAMIC_BRANCHING)
			UNITY_BRANCH
			if (shadowFade > 0.99) {
				shadowAttenuation = 1;
			}
		#endif

Even then, it is only really worth it when the shadows require multiple texture samples. This is the case for soft spotlight and point light shadows, which is indicated with the SHADOWS_SOFT keyword. Directional shadows always require a single texture sample, so that's cheap.

		#if defined(UNITY_FAST_COHERENT_DYNAMIC_BRANCHING) && defined(SHADOWS_SOFT)
			UNITY_BRANCH
			if (shadowFade > 0.99) {
				shadowAttenuation = 1;
			}
		#endif

The next tutorial is Static Lighting.
