13.1 Principles:
- The depth texture stores not a color value but a single high-precision depth value in [0,1]. It comes from the normalized device coordinates (NDC): NDC z in [-1,1] is remapped as d = 0.5 * z_ndc + 0.5. Precision is typically 24 or 16 bits.
- How it is generated: 1) with deferred rendering, the depth is directly available in the G-buffer; 2) otherwise Unity renders a separate pass, using the Shader Replacement technique to select objects whose RenderType is Opaque and whose render queue is <= 2500, and renders those into the depth (and normal) texture (see the Tags sketch after this list).
- Requesting the depth texture: camera.depthTextureMode = DepthTextureMode.Depth (depth only) or DepthTextureMode.DepthNormals (depth plus view-space normals); the resulting global textures are declared after this list.
- Reconstructing the world position (implemented by the code in 13.2 below):
1 Sample the depth texture to get a depth value in the [0,1] range
2 Remap it to NDC coordinates
3 Multiply by the inverse VP matrix, then divide by w to get the world position
(In the MVP chain, P is the projection matrix <view space to clip space>)
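A minimal sketch of the selection rule in 2): an object only ends up in the depth texture if its shader's SubShader carries tags like the following (real ShaderLab tags; "Geometry" is queue 2000, which is <= 2500):
SubShader {
    Tags { "RenderType" = "Opaque" "Queue" = "Geometry" }
    // ... passes ...
}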
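Once depthTextureMode is set, Unity exposes the results as global shader textures (built-in names):
sampler2D _CameraDepthTexture;        // available with DepthTextureMode.Depth
sampler2D _CameraDepthNormalsTexture; // available with DepthTextureMode.DepthNormals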
13.2 Motion Blur Revisited
Depth-sampling macros: SAMPLE_DEPTH_TEXTURE, SAMPLE_DEPTH_TEXTURE_PROJ, SAMPLE_DEPTH_TEXTURE_LOD
The depth value sampled from the depth texture is usually non-linear and must be converted to a view-space depth or a linear [0,1] value. (The non-linearity comes from the clip matrix used by the perspective projection.)
- Unity wraps the conversion in LinearEyeDepth and Linear01Depth, giving Z_view (view-space depth) and Z_01 (linear depth in [0,1]) respectively.
- Unity's DecodeDepthNormal returns both the depth value and the normal direction (depth: linear value in [0,1]; normal: view space). A usage sketch follows.
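A minimal fragment-shader sketch of these helpers (i.uv is assumed to be the screen UV; the macro and function names are Unity built-ins, and the texture declarations are shown above in 13.1):
// Depth-only texture: sample, then linearize.
float d         = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
float viewDepth = LinearEyeDepth(d); // view-space depth
float depth01   = Linear01Depth(d);  // linear depth remapped to [0,1]
// Depth+normals texture: decode depth and view-space normal together.
float depth;
float3 viewNormal;
DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth, viewNormal);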
1 Sample the depth texture to get the depth value
2 Remap it to NDC coordinates
3 Multiply by the inverse VP matrix, then divide by w to get the world position
4 Compute the per-pixel velocity from the previous frame's and the current frame's positions
// Get the depth buffer value at this pixel.
float d = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth);
// H is the viewport position at this pixel in the range -1 to 1.
float4 H = float4(i.uv.x * 2 - 1, i.uv.y * 2 - 1, d * 2 - 1, 1);
// Transform by the view-projection inverse.
float4 D = mul(_CurrentViewProjectionInverseMatrix, H);
// Divide by w to get the world position.
float4 worldPos = D / D.w; // why divide by D.w rather than multiply? see the note below
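// Note (answering the question above): H is the NDC position, with H.w = 1.
// Write the original clip-space position as C = VP * P_world, so H = C / C.w.
// Then D = VP^-1 * H = (VP^-1 * C) / C.w = P_world / C.w. Since P_world.w = 1,
// D.w = 1 / C.w, and D / D.w = P_world exactly; D * D.w would give P_world / C.w^2.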
// Current viewport position
float4 currentPos = H;
// Use the world position, and transform by the previous view-projection matrix.
float4 previousPos = mul(_PreviousViewProjectionMatrix, worldPos);
// Convert to nonhomogeneous points [-1,1] by dividing by w.
previousPos /= previousPos.w;
// Use this frame's position and last frame's to compute the pixel velocity.
float2 velocity = (currentPos.xy - previousPos.xy)/2.0f;
float2 uv = i.uv;
float4 c = tex2D(_MainTex, uv);
uv += velocity * _BlurSize;
for (int it = 1; it < 3; it++, uv += velocity * _BlurSize) {
float4 currentColor = tex2D(_MainTex, uv);
c += currentColor;
}
c /= 3;
return fixed4(c.rgb, 1.0);
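Note: _CurrentViewProjectionInverseMatrix and _PreviousViewProjectionMatrix are not built-ins. The script computes camera.projectionMatrix * camera.worldToCameraMatrix each frame, passes its inverse as the current matrix, and caches the uninverted one as the previous-frame matrix for the next call.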
The second pass uses the standard render states for post-processing (a minimal pass sketch follows):
ZTest Always Cull Off
ZWrite Off
OnRenderImage can run right after the opaque passes have finished; if depth writing were left on, the full-screen quad would occlude the transparent passes rendered afterwards, hence ZWrite Off.
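A minimal sketch of such a pass, assuming it sits inside the SubShader of a post-effect shader (vert_img / v2f_img are UnityCG's pass-through full-screen helpers; the identity frag is a placeholder):
Pass {
    ZTest Always Cull Off ZWrite Off

    CGPROGRAM
    #pragma vertex vert_img
    #pragma fragment frag
    #include "UnityCG.cginc"

    sampler2D _MainTex;

    fixed4 frag(v2f_img i) : SV_Target {
        // Identity post-process: just copy the source image.
        return tex2D(_MainTex, i.uv);
    }
    ENDCG
}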
13.3 Global Fog Revisited
interpolatedRay carries, for each pixel, the direction from the camera to that pixel, and its magnitude encodes the distance information (see the sketch below).
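A sketch of how the ray rebuilds the world position (assuming i.interpolatedRay is the per-vertex frustum-corner ray, scaled on the script side so that multiplying by linear eye depth yields the camera-to-pixel offset):
float linearDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
float3 worldPos   = _WorldSpaceCameraPos + linearDepth * i.interpolatedRay.xyz;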
13.4 Edge Detection Revisited
1 Principle
1) Set the camera's depth texture mode to DepthNormals.
2) Use the Roberts operator as the edge-detection convolution kernel, shown in Image1-1 below.
3) Test whether neighboring normals and depths are similar enough; key code in Code1-1 below.
Question: in CheckSame, is the centerDepth factor in diffDepth < 0.1 * centerDepth necessary, and why? Answer: it makes the threshold relative rather than absolute. Under perspective, depth differences between adjacent pixels grow with distance, so a fixed threshold would falsely mark distant slanted surfaces as edges; scaling by centerDepth turns the test into a ~10% relative tolerance (matching the "scale the required threshold by the distance" comment in the code).
Image1-1: the two 2x2 Roberts cross kernels (diagonal difference operators), Gx = [-1 0; 0 1], Gy = [0 -1; 1 0] (signs are immaterial here since absolute differences are taken).
Code1-1:
v2f vert(appdata_img v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    half2 uv = v.texcoord;
    o.uv[0] = uv; // center tap; uv[1..4] below are the Roberts cross diagonal taps
    o.uv[1] = uv + _MainTex_TexelSize.xy * half2(1,1) * _SampleDistance;
    o.uv[2] = uv + _MainTex_TexelSize.xy * half2(-1,-1) * _SampleDistance;
    o.uv[3] = uv + _MainTex_TexelSize.xy * half2(-1,1) * _SampleDistance;
    o.uv[4] = uv + _MainTex_TexelSize.xy * half2(1,-1) * _SampleDistance;
    ...
    return o;
}
half CheckSame(half4 center, half4 sample) {
half2 centerNormal = center.xy;
float centerDepth = DecodeFloatRG(center.zw);
half2 sampleNormal = sample.xy;
float sampleDepth = DecodeFloatRG(sample.zw);
// difference in normals
// do not bother decoding normals - there's no need here
half2 diffNormal = abs(centerNormal - sampleNormal) * _Sensitivity.x;
int isSameNormal = (diffNormal.x + diffNormal.y) < 0.1;
// difference in depth
float diffDepth = abs(centerDepth - sampleDepth) * _Sensitivity.y;
// scale the required threshold by the distance
int isSameDepth = diffDepth < 0.1 * centerDepth;
// return:
// 1 - if normals and depth are similar enough
// 0 - otherwise
return isSameNormal * isSameDepth ? 1.0 : 0.0;
}
fixed4 fragRobertsCrossDepthAndNormal(v2f i) : SV_Target {
    half4 sample1 = tex2D(_CameraDepthNormalsTexture, i.uv[1]);
    half4 sample2 = tex2D(_CameraDepthNormalsTexture, i.uv[2]);
    half4 sample3 = tex2D(_CameraDepthNormalsTexture, i.uv[3]);
    half4 sample4 = tex2D(_CameraDepthNormalsTexture, i.uv[4]);
    half edge = 1.0;
    // Roberts cross: compare the two diagonal pairs; edge goes to 0 where they differ.
    edge *= CheckSame(sample1, sample2);
    edge *= CheckSame(sample3, sample4);
    // Blend toward the edge color; _EdgeColor/_BackgroundColor/_EdgeOnly are
    // material properties assumed here to make the function complete.
    fixed4 withEdgeColor = lerp(_EdgeColor, tex2D(_MainTex, i.uv[0]), edge);
    fixed4 onlyEdgeColor = lerp(_EdgeColor, _BackgroundColor, edge);
    return lerp(withEdgeColor, onlyEdgeColor, _EdgeOnly);
}