Platform-specific rendering differences

Unity runs on various platforms, and in some cases graphics rendering behaves differently between them. Most of the time Unity hides these differences, but sometimes issues can occur.

Render Texture coordinates

Vertical Texture coordinate conventions differ between Direct3D-like and OpenGL-like platforms:

  • In Direct3D, Metal and consoles, the coordinate is 0 at the top, and increases downwards.
  • In OpenGL and OpenGL ES, the coordinate is 0 at the bottom, and increases upwards.

Most of the time this does not really matter, except when rendering into a Render Texture. In this case, Unity internally flips rendering upside down when rendering into a Texture on non-OpenGL platforms, so that the conventions match between the platforms. Two common cases where this needs to be handled in the Shaders are Image Effects and rendering in UV space.

Image Effects, upside down RenderTextures, and MSAA

One case where rendering into a Texture does not happen upside down is when Image Effects and anti-aliasing are used. In this case, Unity renders to the screen to get anti-aliasing, and then resolves the rendering into a RenderTexture for further processing with an Image Effect. The resulting source Texture for an Image Effect is not flipped upside down on Direct3D/Metal (unlike all other Render Textures).

If your Image Effect is a simple one (processing one texture at a time) then this does not really matter, because Graphics.Blit takes care of that.
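For reference, here is a minimal sketch of such a single-texture effect in C# (the class and field names are illustrative):

using UnityEngine;

public class SimpleImageEffect : MonoBehaviour
{
    public Material material; // material whose shader reads _MainTex

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Blit sets up _MainTex and handles any platform flip for you
        Graphics.Blit(source, destination, material);
    }
}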

However, if you’re processing more than one RenderTexture together in your Image Effect, they are likely to come out at different vertical orientations (only in Direct3D-like platforms, and only when anti-aliasing is used). You need to manually “flip” the screen Texture upside down in your Vertex Shader, like this:

// On non-GL when AA is used, the main Texture and scene depth Texture
// will come out in different vertical orientations.
// So flip sampling of the Texture when that is the case (main Texture
// texel size will have negative Y).

#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
    uv.y = 1 - uv.y;
#endif
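
For context, here is a minimal sketch of a complete image-effect vertex function built around this check. The v2f layout and which of the two UVs gets flipped are illustrative; appdata_img and UNITY_MATRIX_MVP come from the UnityCG include file:

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;       // samples the main (screen) texture
    float2 uvSecond : TEXCOORD1; // samples the second render texture
};

float4 _MainTex_TexelSize;

v2f vert (appdata_img v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = v.texcoord;
    o.uvSecond = v.texcoord;
    // flip one of the UVs so both textures line up vertically
    #if UNITY_UV_STARTS_AT_TOP
    if (_MainTex_TexelSize.y < 0)
        o.uvSecond.y = 1 - o.uvSecond.y;
    #endif
    return o;
}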

Refer to the Edge Detection Scene in the Shader Replacement sample project for an example of this. Edge detection there uses both the screen Texture and the Camera’s Depth+Normals texture.

A similar situation occurs when using GrabPass. The resulting render Texture might not actually be turned upside down on non-OpenGL platforms. Typically, Shader code that samples GrabPass Textures should use the ComputeGrabScreenPos function from the UnityCG include file.
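A minimal sketch of that pattern, assuming the default GrabPass texture name _GrabTexture and helpers from the UnityCG include file:

sampler2D _GrabTexture;

struct v2f
{
    float4 pos : SV_POSITION;
    float4 grabPos : TEXCOORD0;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    // ComputeGrabScreenPos applies the platform-specific flip internally
    o.grabPos = ComputeGrabScreenPos(o.pos);
    return o;
}

half4 frag (v2f i) : SV_Target
{
    return tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.grabPos));
}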

Rendering in UV space

When rendering in Texture coordinate (UV) space for special effects or tools, you might need to adjust your Shaders so that the rendering is consistent between D3D-like and OpenGL-like systems, and between rendering into the screen vs. rendering into a Texture. The built-in variable _ProjectionParams.x contains +1.0 when the projection has its normal orientation, and –1.0 when it has been flipped upside down. You can check this value in your Shaders if you need to behave differently in the two cases.

float4 vert(float2 uv : TEXCOORD0) : SV_POSITION
{
    float4 pos;
    pos.xy = uv;
    // we're rendering with upside-down flipped projection,
    // so flip the vertical UV coordinate too
    if (_ProjectionParams.x < 0)
        pos.y = 1 - pos.y;
    pos.z = 0;
    pos.w = 1;
    return pos;
}

Clip space coordinate differences

Just like with Texture coordinates, the clip space coordinates (also known as post-projection space coordinates) differ between Direct3D-like and OpenGL-like platforms:

  • In Direct3D, Metal and consoles, the clip space depth goes from 0.0 at the near plane, to +1.0 at the far plane.
  • In OpenGL and OpenGL ES, the clip space depth goes from –1.0 at the near plane, to +1.0 at the far plane.

Inside Shader code, you can use the UNITY_NEAR_CLIP_VALUE macro to get the near plane value for the current platform.
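
For example, a minimal sketch of a vertex function that places a full-screen quad exactly on the near plane (the input layout is an assumption for illustration):

float4 vert (float2 uv : TEXCOORD0) : SV_POSITION
{
    float4 pos;
    pos.xy = uv * 2 - 1;           // map [0,1] UVs to [-1,1] clip space
    pos.z = UNITY_NEAR_CLIP_VALUE; // 0.0 on D3D-like, -1.0 on GL-like platforms
    pos.w = 1;
    return pos;
}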

Inside script code, GL.GetGPUProjectionMatrix can be used to convert from Unity’s coordinate system (which follows OpenGL conventions) to what the platform expects.
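
A minimal C# sketch of that conversion (the _CustomVP shader property name is hypothetical):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class CustomProjectionUpload : MonoBehaviour
{
    public Material material;

    void OnPreRender()
    {
        Camera cam = GetComponent<Camera>();
        // 'true' = rendering into a RenderTexture, which affects
        // the vertical flip on D3D-like platforms
        Matrix4x4 proj = GL.GetGPUProjectionMatrix(cam.projectionMatrix, true);
        material.SetMatrix("_CustomVP", proj * cam.worldToCameraMatrix);
    }
}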

Precision of Shader computations

PC GPUs treat all floating point types (float, half and fixed) as the same, and do all calculations using full 32 bit precision. Many mobile GPUs do not do this, so make sure to test your Shaders on the target platform to avoid precision issues. See the data types and precision page for details.

Similarly, all samplers/textures declared in shader code default to low precision. This does not matter on PC GPUs, but on some mobile GPUs you might want to use sampler2D_half (declares a half-precision sampler), sampler2D_float (a full-precision sampler) and so on.
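
For example (the texture names are illustrative):

sampler2D_half _MainTex;             // half precision: enough for LDR color
sampler2D_float _CameraDepthTexture; // full precision: needed for depth values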

Const declarations in Shaders

In HLSL, const has much the same meaning as it does in C# and C++, in that the variable declared is read-only within its scope, but can be initialised in any way.

In GLSL, however, const means that the variable is effectively a compile time constant, and so must be initialised with compile time constants (either literal values, or calculations on other constants).

It is best to follow the GLSL semantics and only declare a variable as const when it is truly invariant, and avoid initialising a const variable with some other mutable values (for example, as a local variable in a function). This works in HLSL, and avoids confusing errors on only some platforms.
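
A minimal sketch of the distinction, assuming a v2f struct with a uv field and a _MainTex sampler:

// Fine on all platforms: a true compile-time constant.
static const float4 kWeights = float4(0.1, 0.2, 0.3, 0.4);

half4 frag (v2f i) : SV_Target
{
    // Avoid: "const half4 c = tex2D(_MainTex, i.uv);"
    // GLSL rejects const initialised from a runtime value.
    half4 c = tex2D(_MainTex, i.uv);
    return c * kWeights.x;
}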

Semantics used by Shaders

To get Shaders working on all platforms, some special Shader values should use these semantics:

  • Vertex Shader output (clip space) position: SV_POSITION. Shaders sometimes use the POSITION semantic for this, but that does not work on Sony PS4 or when tessellation is used.
  • Fragment Shader output color: SV_Target. Shaders sometimes use COLOR or COLOR0 for this, but again that does not work on PS4.

When rendering meshes as Points, make sure to output the PSIZE semantic from the Vertex Shader (for example, set it to 1). Some platforms, such as OpenGL ES or Metal, treat point size as “undefined” when it is not written to from the Shader.
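
A minimal sketch pulling these semantics together (appdata_base and UNITY_MATRIX_MVP come from the UnityCG include file):

struct v2f
{
    float4 pos : SV_POSITION; // portable clip-space position semantic
    float psize : PSIZE;      // point size, needed when rendering points
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.psize = 1; // explicit point size for OpenGL ES / Metal
    return o;
}

fixed4 frag () : SV_Target // portable color output semantic
{
    return fixed4(1, 1, 1, 1);
}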

See the Shader semantics page for more details.

Direct3D 9 / 11 Shader compiler is stricter about syntax

Direct3D platforms use Microsoft’s HLSL Shader compiler. The HLSL compiler is stricter than other compilers about various subtle Shader errors. For example, it doesn’t accept function output values that aren’t initialized properly.

The most common places you might run into this are:

  • A Surface Shader vertex modifier that has an “out” parameter. Make sure to initialize the output like this:
      void vert (inout appdata_full v, out Input o)
      {
          UNITY_INITIALIZE_OUTPUT(Input, o);
          // ...
      }
  • Partially initialized values. For example, a function returns float4, but the code only sets the .xyz values of it. Make sure to set all values, or change to float3 if you only need three values.
  • Using tex2D in the Vertex Shader. This is not valid, as UV derivatives don’t exist in the Vertex Shader. You need to sample an explicit mip level instead; for example, use tex2Dlod (tex, float4(uv,0,0)). You also need to add #pragma target 3.0, as tex2Dlod is a Shader Model 3.0 feature; see the sketch below.
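
For the last point, a minimal sketch of vertex-shader texture sampling done correctly (the displacement texture and output layout are illustrative):

#pragma target 3.0 // tex2Dlod is a Shader Model 3.0 feature

sampler2D _DispTex;

struct v2f
{
    float4 pos : SV_POSITION;
};

v2f vert (appdata_base v)
{
    v2f o;
    // Sample mip level 0 explicitly; plain tex2D would not compile here.
    float height = tex2Dlod(_DispTex, float4(v.texcoord.xy, 0, 0)).r;
    v.vertex.xyz += v.normal * height;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    return o;
}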

DirectX 11 HLSL syntax and Surface Shaders

Some parts of the Surface Shader compilation pipeline do not understand DX11-specific HLSL syntax. If you’re using HLSL features like StructuredBuffers, RWTextures and other non-DX9 syntax, wrap them in a DX11-only preprocessor macro:

#ifdef SHADER_API_D3D11
// DX11-specific code, for example
StructuredBuffer<float4> myColors;
RWTexture2D<float4> myRandomWriteTexture;
#endif

Using Shader framebuffer fetch

Some GPUs (most notably PowerVR based ones on iOS) allow you to do a form of programmable blending by providing current fragment color as input to the Fragment Shader (see EXT_shader_framebuffer_fetch).

It is possible to write Shaders in Unity that use the framebuffer fetch functionality. When writing an HLSL/Cg Fragment Shader, simply declare its color argument as inout. For example:

CGPROGRAM
// only compile Shader for platforms that can potentially
// do it (currently gles,gles3,metal)
#pragma only_renderers framebufferfetch

void frag (v2f i, inout half4 ocol : SV_Target)
{
    // ocol can be read (current framebuffer color)
    // and written into (will change color to that one)
    // ...
}   
ENDCG