Version: 2022.2

Cameras and depth textures

A Camera can generate a depth, depth+normals, or motion vector texture. This is a minimalistic G-buffer texture that can be used for post-processing effects or to implement custom lighting models.

These are mostly used by effects; for example, post-processing effects often use depth information.

Pixel values in the depth texture range between 0 and 1, with a non-linear distribution. Precision is usually 32 or 16 bits, depending on the configuration and platform used. Reading from the depth texture returns a high-precision value between 0 and 1. If you need the distance from the Camera, or an otherwise linear 0–1 value, compute it manually using the helper macros (see the example below).
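As an illustration, here is a minimal fragment-program sketch in Cg/HLSL that linearizes the sampled depth, assuming a full-screen image effect pass where i.uv carries screen-space UVs (the v2f struct and the grayscale output are illustrative, not part of Unity's API):

    #include "UnityCG.cginc"
    sampler2D_float _CameraDepthTexture;   // global texture, set by Unity

    struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    fixed4 frag (v2f i) : SV_Target
    {
        // Raw, non-linear depth in the 0..1 range
        float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        // Linear 0..1 depth: 0 at the camera, 1 at the far plane
        float linear01 = Linear01Depth(rawDepth);
        // Depth in world units from the camera (not used in the output below)
        float eyeDepth = LinearEyeDepth(rawDepth);
        return fixed4(linear01.xxx, 1);
    }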

Depth Textures are supported on most modern hardware and graphics APIs. Special requirements are listed below:

  • Direct3D 11+ (Windows), OpenGL 3+ (Mac/Linux), OpenGL ES 3.0+ (iOS), Metal (iOS), and popular consoles support depth textures.
  • OpenGL ES 2.0 (Android) requires GL_OES_depth_texture extension to be present.
  • WebGL requires the WEBGL_depth_texture extension.

The Camera’s depth texture mode can be enabled using the Camera.depthTextureMode variable from a script (see the example after the list below). It is also possible to build similar textures yourself, using the Shader Replacement feature.

There are three possible depth texture modes:

  • DepthTextureMode.Depth: a depth texture.
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.
  • DepthTextureMode.MotionVectors: per-pixel screen space motion of each screen texel for the current frame. Packed into a RG16 texture.

These are flags, so it is possible to specify any combination of the above textures.
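For example, a small script (the component name here is hypothetical) can request these textures from the camera it is attached to; because the values are flags, they are combined with |=:

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class RequestDepthTextures : MonoBehaviour  // hypothetical name
    {
        void OnEnable()
        {
            Camera cam = GetComponent<Camera>();
            // Request a depth texture and motion vectors; |= preserves
            // any modes that other effects have already requested.
            cam.depthTextureMode |= DepthTextureMode.Depth;
            cam.depthTextureMode |= DepthTextureMode.MotionVectors;
        }
    }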

DepthTextureMode.Depth texture

This builds a screen-sized depth texture.

The depth texture is rendered using the same shader passes as used for shadow caster rendering (the ShadowCaster pass type). So by extension, if a shader does not support shadow casting (i.e. there is no shadow caster pass in the shader or any of its fallbacks), then objects using that shader will not show up in the depth texture. To make them show up, do one of the following:

  • Make your shader fall back to some other shader that has a shadow casting pass, or
  • If you’re using surface shaders, add an addshadow directive to make them generate a shadow pass too (see the sketch after this list).
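A minimal surface shader sketch with the addshadow directive might look like this (the shader name and texture property are illustrative):

    Shader "Custom/DepthVisibleSurface"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            CGPROGRAM
            // addshadow generates a shadow caster pass, so objects using
            // this shader also show up in the camera's depth texture.
            #pragma surface surf Lambert addshadow
            sampler2D _MainTex;
            struct Input { float2 uv_MainTex; };
            void surf (Input IN, inout SurfaceOutput o)
            {
                o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            }
            ENDCG
        }
    }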

Note that only “opaque” objects (those that have their materials and shaders set up to use a render queue <= 2500) are rendered into the depth texture.

DepthTextureMode.DepthNormals texture

This builds a screen-sized 32 bit (8 bit/channel) texture, where view space normals are encoded into the R&G channels, and depth is encoded into the B&A channels. Normals are encoded using Stereographic projection, and depth is a 16 bit value packed into two 8 bit channels.

The UnityCG.cginc include file has a helper function, DecodeDepthNormal, to decode the depth and normal from the encoded pixel value. The returned depth is in the 0..1 range.
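A minimal fragment-program sketch (again assuming a full-screen pass with screen-space UVs in i.uv) that decodes and visualizes the normals:

    #include "UnityCG.cginc"
    sampler2D _CameraDepthNormalsTexture;  // global texture, set by Unity

    struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    fixed4 frag (v2f i) : SV_Target
    {
        float depth;     // decoded depth in the 0..1 range
        float3 normal;   // decoded view space normal
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth, normal);
        // Remap the normal from -1..1 to 0..1 for display
        return fixed4(normal * 0.5 + 0.5, 1);
    }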

For examples of how to use the depth and normals texture, refer to Replacing shaders at runtime or Ambient Occlusion in Post-processing and full-screen effects.

DepthTextureMode.MotionVectors texture

This builds a screen-sized RG16 (16-bit float/channel) texture, where screen space pixel motion is encoded into the R&G channels. The pixel motion is encoded in screen UV space.

When sampling from this texture, the motion from the encoded pixel is returned in the range –1..1. This is the UV offset from the last frame to the current frame.
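A minimal fragment-program sketch that reads the motion vector for a pixel (full-screen pass assumed; the x10 scale is only to make small motions visible):

    #include "UnityCG.cginc"
    sampler2D_half _CameraMotionVectorsTexture;  // global texture, set by Unity

    struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    fixed4 frag (v2f i) : SV_Target
    {
        // Screen UV offset from the last frame to the current frame (-1..1)
        half2 motion = tex2D(_CameraMotionVectorsTexture, i.uv).rg;
        // The previous frame's UV for this pixel would be i.uv - motion
        return fixed4(abs(motion) * 10, 0, 1);
    }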

Tips & Tricks

The Camera inspector indicates when a camera is rendering a depth or a depth+normals texture.

Because depth textures are requested from the Camera with a flag (Camera.depthTextureMode), the Camera might continue rendering them even after you disable an effect that needed them. If multiple effects on a Camera each need the depth texture, there is no good way to automatically disable depth texture rendering when you disable the individual effects.

When implementing complex Shaders or Image Effects, keep Rendering Differences Between Platforms in mind. In particular, using a depth texture in an Image Effect often needs special handling on Direct3D + Anti-Aliasing.

In some cases, the depth texture might come directly from the native Z buffer. If you see artifacts in your depth texture, make sure that the shaders that use it do not write into the Z buffer (use ZWrite Off, as in the sketch below).
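For instance, a transparent overlay shader that visualizes the depth texture would disable depth writes like this (the shader name is hypothetical; only the ZWrite Off state and the depth sampling matter here):

    Shader "Custom/DepthOverlay"
    {
        SubShader
        {
            Tags { "Queue"="Transparent" "RenderType"="Transparent" }
            Pass
            {
                ZWrite Off   // read the depth texture without writing to the Z buffer
                Blend SrcAlpha OneMinusSrcAlpha
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"
                sampler2D_float _CameraDepthTexture;
                struct v2f { float4 pos : SV_POSITION; float4 screenUV : TEXCOORD0; };
                v2f vert (float4 v : POSITION)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v);
                    o.screenUV = ComputeScreenPos(o.pos);
                    return o;
                }
                fixed4 frag (v2f i) : SV_Target
                {
                    float raw = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture,
                        UNITY_PROJ_COORD(i.screenUV));
                    return fixed4(Linear01Depth(raw).xxx, 0.5);
                }
                ENDCG
            }
        }
    }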

Shader variables

Depth textures are available for sampling in shaders as global shader properties. By declaring a sampler called _CameraDepthTexture, you can sample the main depth texture for the camera.

_CameraDepthTexture always refers to the camera’s primary depth texture. By contrast, you can use _LastCameraDepthTexture to refer to the last depth texture rendered by any camera. This could be useful for example if you render a half-resolution depth texture in script using a secondary camera and want to make it available to a post-process shader.

The motion vectors texture (when enabled) is available in shaders as a global shader property. By declaring a sampler called _CameraMotionVectorsTexture, you can sample the texture for the currently rendering Camera.

Under the hood

Depth textures can come directly from the actual depth buffer, or be rendered in a separate pass, depending on the rendering path used and the hardware. Typically when using the Deferred Shading rendering path, the depth textures come “for free” since they are a product of the G-buffer rendering anyway.

When the DepthNormals texture is rendered in a separate pass, this is done through Shader Replacement. Hence it is important to have the correct “RenderType” tag in your shaders (see the sketch below).
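For example, an opaque shader declares the tag in its SubShader block (the shader below is an illustrative minimal vertex/fragment shader, not one of Unity's built-ins):

    Shader "Custom/ReplaceableOpaque"
    {
        SubShader
        {
            // Shader Replacement matches on this tag, so it must describe
            // the shader correctly (e.g. Opaque, Transparent, TransparentCutout)
            Tags { "RenderType"="Opaque" }
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"
                float4 vert (float4 v : POSITION) : SV_POSITION
                {
                    return UnityObjectToClipPos(v);
                }
                fixed4 frag () : SV_Target { return fixed4(1, 1, 1, 1); }
                ENDCG
            }
        }
    }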

When enabled, the MotionVectors texture always comes from an extra render pass. Unity renders moving GameObjects into this buffer, and constructs their motion from the last frame to the current frame.
