There are two stereo rendering methods for Windows Holographic (the former name for Windows Mixed Reality) devices such as HoloLens: multi-pass and single-pass instanced.
Multi-pass rendering runs two complete render passes, one for each eye. This generates almost double the CPU workload of the single-pass instanced rendering method. However, multi-pass is the most backwards-compatible method and doesn’t require any shader changes.
Single-pass instanced rendering performs one render pass in which each draw call is replaced with an instanced draw call. This heavily decreases CPU utilization, and also slightly decreases GPU utilization due to cache coherency between the two draw calls. As a result, your app’s power consumption is much lower.
To enable this feature, open PlayerSettings (menu: Edit > Project Settings > Player). In PlayerSettings, navigate to Other Settings, check the Virtual Reality Supported checkbox, then select Single Pass Instanced (Fastest) from the Stereo Rendering Method dropdown.
Unity defaults to the slower Multi Pass (Slow) setting because your Project may contain custom shaders that don’t include the code required to support this feature.
Any non-built-in shaders need to be updated to work with instancing; see the GPU Instancing documentation for how to do this. You also need to make two changes in the last shader stage used before the fragment shader (Vertex/Hull/Domain/Geometry). First, add UNITY_VERTEX_OUTPUT_STEREO to the output struct. Second, add UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO() in the main function for that stage, after UNITY_SETUP_INSTANCE_ID() has been called.
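For example, a vertex stage that already supports instancing gains these two macro additions (a minimal sketch; appdata and v2f are the struct names used in the standard shader templates):

```hlsl
struct v2f
{
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
    UNITY_VERTEX_OUTPUT_STEREO    // added: per-eye stereo output data
};

v2f vert (appdata v)
{
    v2f o;
    UNITY_SETUP_INSTANCE_ID(v);                 // must be called first
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);   // added after the instance ID setup
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
}
```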
In the fragment shader, wrap the input texture declarations in the UNITY_DECLARE_SCREENSPACE_TEXTURE(tex) macro so that 2D texture arrays are properly declared. Next, add a call to UNITY_SETUP_INSTANCE_ID() at the beginning of the fragment shader. Finally, use the UNITY_SAMPLE_SCREENSPACE_TEXTURE() macro when sampling those textures. See HLSLSupport.cginc for more information on other similar macros for depth textures and screen space shadow maps.
Here’s a simple example that applies all of the previously mentioned changes to the template image effect:
struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct v2f
{
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
    UNITY_VERTEX_OUTPUT_STEREO
};

UNITY_DECLARE_SCREENSPACE_TEXTURE(_MainTex);

v2f vert (appdata v)
{
    v2f o;
    UNITY_SETUP_INSTANCE_ID(v);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    UNITY_SETUP_INSTANCE_ID(i);
    fixed4 col = UNITY_SAMPLE_SCREENSPACE_TEXTURE(_MainTex, i.uv);
    // just invert the colors
    col = 1 - col;
    return col;
}
Graphics.DrawProceduralIndirect() and CommandBuffer.DrawProceduralIndirect() get all of their arguments from a compute buffer, so you can’t easily increase the instance count. Instead, you must manually double the instance count contained in your compute buffers.
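One way to do the doubling on the GPU is a tiny compute kernel that modifies the args buffer in place. This is a hypothetical sketch, not part of the Unity API: it assumes the common indirect-args layout for indexed indirect draws, where the instance count is the second uint in the buffer.

```hlsl
#pragma kernel DoubleInstanceCount

// Hypothetical kernel: doubles the instance count stored in an indirect-args
// buffer before it is consumed by DrawProceduralIndirect().
// Assumed layout: [index count, instance count, start index, base vertex,
// start instance], so the instance count lives at index 1.
RWStructuredBuffer<uint> _IndirectArgs;

[numthreads(1, 1, 1)]
void DoubleInstanceCount(uint3 id : SV_DispatchThreadID)
{
    _IndirectArgs[1] *= 2;  // one instance per eye instead of one total
}
```

Dispatching this kernel once, after the buffer is filled but before the indirect draw, keeps the doubling on the GPU and avoids a CPU readback.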
See the Vertex and fragment shader examples page for more information on shader code.