
We are proud to announce that in 2018.1 creators can now capture stereoscopic 360 images and video in Unity. Whether you’re a VR developer who wants to make a 360 trailer to show off your experience or a director who wants to make an engaging cinematic short film, Unity’s new capture technology empowers you to share your immersive experience with an audience of millions on platforms such as YouTube, Within, Jaunt, Facebook 360, or Steam 360 Video. Download the beta version of Unity 2018.1 today to start capturing.

How to use this Feature

Our device-independent stereo 360 capture technique is based on Google’s Omni-directional Stereo (ODS) technology, using stereo cubemap rendering. We support rendering to stereo cubemaps natively in Unity’s graphics pipeline, both in the Editor and in the PC standalone player. After the stereo cubemaps are generated, we can convert them to stereo equirectangular maps, the projection format used by 360 video players.

Capturing a scene in the Editor or a standalone player is as simple as calling Camera.RenderToCubemap() once per eye:
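A minimal sketch (assuming cam is the capturing Camera and cubemapLeftEye/cubemapRightEye are RenderTextures whose dimension is set to Cube, as these names are not defined in the post) might look like this:

    // Render the scene into one cubemap per eye; 63 is the face mask selecting all six cube faces.
    cam.stereoSeparation = 0.064f; // eye separation (IPD) in meters
    cam.RenderToCubemap(cubemapLeftEye, 63, Camera.MonoOrStereoscopicEye.Left);
    cam.RenderToCubemap(cubemapRightEye, 63, Camera.MonoOrStereoscopicEye.Right);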

During capture of each eye, we enable a shader keyword that warps each vertex position in the scene according to a shader function, ODSOffset(), which performs the per-eye projection and offset.

Stereo 360 capture works in forward and deferred lighting pipelines, with screen space and cubemap shadows, skybox, MSAA, HDR and the new post processing stack. For more info, see our new stereo 360 capture API.

To convert cubemaps to stereo equirectangular maps, call RenderTexture.ConvertToEquirect():
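A minimal sketch (assuming equirect is an ordinary 2D RenderTexture large enough to hold both eyes, e.g. stacked top/bottom):

    // Project each eye's cubemap into its half of the stereo equirectangular texture.
    cubemapLeftEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Left);
    cubemapRightEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Right);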

Using Unity frame recorder, a sequence of these equirect images can be captured out as frames of a stereo 360 video. This video can then be posted on video websites that support 360 playback, or can be used inside your app using Unity’s 360 video playback introduced in 2017.3.
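If you prefer to write the frames out yourself instead of using the Frame Recorder, a minimal sketch (assuming the equirect render texture from above and a pre-allocated Texture2D named frameTexture of matching size, both names being assumptions) could read each frame back and save a numbered PNG, which you can later assemble into a video with an external encoder:

    // Read the equirect render texture back to the CPU and write it out as a numbered PNG.
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = equirect;
    frameTexture.ReadPixels(new Rect(0, 0, equirect.width, equirect.height), 0, 0);
    frameTexture.Apply();
    RenderTexture.active = previous;
    System.IO.File.WriteAllBytes(
        System.IO.Path.Combine(Application.persistentDataPath, "frame" + Time.frameCount.ToString("D5") + ".png"),
        frameTexture.EncodeToPNG());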

For the PC standalone player, you need to enable the “360 Stereo Capture” option in your build (see below) so that Unity generates the 360-capture-enabled shader variants, which are disabled by default in normal player builds.

In practice, most of the 360 capture work can be done on a PC in Editor/Play mode.

For VR applications, we recommend disabling VR in the Editor when capturing 360 stereo cubemaps (our stereo 360 capture method doesn’t require VR hardware). This will speed up capture without affecting the results.
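If your project has VR enabled, one way to toggle it off from script before capturing (a sketch; you can also simply untick Virtual Reality Supported in Player Settings) is:

    // Turn off VR rendering while capturing; stereo 360 capture does not need a headset.
    UnityEngine.XR.XRSettings.enabled = false;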

Technical Notes on Stereo 360 Capture

For those of you using your own shaders or implementing your own shadowing algorithms, here are some additional notes to help you integrate with Unity stereo 360 capture.

We added an optional shader keyword: STEREO_CUBEMAP_RENDER. When enabled, this keyword modifies UnityObjectToClipPos() to include the additional shader code that transforms positions with the ODSOffset() function (see the UnityShaderUtilities file in 2018.1). The keyword also lets the engine set up the proper stereo 360 capture rendering.

If you are implementing screen space shadows, there is an additional issue: the world space position reconstructed from the depth map (which already has the ODS offset applied) and the view ray is not the original world space position. This affects the shadow lookup in light space, which expects the true world position. The view ray is also based on the original camera, not on ODS space.

One way to solve this is to render the scene so that there is a one-to-one mapping between world positions and the screen space shadow map, writing the world positions (unmodified by the ODS offset) into a float texture. This map then supplies the true world positions for the shadow lookup in light space. You can also use a 16-bit float texture if you know the scene fits within 16-bit float precision, based on the scene center and world bounds.

We’d love to see what you’re creating. Share links to your 360 videos on Unity Connect or tweet with #madewithunity. Also, remember this feature is experimental. Please give us your feedback and engage with us on our 2018.1 beta forum.

22 Comments

  1. Mind that the second part of the code (https://docs.unity3d.com/2018.1/Documentation/ScriptReference/Camera.RenderToCubemap.html):

    “// Attach this script to an object that uses a Reflective shader.
    // Realtime reflective cubemaps!

    @script ExecuteInEditMode
    .
    .
    .
    function OnDisable () {
    DestroyImmediate (cam);
    DestroyImmediate (rtex);
    }”

    it’s not translated to C#. The same code also exists for JS. A technicality, but anyway…

  2. Is it possible to have an example project?

  3. Has anyone figured out how to save out the actual cubemap? When I try to save the raw cubemap to file, I only ever get a single cubemap side, but the equirect version still turns out fine, so the data should be hidden somewhere…?

    And is there any way to make the recording take the camera rotation into account?
    Rotating the entire scene around the camera would be a lot of effort and a drain on performance, and rotating the sphere that the video will play on would mean that I’d sometimes have the pole distortions right in the center of my view.

  4. I quickly built a sample out of this blog entry. Please note that the render textures for each eye have to have the dimension Cube; the equirect is a simple 2D render texture. To view the result, either click on a render texture and view it in the editor, or save it somewhere. :)

    using UnityEngine;

    public class RenderCubeMap : MonoBehaviour
    {
        public RenderTexture cubemapLeftEye;
        public RenderTexture cubemapRightEye;
        public RenderTexture equirect;
        public bool renderStereo = true;
        public float stereoSeparation = 0.064f;

        void LateUpdate()
        {
            // Find the camera to capture from: this object or a parent.
            Camera cam = GetComponent<Camera>();

            if (cam == null)
            {
                cam = GetComponentInParent<Camera>();
            }

            if (cam == null)
            {
                Debug.Log("stereo 360 capture node has no camera or parent camera");
                return;
            }

            if (renderStereo)
            {
                cam.stereoSeparation = stereoSeparation;
                cam.RenderToCubemap(cubemapLeftEye, 63, Camera.MonoOrStereoscopicEye.Left);
                cam.RenderToCubemap(cubemapRightEye, 63, Camera.MonoOrStereoscopicEye.Right);
            }
            else
            {
                cam.RenderToCubemap(cubemapLeftEye, 63, Camera.MonoOrStereoscopicEye.Mono);
            }

            // Optional: convert the cubemaps to a stereo equirectangular texture.
            if (equirect == null)
                return;

            cubemapLeftEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Left);
            cubemapRightEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Right);
        }
    }

  5. Since it was not mentioned, I guess this cubemap rendering is the traditional way. Any chances of optionally RenderToCubemap() using Google’s Equi-Angular Cubemap (EAC) for some extra quality boost? We lose pixels on cubemap, then again on equirectangular.
    https://blog.google/products/google-vr/bringing-pixels-front-and-center-vr-video/

  6. I would like to capture a Geo scene, then reapply the capture as a 360 degree stereo skybox in VR (on Gear and Rift)

    Can we get a follow up explaining this process?

  7. We need more of a breakdown. To non-coders this doesn’t make much sense… And where do these frames export to? Are we expected to know what kind of code to use to set up the file directory? Please make these breakdowns more friendly to the whole creative community, and not just developers.

    1. Hear, hear. I am a coder, and I appreciate being fairly spoon-fed exactly what it takes to do what they are saying.

      This code appears to create cube and equirect images per frame. I would venture that ImageConversion.EncodeToPNG() would help, at least with stills. Save them numbered sequentially, then assemble with FFmpeg? Is that the intent? Or expect updates from, say, AVPro on the Asset Store?

      1. Actually, I was able to save PNGs with that and System.IO.File.WriteAllBytes, and to AVI with AVPro Movie Capture, by adjusting the above script to:

        using UnityEngine;
        using System.IO;

        public class RenderCubeMap : MonoBehaviour
        {
            public RenderTexture cubemapLeftEye;
            public RenderTexture cubemapRightEye;
            public RenderTexture equirect;
            public bool renderStereo = true;
            public float stereoSeparation = 0.064f;
            public AVProMovieCaptureFromTexture _movieCapture;
            public bool captureToPNG;

            public Texture2D tempTex;

            private void Start()
            {
                // CPU-side texture used to read back the equirect result each frame.
                tempTex = new Texture2D(equirect.width, equirect.height);
                if (_movieCapture)
                {
                    _movieCapture.SetSourceTexture(tempTex);
                }
            }

            void LateUpdate()
            {
                Camera cam = GetComponent<Camera>();

                if (cam == null)
                {
                    cam = GetComponentInParent<Camera>();
                }

                if (cam == null)
                {
                    Debug.Log("stereo 360 capture node has no camera or parent camera");
                    return;
                }

                if (renderStereo)
                {
                    cam.stereoSeparation = stereoSeparation;
                    cam.RenderToCubemap(cubemapLeftEye, 63, Camera.MonoOrStereoscopicEye.Left);
                    cam.RenderToCubemap(cubemapRightEye, 63, Camera.MonoOrStereoscopicEye.Right);
                }
                else
                {
                    cam.RenderToCubemap(cubemapLeftEye, 63, Camera.MonoOrStereoscopicEye.Mono);
                }

                // Optional: convert cubemaps to equirect.
                if (equirect == null)
                    return;

                RenderTexture oldRT = RenderTexture.active;

                cubemapLeftEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Right); // THIS MUST BE A UNITY BUG
                cubemapRightEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Left);

                // Read the equirect result back to the CPU so it can be encoded or recorded.
                RenderTexture.active = equirect;
                tempTex.ReadPixels(new Rect(0, 0, equirect.width, equirect.height), 0, 0);
                tempTex.Apply();
                if (captureToPNG)
                {
                    byte[] bytes = ImageConversion.EncodeToPNG(tempTex);
                    File.WriteAllBytes("/f" + Time.frameCount + ".png", bytes);
                }
                RenderTexture.active = oldRT;
            }
        }

        1. Note, in the above script I set the execution order to be sooner than the movie recording script, so that the texture would be set and ready for inclusion in the video.

          And note the line where I believe there is a BUG in Unity. I was expecting left=top but it was reversed. So I reversed it in script, and it turns out nice.

  8. You can use Google Cardboard/Daydream and the YouTube app to view it in stereo, or on GearVR use the Samsung Internet app from the Oculus store.

  9. Do you have a full, working code example for this? The snippet included above is not much to go on.

  10. Viktor Phoenix (Headspace Studio): Will it capture soundfield audio as well?

  11. Here’s an example stereo 360 capture video from our blog:
    https://youtu.be/K6uGXtPCjEw
    You can use Google Cardboard/Daydream and the YouTube app to view it in stereo, or on GearVR use the Samsung Internet app from the Oculus store. Be sure to view with the highest quality setting.

    1. Can you provide an example project?

  12. Any 360 stereo video renders available on YouTube? Check here for other 360 examples: https://www.youtube.com/watch?v=Qh5K2z51r9U

  13. Is there an example video anywhere I can watch? Pre-rendered stereo 360 video sounds incredible.

  14. unitypluscryengine: Please add a time-of-day and weather system, road tool, river tool, etc. to Unity 2018 or 2019. We need these features in Unity. Thanks a lot for your hard work, you are the best.

    1. You know you can find all that and more on the Asset Store

  15. Wow, this will be great for matte painting! Currently I could only make these kinds of textures from the stuff I was raytracing implicitly. This is going to be great for me!

  16. We publish content for planetariums. The way we’ve had to do it is a little hacky – by using 5 stitched fisheye cameras and outputting the video stream using Spout. This has a lot of limitations, like performance and needing a Spout-receiving application to pass the video to the display. Could this be used to enable one camera to output 180° live video directly to the display?