
A fun game experience is something that players want to show off, record, and share. With VR, seeing what the player sees on a single, rectangular screen doesn’t always convey the entire feeling, which means spectators can often find the default ‘seeing through the player’s POV’ experience underwhelming. What I wanted to do was set up a simple starter system for how a spectator camera could work, and add a little more fun for those not in the VR experience themselves. Fortunately, there have been a few shipped games that successfully designed a good spectator view. The goal of this project was to come up with a spectator system that builds on those designs, is compact and portable, and can easily be integrated into your own projects.

Source Code

You can download the associated project here.
Requires Unity version 2017.2 or later.

Creating a Basic Spectator Camera

The first thing I need to do is create a second camera specifically for the spectator. I create it and place it facing my original camera. Then, in the Camera settings, I set the Target Eye to None (Main Display).
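The same Target Eye setting can also be applied from a script, since the inspector dropdown maps to `Camera.stereoTargetEye`. A minimal sketch (the component name is my own, not from the project):

```csharp
using UnityEngine;

// Hypothetical helper: marks the camera it sits on as a spectator camera.
public class SpectatorCameraSetup : MonoBehaviour
{
    void Awake()
    {
        var cam = GetComponent<Camera>();
        // None (Main Display): render to the game view instead of the HMD.
        cam.stereoTargetEye = StereoTargetEyeMask.None;
    }
}
```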

Run the project in the editor, and Unity’s Game view is already rendered independently of what the VR headset displays. It’s that easy! But don’t worry, there’s more fun we can have here.

Making a Player

If I point that spectator camera back at myself and hit play, I can’t see anything! I need to create an avatar to represent me in the world. I managed to make a nice little head-and-hands model using Unity’s built-in shapes, and can now link them up as a head and hands. I want these to move with my tracked devices in the real world. To link them up, we have a new component in 2017.2: the Tracked Pose Driver. Drop it onto a GameObject, set whether you want to use the HMD or a controller, and voilà, that GameObject will be updated every frame and can be used as an in-game proxy for any tracked part of your VR hardware. This makes it trivial to build a quick VR player rig.
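You can also wire up the Tracked Pose Drivers from code. Here is a sketch of a rig built that way, assuming `head`, `leftHand`, and `rightHand` GameObjects already hold the avatar models (those names are illustrative, not the project’s exact code):

```csharp
using UnityEngine;
using UnityEngine.SpatialTracking;

// Hypothetical rig builder: attaches a Tracked Pose Driver to each proxy
// GameObject so it follows the HMD or a controller.
public class SimpleVRRig : MonoBehaviour
{
    public GameObject head;
    public GameObject leftHand;
    public GameObject rightHand;

    void Start()
    {
        AddDriver(head, TrackedPoseDriver.TrackedPose.Head);
        AddDriver(leftHand, TrackedPoseDriver.TrackedPose.LeftPose);
        AddDriver(rightHand, TrackedPoseDriver.TrackedPose.RightPose);
    }

    static void AddDriver(GameObject go, TrackedPoseDriver.TrackedPose pose)
    {
        var driver = go.AddComponent<TrackedPoseDriver>();
        driver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice, pose);
    }
}
```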

Adding Camera Angles

My narcissistic itch satisfied, I now want a few more in-game angles. All I need is a few world locations and a small script, called the Spectator Controller, to iterate over those locations. The core of this script keeps track of the transform that the camera is currently attached to; in our sample, that is m_CurrentTransform. I want to be able to switch cameras both as a VR player and as a spectator, so I’ve linked that up to both the touchpad/stick clicks on the VR controllers and the spacebar on the keyboard. The second responsibility of the Spectator Controller is to enable and disable the indicator color and viewfinder of the currently active camera. I’ll opt to create a CameraAttachPoint MonoBehaviour to handle the elements that are specific to my high-tech camera and viewfinder.
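The cycling logic at the core of the Spectator Controller can be sketched like this (a simplified stand-in for the project’s script, keyboard input only; field names besides m_CurrentTransform are my own):

```csharp
using UnityEngine;

// Minimal sketch of the camera-cycling idea: iterate over a set of
// world locations and keep the spectator camera glued to the current one.
public class SpectatorControllerSketch : MonoBehaviour
{
    public Transform[] m_AttachPoints;   // preset world locations to cycle through
    Transform m_CurrentTransform;
    int m_Index;

    void Update()
    {
        // Spacebar here; the full version also listens for
        // touchpad/stick clicks on the VR controllers.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            m_Index = (m_Index + 1) % m_AttachPoints.Length;
            m_CurrentTransform = m_AttachPoints[m_Index];
        }

        if (m_CurrentTransform != null)
        {
            transform.position = m_CurrentTransform.position;
            transform.rotation = m_CurrentTransform.rotation;
        }
    }
}
```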

In-Game Spectator Camera Preview

Next up, I want to be able to see what the spectator sees while still in VR. I won’t know if I’m striking a good pose until I can see for myself, in real time. For this, I need a render target and an extra camera. If I render my spectator camera to a render target, I can then redirect the output to both a texture in the world and a camera directed towards the Main Display. This part just needs a few more assets, conveniently located in the Assets/RenderTarget folder. I also need a third camera. We now have three cameras: the VR camera, the spectator camera, and the spectator display, which takes the spectator camera’s render target and displays it to the user. I’ll opt to use a Canvas UI object here so that I can then add additional UI not visible to the VR player or any spectator render targets.
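The render-target plumbing boils down to two assignments. A minimal sketch, assuming a spectator camera, a RenderTexture asset, and an in-world quad acting as the viewfinder screen (names are mine, not the project’s):

```csharp
using UnityEngine;

// Hypothetical wiring for the in-game preview: the spectator camera
// draws into a RenderTexture, and a surface in the world displays it.
public class SpectatorPreviewSketch : MonoBehaviour
{
    public Camera spectatorCamera;
    public RenderTexture spectatorTarget;
    public Renderer viewfinderQuad;   // surface the VR player looks at

    void Start()
    {
        // The spectator camera renders into the texture...
        spectatorCamera.targetTexture = spectatorTarget;
        // ...and the in-world viewfinder shows that same texture.
        viewfinderQuad.material.mainTexture = spectatorTarget;
    }
}
```

The third camera (the spectator display) then simply renders that texture, on a quad or Canvas, to the Main Display.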

Interacting with the Cameras

That’s fun, but now that I can see myself dance, I don’t just want to iterate over preset angles, I want to be able to set my own. I want to be able to grab that camera and really show myself off. For that, I need to build a small component called the Grabber. It’s a simple system: when I press the trigger, I check for any physics objects in a small radius that are on a specific layer. While the trigger is held, I continue to update the position and rotation of any found objects to match that of the grabbing hand. Simple, but it gets the job done.
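The trigger-and-radius check described above can be sketched as follows. The input binding and names are assumptions; a real VR version would read the controller’s trigger rather than a mouse/keyboard axis:

```csharp
using UnityEngine;

// Sketch of the Grabber: while the trigger is held, the first grabbable
// object found within a small radius follows the hand's pose.
public class GrabberSketch : MonoBehaviour
{
    public float grabRadius = 0.1f;
    public LayerMask grabbableLayer;  // layer reserved for grabbable objects
    Transform m_Held;

    void Update()
    {
        bool triggerHeld = Input.GetButton("Fire1"); // stand-in for the VR trigger

        if (triggerHeld && m_Held == null)
        {
            // Look for physics objects on the grabbable layer near the hand.
            Collider[] hits = Physics.OverlapSphere(transform.position, grabRadius, grabbableLayer);
            if (hits.Length > 0)
                m_Held = hits[0].transform;
        }
        else if (!triggerHeld)
        {
            m_Held = null; // release on trigger up
        }

        if (m_Held != null)
        {
            // Match the grabbing hand's position and rotation.
            m_Held.position = transform.position;
            m_Held.rotation = transform.rotation;
        }
    }
}
```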

An important note about moving the camera: getting the camera tossed around like a small ragdoll can be disorienting to our spectators. If you don’t have your inner ear helping you out, it can be hard to understand jittery movement. For that purpose, all camera movements (Grabber and Spectator Controller behaviours) contain settings for smoothing. These smoothing values, which go from 0 (no smoothing) to 1 (stays at the original position indefinitely), will use linear interpolation between the original and desired camera location and orientation to smooth out any sudden movements. I’ve found 0.1 is generally enough, but it’s a personal preference and can depend on context, so adjust as needed.
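One way to implement that 0-to-1 smoothing setting is to lerp from the current pose toward the target each frame by a fraction of the remaining distance. A sketch (my own simplification of the idea, not the project’s exact code):

```csharp
using UnityEngine;

// Per-frame interpolation toward a target pose. smoothing = 0 snaps
// immediately; values near 1 leave the camera lagging far behind.
public class SmoothFollowSketch : MonoBehaviour
{
    public Transform target;
    [Range(0f, 1f)] public float smoothing = 0.1f;

    void LateUpdate()
    {
        float t = 1f - smoothing;
        transform.position = Vector3.Lerp(transform.position, target.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, target.rotation, t);
    }
}
```

Note that this simple form is frame-rate dependent; a more robust version would scale the interpolation factor by Time.deltaTime.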

Next Steps & Considerations

I’ve now got everything bundled up nicely: a series of toggleable spectator cameras that can be grabbed, posed with, and presented within the VR world itself. I still need a way to make sure users know what they can manipulate, without cluttering the spectator’s view. Since I’ve got separate cameras for the spectator and the player, it’s trivial to use the camera’s layer mask to create a player-only layer and place instructions there.
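The layer-mask trick amounts to one line per camera. A sketch, assuming a layer named "PlayerOnly" has been added in the project settings (that layer name is my own):

```csharp
using UnityEngine;

// Strip the assumed "PlayerOnly" layer from a spectator camera's culling
// mask so instructions on that layer only appear in the headset.
public class HidePlayerOnlyLayer : MonoBehaviour
{
    void Start()
    {
        int playerOnly = LayerMask.NameToLayer("PlayerOnly"); // hypothetical layer
        GetComponent<Camera>().cullingMask &= ~(1 << playerOnly);
    }
}
```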

It’s important to note that all these cameras get expensive. We draw the whole world twice and then re-render the spectator’s view a third time. Disabling both spectator cameras when not in use would be a useful addition. To do that, turn off both the Spectator Camera and Spectator View cameras and the system will fall back into the original ‘render from the player’s POV’ way of spectating.
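Disabling the cameras is just a matter of toggling their enabled flags; something like this hypothetical helper would do it:

```csharp
using UnityEngine;

// Turn both extra cameras off when no one is spectating; with them
// disabled, Unity falls back to mirroring the player's POV.
public class SpectatorToggle : MonoBehaviour
{
    public Camera spectatorCamera;
    public Camera spectatorDisplayCamera;

    public void SetSpectatorActive(bool active)
    {
        spectatorCamera.enabled = active;
        spectatorDisplayCamera.enabled = active;
    }
}
```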

And this is where I leave it up to you. There is a grabbable, movable spectator camera, with its own in-game viewfinder and a separate UI layer for both player and spectator. Take it apart, swap out the assets, change the camera switching behaviour and UI, and turn this project into your own. I’ve tried to keep it light and easy to dissect, with environment and visual assets easy to exclude, and there is a minimal amount of custom scripts. This would be an excellent place to start looking into Cinemachine to pick the right angles to maintain a good view of the action. A crafty developer could even add more to the spectator UI and inputs and design a new asymmetric style of gameplay where the spectator can be a real participant.

What would you like to see in a good VR spectator system?


  1. You’re going to run into trouble if your game uses secondary cameras to render depth from a certain perspective before your spectator camera renders the scene. If your main VR camera doesn’t clear depth before rendering, your spectator camera shouldn’t either, or it won’t work. You’ll need another camera rendering the depth from the same perspective as the final spectator camera to achieve the same effect as the main VR camera. Just a heads-up. (Source: I hit this problem doing something similar a year or so ago.)

  2. This is great! Is there any chance of lowering the draw call “expense”, perhaps by looking for common objects between the VR headset’s camera and the spectator camera?

  3. What I’d like to see is separate audio listeners for the player in VR and their spectators. As of right now, Unity doesn’t allow multiple listeners, but I think it’d open up some interesting possibilities for asymmetric gameplay. I don’t know if it’s possible for Unity to support it (I get the feeling the Windows Audio service doesn’t manage multiple output devices the way it’d need to), but if it is, I would love to see it.