
As Unity engineer Lucas Meijer said during the Unite Europe keynote, the kinds of VR experiences and interaction models that will set standards in the near future have yet to be invented. That makes now a perfect time for fearless, visionary artists and developers to jump in and help define this emerging genre.

Brenden Gibbons is a Narrative Designer and student at NHTV, Breda. He jumped into VR filmmaking without a life vest when he made Dyskinetic, a VR live-action short film for the Oculus Rift that screened at the international film festival GoShort 2014 in Nijmegen. He came out of the experience with more problems than solutions, which is why he gave a talk about it at Unite Europe: in the hope that others in the audience would be inspired to solve some of the challenges he encountered.

The current paradigms don’t work

“There are so many things left to do that I can’t do, and it was such a fractured experience to make a VR film. I probably used about seven different types of software to pull my film together, including separate ‘housekeeping’ software to rename files in bulk, or to convert the footage into OGG files for Unity. The workflow I used to turn the footage into frames was especially clunky; we need better solutions.”
“VR is still a future piece of technology, at least regarding entertainment content. The ability to put something on your head and feel like you are somewhere else? That sounds like we’re in the future! But we’re using current paradigms to design for this future. We keep asking questions like ‘how do we port an FPS to VR?’ Or take the problem of presence: today we still find it difficult to design an app or make a film that simply uses this concept of presence in VR. The real challenge is in figuring out how to make pure, VR-only content.”

Brenden’s workflow to get his film footage into Unity

“Presence is the difference between knowing and feeling”

Paul Hannah comes from a different background than Brenden, but just like him, he’s trying out ways to develop VR content that feels real. He’s a Lead 3D Simulation Engineer at Airbus DS Newport in the UK. He spoke at Unite Europe about creating Heliskape, a VR helicopter simulator that lets the user experience flying over London from the cockpit of an Airbus EC135 helicopter.

“We thought to ourselves, how can we create a sense of wonder? The input mechanism plays a key role in that, in creating presence. For Heliskape, we looked at designing the right physical controls that move with your hands just as they would in real life if you were flying a helicopter. For example, a physical control to move the throttle forward can get you much closer to presence in the virtual world.”

He recounts how a student who tried out Heliskape experienced the sensation of vertigo even though she was sitting in a chair. “She felt as though she were flying high up in the sky; she was blown away by the experience, it was just brilliant.”

“I don’t think we are far off from being able to more widely develop true VR experiences. VR has moved faster than any other new input medium. The spatially aware controllers that Oculus and Valve are working on could be a big breakthrough.”

Innovating with Unity

As part of the Unite Europe keynote Lucas Meijer talked about the new features in Unity 5.1, including the out-of-the-box support for Oculus Rift and Gear VR. He said the way forward with VR is “to try many different ideas and see what sticks. Unity is the best development environment to experiment with VR, to fail fast and just try hundreds and hundreds of things on this search to find the experiences and interaction models that will work.”

At Airbus Paul and his team use Unity mainly to prototype. “It helps us to innovate quickly, to try out the ideas in our head, to see how it could work on multiple devices.” He says Unity 5 has greatly streamlined the artists’ workflow, especially with the Standard Shader and Real-time Global Illumination.

Brenden says that bringing his VR film together in Unity was “super simple. I used the footage as a texture feature in Unity 5 and just dragged the prefab in and it worked.”

Setting up Dyskinetic in Unity
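On desktop platforms, the video-as-texture setup Brenden describes typically went through Unity 5’s MovieTexture, which is why the footage had to be converted to OGG first. A minimal sketch, assuming a MovieTexture asset has been assigned in the Inspector (the class name and field are illustrative, not from Dyskinetic’s actual project):

```csharp
using UnityEngine;

// Plays imported film footage (OGG Theora) as a texture on this object's material.
// Assumes a MovieTexture asset is assigned to "footage" in the Inspector.
public class FootagePlayer : MonoBehaviour
{
    public MovieTexture footage;   // the imported .ogv clip

    void Start()
    {
        // Show the clip on this object's material.
        GetComponent<Renderer>().material.mainTexture = footage;
        footage.loop = true;
        footage.Play();

        // Play the clip's audio track in sync, if one exists
        // and an AudioSource is attached.
        var audio = GetComponent<AudioSource>();
        if (audio != null && footage.audioClip != null)
        {
            audio.clip = footage.audioClip;
            audio.Play();
        }
    }
}
```

Dragging a prefab with such a setup into the scene, as Brenden describes, is then all that’s needed for playback.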

Cross-industry exchange of knowledge is key to shaping a successful future for VR

Paul says in serious games “we have to embrace the real-world, whereas game developers often create their own fantastical world. Our approaches are different but we can and should learn from each other because VR is going to change games, films, how people buy real-estate and choose holidays; it’s going to change everything.”

“VR film is a real mixed-bag of disciplines,” says Brenden. “You can learn from interactive theatre about how to position your actors; learn from film about how to frame your shots; learn from game development for level design, for example, how to place your lights, how to use color, how to animate elements in the film all so you can force the viewer to look where you want them to look in your film. Be inspired by everything! It’s unexplored territory; we’re creating all-new styles of storytelling.”

For more info on VR development in Unity, watch the talks from the popular VR track at Unite Europe.

14 replies on “VR is a wild place. Time to plant your flag.”

We need proper video playback support for Unity/mobile dev.

There are a ton of video creatives getting into 360 video without much coding experience; Unity would make a lot of sense for those creatives to build 360 video VR apps.

VR is a puzzling thing for me. It’s generally not new – yes, it’s cheaper now, but 15 years ago, when I was working with SGI-based solutions for virtual soldiers and virtual environments, we were running good vision with 1024×768 per eye and some of the best tracking around (yes, it was very expensive). A huge amount of research came out of that era (especially in defense), and much of it is now being “re-visited” as if it never happened; many outcomes from those years have simply been forgotten, or maybe lost, I don’t know. In general, VR is _not_ applicable for all forms of interaction; in fact it effectively only works well for seated ‘cockpit’-styled applications. Interactive movies with VR have been around for just as long. OK, it’s a great toy, but the experience is only available _per_ device, so it’s a very expensive way to share media (which is what media is all about) among people.
My view: temper your ideals on VR. Look more toward AR, because this is truly new – it’s only in recent times that this has been achievable with acceptable graphical artifacts. And in the long run, VR can be emulated by AR simply by turning off the camera input; VR cannot become AR without additional hardware and software. AR is where I would put my money – it’s where all consumer products will eventually go. (imho)

Ivan Sutherland must be having a good laugh at VR as ‘new-tech’.
Apart from games – and a minority of dedicated enthusiasts – VR is a great aid for ‘serious’ endeavours such as architectural concept design development. However, I would agree that AR will be the way to go as long as the hardware is light, minimal and wireless. The consumer market won’t share developers’ enthusiasm for the current range of tethered VR headsets, so that hardware will need to evolve considerably prior to launch.

I think the price difference is the game changer. Touch screens were around long before the iPhone, but after Apple capitalized on them and made them mainstream, touch took off and now it’s in everything. I think that once VR becomes mainstream, there will be more opportunities for innovation, like social interactions, which is why Facebook bought their way in. Also, graphics capabilities are so much better now that VR can be a joy to use and really make you feel like you’re in an alternate universe. AR still seems very limited to me, because you’re limiting the software to fit your living room. And right now I think a dedicated VR device can do better VR than an AR device that can do VR. So I’m excited about the potential of VR right now. AR will see plenty of success as well.

I really hope Unity adds out-of-the box support for Cardboard as well! It has grown to make SO much more sense than Gear VR.

Google Cardboard is very impressive, and works great with Unity and iPhone. (And Android, I’m sure.) I had to add my own neck model, but the portability of an iPhone-based solution is outstanding, and the detail/speed I’m able to achieve has been surprisingly good. Tracking is rock-solid. I’ll release a free demo app in a few weeks, but there are a few demos out already. (Official support for iOS is new this year.)

Amazon is loaded with Cardboard-compatible viewers, made of plastic, with a strap – NOT actually cardboard, but still very cheap. And an iPhone 6+ costs $750, making the whole rig maybe $800 or less. (But Cardboard V2 is different, and some headsets have a narrower FOV, so shop carefully. I still haven’t settled on one to recommend. But I built mine from Legos.)

If you like Cardboard, you’ll probably like the Wearality Sky too.
It’s a phone-holder headset as well, but with really advanced Fresnel lenses that claim a 150-degree FOV!
Although this claim seems to require a 6-inch phone screen, it blows my mind that we can have such a large FOV in a portable form factor.
Especially since the Cardboard (v1) FOV is so incredibly tiny…
Looking forward to receiving my Wearality Sky soon; it seems they sent out the Kickstarter rewards on the 20th.

I think one thing that’s still largely missing in VR is properly using the 2D monitor. First we had the “two-lens view”, which was distorted and really weird. Now we’re moving towards a more “normal” view, somewhat distorted or undistorted, still showing what the headset renders. But in many cases, watching that is actually quite confusing and sometimes even a little nauseating. For videos, and for people not currently wearing the headset (like in a social setting where you play together with a few friends, one using the headset, the others watching them play), that’s not really satisfying.

For Holodance, I’m currently using up to three cameras rendering to the 2D game window. The big one that fills the whole area is actually a third-person view, which I think is more appropriate in many cases; it does render the headset, which obviously isn’t visible to the player wearing it. Then there’s a small area on the bottom left rendering what the player with the headset sees (this is useful, but by making it smaller, it’s not as annoying as watching it fullscreen). And there’s another (optional) one on the bottom right that shows a “physical world” camera. This wouldn’t make sense in social settings but is useful for videos. In this specific game, those views are also available in VR, kind of as screens that players can look at (especially the physical-world camera, which can sometimes be really useful to make sure you don’t run into stuff ;-) ).
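A layout like this maps directly onto Unity’s normalized camera viewports. A minimal sketch, assuming three cameras assigned in the Inspector (the class and field names are illustrative, not from Holodance):

```csharp
using UnityEngine;

// Lays out three cameras on the 2D game window: a full-screen
// third-person view plus two small inset views in the bottom corners.
public class SpectatorLayout : MonoBehaviour
{
    public Camera thirdPersonCamera;   // fills the whole window
    public Camera headsetViewCamera;   // what the headset player sees
    public Camera physicalWorldCamera; // optional real-world feed

    void Start()
    {
        // Viewport rects are normalized: (x, y, width, height) in 0..1.
        thirdPersonCamera.rect = new Rect(0f, 0f, 1f, 1f);
        thirdPersonCamera.depth = 0;

        // Small inset, bottom left: the headset's point of view.
        headsetViewCamera.rect = new Rect(0f, 0f, 0.25f, 0.25f);
        headsetViewCamera.depth = 1;   // drawn on top of the main view

        // Small inset, bottom right: the "physical world" camera.
        physicalWorldCamera.rect = new Rect(0.75f, 0f, 0.25f, 0.25f);
        physicalWorldCamera.depth = 1;
    }
}
```

Cameras with a higher `depth` are rendered later, so the insets draw over the full-screen view.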

Here’s a video recorded with this setup: Holodance Prototype – Gameplay 03.

Another interesting thing is how to handle combined 2D and VR GUIs – but that’s another topic ;-)

Ah, one final note about the video: I’ve already slowed down the turning on beats of the 3rd person camera because having it this fast/intense kind of is exactly what I wanted to avoid by not showing the headset image in the large screen area ;-)

I really like this setup !
What is the pipeline ?
What about performance ?

Can you explain more in-depth how to achieve this ?


Sure! Glad you like it!

I also have an improved video that specifically highlights this kind of setup. There’s endless possibilities, of course: Holodance Game View Demo. I might also add a video that shows how this is done in the editor. It’s not rocket science ;-)

The major key is that I’m using the SteamVR Unity Plugin. Unlike the native Unity VR integration, this uses a much more “Unity-like” approach where you add components to game objects to get a specific effect (instead of assuming that ticking a checkbox to activate VR will do anyone any good). So there’s SteamVR_Tracked, for example, which makes a game object tracked (you can then select which device is tracked, e.g. the HMD or a specific controller). Or SteamVR_Camera, which makes a camera render to the headset.

With SteamVR_Camera, there’s a button to expand / collapse the setup and when it’s expanded you have Origin, Head and Eye (I’ve already requested a mode where they only create Head and Eye which would be more compatible with the native VR integration that Unity comes with). The “Eye” camera is the one that gets rendered to the game view (I don’t have the project on this computer – might have also been Head but I think it was Eye), and you can set this up however you like. The only thing that didn’t work for me, yet, was to disable it, which I hope can be done somehow if you need maximum performance and no rendering to the 2D game window.

For the in-VR planes that are kind of like “screens” I’m using RenderTextures that I put into the materials of those planes. To avoid having to re-render everything too many times, I’m using those same render textures also for the cameras in the 2D game window. So that’s cameras that just render a plane which has a RenderTexture in its material.
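The render-texture sharing described here can be sketched roughly like this (the class and field names are illustrative; the resolution is an arbitrary example):

```csharp
using UnityEngine;

// Renders a feed camera into a single RenderTexture, then reuses that
// texture on an in-VR "screen" plane. A 2D-window inset camera can
// simply look at the plane, so the feed is only rendered once per frame.
public class ScreenFeed : MonoBehaviour
{
    public Camera feedCamera;    // e.g. the headset-view or physical-world camera
    public Renderer screenPlane; // the in-VR plane acting as a screen

    RenderTexture feed;

    void Start()
    {
        // Width, height, and depth-buffer bits for the shared texture.
        feed = new RenderTexture(1024, 576, 16);

        feedCamera.targetTexture = feed;          // camera renders into the texture
        screenPlane.material.mainTexture = feed;  // plane displays the same texture
    }
}
```

Because both the in-VR screen and any 2D inset view sample the same RenderTexture, adding more views of the feed costs almost nothing beyond drawing the plane itself.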

The performance very much depends on what you do, of course. The more cameras you have rendering more stuff, the longer it takes, that’s just simple math.

That’s something where Unity’s native VR integration really shines because there, rendering to the 2d game view has zero impact on the framerate (I was really surprised but they’re just copying the image from the headset onto the screen which is very fast). This is definitely something that should be available as an option but maybe with ways to control how much of the screen this actually consumes if you want to have additional cameras rendering to the 2D game window. Either way, I really want the full flexibility and even with the complex setup that even has a Kinect One as camera which is rendered to another render texture, it’s still 90+ most of the time (it’s not a particularly complex scene, though, so your mileage may vary).

Btw, to grab the video from the Kinect One, I’m using Kinect v2 with MS-SDK … but I guess any camera-to-RenderTexture package would do (I’m not really using any “Kinect features” ATM – its tracking is slow and not particularly precise).

Feel free to join the Holodance Vimeo Group – I’ll be posting regular updates there and it will also be a place to discuss.

One thing I’m quite excited about that will be coming during the next few days is a combined 2D/VR-GUI approach. I posted a bit on this already on the forums in this thread: Both Oculus and normal game 2-in-1.

Interesting approach man.
I was recently thinking along the same lines when watching people use the vive on youtube.
My reaction was that these videos should be recorded in front of a greenscreen, with the virtual world then keyed in. That would make such a video so much more engaging.
