
More stories from the adventures of an EMEA field engineer! Today, I wanted to share the development of Pollen VR with you. Mindfield Games are using Unity 5’s new graphics features, and I have been able to follow the game’s development very closely.

I spoke with Ville Kivistö, CEO and Co-Founder of Mindfield Games, who also works as a programmer on the game.

What Unity 5 features are you utilising for Pollen VR to achieve such high visual quality?

The new physically based shading and real-time lighting are the definitive features of Unity 5, and we are using them to the fullest. We have material maps for basically every surface, and due to design restrictions, the environments need to be dynamically lit. With the new global illumination everything looks gorgeous, and combined with reflection probes, the surfaces really come alive. Having a 64-bit editor is also crucial, as developing a high-end PC game can consume a huge amount of memory. We used to have tons of out-of-memory crashes with Unity 4, but those days are now long gone.


What techniques did you use for the foliage?

Foliage is a rather simple GPU particle effect. What makes it interesting, though, is how we got the lighting to work with it properly. As Graphics.DrawProcedural doesn’t integrate into Unity’s lighting passes and the new CommandBuffer API doesn’t support compute buffers, we had to come up with a somewhat funky solution.

We have a cube the size of the foliage bounds, so we know that whenever the cube is visible, the foliage needs to be rendered as well. Whenever the cube’s OnWillRenderObject() is called, we render the compute buffer particles to two render targets in a single pass with MRT, using the settings of the currently rendering camera. One texture holds diffuse and roughness data, and the other holds normal and depth. When we get to rendering the actual cube, the cube shader just takes those buffers as parameters and outputs the corresponding data. Depth is written manually, so we get perfectly Z-clipped output. The lighting of the leaves is affected by all lights and shadows, so they look the way we want them to.
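To make the flow concrete, here is a minimal sketch of how such a proxy-cube setup could look in Unity 5 C#. This is not Mindfield’s actual code; the class, material and texture names are illustrative assumptions, and the shaders that consume the buffers are omitted.

```csharp
using UnityEngine;

// Hypothetical sketch: when the bounding cube is about to be drawn,
// render the compute-buffer particles into two render targets (MRT),
// then hand those targets to the cube's own material.
public class FoliageProxy : MonoBehaviour
{
    public Material foliageMaterial;     // draws the GPU particles (assumed)
    public Material cubeMaterial;        // samples the MRTs, writes depth (assumed)
    public ComputeBuffer particleBuffer; // filled elsewhere by a compute shader
    public int particleCount;

    RenderTexture diffuseRoughRT;        // RGB = diffuse, A = roughness
    RenderTexture normalDepthRT;         // RGB = normal,  A = depth

    void OnWillRenderObject()
    {
        // Camera.current is the camera currently rendering this cube.
        Camera cam = Camera.current;
        EnsureTargets(cam.pixelWidth, cam.pixelHeight);

        // Bind both color targets at once (MRT) and draw procedurally.
        RenderBuffer[] colors = {
            diffuseRoughRT.colorBuffer, normalDepthRT.colorBuffer };
        Graphics.SetRenderTarget(colors, diffuseRoughRT.depthBuffer);
        GL.Clear(true, true, Color.clear);

        foliageMaterial.SetBuffer("_Particles", particleBuffer);
        foliageMaterial.SetPass(0);
        Graphics.DrawProcedural(MeshTopology.Points, particleCount);

        // The cube's shader reads these textures, outputs the matching
        // G-buffer data and writes depth manually, so the leaves get
        // correct Z clipping and full lighting/shadowing.
        cubeMaterial.SetTexture("_FoliageDiffuseRough", diffuseRoughRT);
        cubeMaterial.SetTexture("_FoliageNormalDepth", normalDepthRT);
    }

    void EnsureTargets(int w, int h)
    {
        if (diffuseRoughRT != null && diffuseRoughRT.width == w) return;
        diffuseRoughRT = new RenderTexture(w, h, 24, RenderTextureFormat.ARGBHalf);
        normalDepthRT  = new RenderTexture(w, h, 0,  RenderTextureFormat.ARGBHalf);
    }
}
```

The key trick is that OnWillRenderObject fires once per camera that sees the cube, so the off-screen particle pass automatically inherits each camera’s view settings.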

And because all the leaves are GPU particles, animating them is really cheap. They aren’t just a static mesh, but can react to the environment realistically (naturally with some limitations).


Any techniques or workflows you can share with Global Illumination?

For us it worked very well straight out of the box. The default values provide a good balance between bake times and quality. For day-to-day use we keep quite conservative values to enable quick baking, but our automatic build system scales up the values to provide even better quality GI for our nightly build.
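As an illustration, a nightly-build step like the one described could be a small editor script that raises the real-time GI resolution before baking. This is a sketch under assumptions, not Mindfield’s build system; the menu name and resolution value are invented, and a real pipeline might instead swap LightmapParameters assets.

```csharp
using UnityEditor;
using UnityEngine;

// Hypothetical nightly-build hook: bump Enlighten quality, then bake.
public static class NightlyGIBake
{
    [MenuItem("Build/Nightly GI Bake")]
    public static void Bake()
    {
        // Higher realtime GI resolution (texels per unit) than the
        // conservative day-to-day value; 4 is an illustrative number.
        LightmapEditorSettings.realtimeResolution = 4f;
        Lightmapping.BakeAsync();
    }
}
```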

Are you using a mixture of realtime and baked lighting?

Our game design requires that all lighting is realtime. We don’t have a single static lightmap in the game.

In Unity 4, we used to work around that restriction by placing dozens of ambient point lights around the scene, trying to fake global illumination that way. In the end it was horrible, because maintaining and designing them was very tedious and time-consuming. It was also pretty much impossible to keep all the lights from leaking through walls.

The number of draw calls and the fillrate requirements caused by so many lights were also getting quite high, which in turn hampered the framerate. When we got our hands on the Unity 5 beta, we just enabled Enlighten and removed all the fake GI point lights; everything looked better, ran faster, and GI worked perfectly in real time when disabling or animating lights.

We had also built a custom cubemap reflection system, with box projection and everything. It worked rather well, but using it required dozens of custom shaders, and the editor side was constantly missing features. We are very happy that the new reflection probes have basically replaced our own system. They require less maintenance and the workflow is much simpler.


How did you achieve the glow effect in the corridors?

We have a few different volumetric effects that we use throughout the game. For spotlights we like to use Robert Cupisz’s implementation, which gives very nice, well-performing volumetrics. As spotlights are very versatile, it’s easy to use them for height-based fog as well.

In some parts of the game you might see fluid volumetrics, for which we use Fluidity from the Asset Store. We use it at all scales, from a lighter’s flame all the way to gases filling a room. It looks awesome, and physics simulations always look yummy.

For outdoors, we use our own custom post-process volumetric fog solution, as we want the player to feel the density of Titan’s atmosphere.


What tools do you use to generate your PBR Textures?

Adobe Photoshop and Quixel Suite along with some reference materials. We have our own “playground” scene where artists can inspect models in various lighting conditions.

Have you made modifications to Unity’s new Standard Shader? If so, any tips’n’tricks?

As our base is not tied to a single point in time and space, we need to be able to render it differently. As we’re a small indie studio, we don’t have the resources to start producing multiple versions of assets that look different based on how old they are. Our solution was to make an extended version of the new Standard Shader that adds “grittiness” to materials. Best of all, it works for all objects, and with the new material pipeline we can have custom grittiness based on material type. With just one shader and a few lines of code we can change the look and feel completely.

Currently, Unity doesn’t support code injection into the Standard Shader. Our solution is simply to copy the built-in shader and keep our code in include files, so we only have to write a couple of #includes in strategic places whenever the Standard Shader changes.


How did you deal with anti-aliasing?

As MSAA doesn’t cope that well with deferred rendering, we have to rely on post-processing solutions. The one we have chosen is SMAA, as it provides a nice, clean resolve with good performance. Even though it lacks subpixel and temporal anti-aliasing, the final result is good enough even for the Oculus Rift.

Unity 5’s new graphics features are high-end; how has performance been?

Unity 5’s new MRT-based deferred path increased the framerate by about 30% compared to the legacy two-pass deferred path. Our draw calls can climb quite high, and skipping one pass helps a lot to keep those calls down. As virtual reality is very important to us (we recommend playing Pollen with the Oculus Rift), it’s crucial that the framerate be as high as possible. Therefore, we provide as many options as possible for tweaking the visuals to match players’ hardware and framerate requirements. At maximum settings Pollen can really turn on those GPU fans, but you can scale the options down a bit and play on an older GPU with the Oculus Rift if you like.


Any advice on optimisation for such a large scene?

Unity’s occlusion culling is very efficient and handles most of the things we want. However, occlusion culling deals only with rendering, and only in play mode.

As we have spent a lot of time making everything behave physically correctly, be it books, a basketball, a microwave or a punching bag, we have a huge number of physics objects in the game. Unfortunately, Umbra doesn’t help us much with physics, so we had to write our own custom portal/culling system. Because the rooms of our moon station are efficiently separated by safety doors, we’ve been able to write a simple portal system based on those doors. We simply disable all the rooms the player can’t see. This helps with physics, and even with Umbra, as it has less culling work to do. In the editor, we can also easily activate just a single room to keep draw calls and poly counts low and the editor responsive.
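A door-based portal system like this can be sketched as a flood fill through open doors, disabling every unreachable room. This is a minimal illustration of the idea described above, not the game’s actual code; the class and field names, and the bounds test, are all assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch: each safety door connects two rooms. Every frame
// we disable all rooms, then re-enable the player's room and anything
// reachable through open doors. Disabling a room's hierarchy culls both
// its renderers and its physics objects in one go.
public class RoomPortalSystem : MonoBehaviour
{
    [System.Serializable]
    public class Door
    {
        public GameObject roomA;
        public GameObject roomB;
        public bool isOpen;
    }

    public Door[] doors;
    public Transform player;

    void LateUpdate()
    {
        GameObject current = FindRoomContaining(player.position);

        // Start with every room off, then flood-fill from the player.
        foreach (var door in doors)
        {
            door.roomA.SetActive(false);
            door.roomB.SetActive(false);
        }
        if (current != null) Activate(current);
    }

    void Activate(GameObject room)
    {
        if (room.activeSelf) return; // already visited
        room.SetActive(true);
        foreach (var door in doors)
        {
            if (!door.isOpen) continue;
            if (door.roomA == room) Activate(door.roomB);
            if (door.roomB == room) Activate(door.roomA);
        }
    }

    GameObject FindRoomContaining(Vector3 pos)
    {
        // e.g. test pos against each room's bounds; stubbed out here.
        return null;
    }
}
```

Because the graph only ever crosses open doors, a closed safety door cleanly partitions the station, which is exactly what makes the approach cheap.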

Out of the box, Unity 5 handled everything else for us; we didn’t need to do any additional optimising for our large scenes. Everything just worked!

Thanks to Ville for talking to me. I can’t wait to see this game released!

10 replies on “Pollen VR: Developing high-end visuals with Unity 5”

“Currently Unity doesn’t support code injection to Standard Shader”

I thought current betas of Unity 5 supported making your own surface shaders that use the PBR lighting model?

Not really. A surface shader is basically just an empty shader that takes lighting into account. Unity’s Standard Shader has much more functionality, like optional features/texture inputs that are set dynamically depending on what parameters the user has set.

Right now our “grittiness” shader is just a Standard Shader with a few lines of additional code, so we can easily replace all surfaces with our gritty shader. If we had a surface shader instead, we’d need to make as many versions of it as we have variations of the Standard Shader in the scene (diffuse, diff+normal, diff+height+ao, …), and that’s definitely something we don’t want to do.

How many people are working on such a project? Any problems with scene merging? I was so glad to read about that problem yesterday.

Our development team is nine strong. We’re small enough that we can “reserve” the scene when we need to place things into it. Things are mostly prefabs, so we usually don’t get that many conflicts, as people edit prefabs instead of the scene file. I’m really glad about scene merging though :)

Is there any Unity user who could make a game like Dead Frontier, with a different style of zombies and more varied maps? DX

Some nice-looking work there. It’s good to see U5 getting its legs stretched.

I’m now thinking about dropping LightShafts into a project I’m working on.
Did you have to do any work to get it playing nicely with U5?

Thanks :)

IIRC, LightShafts converted to U5 just fine automatically. We’ve made some additional modifications, like colored cookies and different hooking for cameras, but otherwise it’s pretty much the same as in the git repo.

This looks gorgeous! Can’t wait for the final game! Bookmarked!
And of course some very handy tips’n’tricks, thanks :)
