In this blog series, we go over every aspect of the creation of our demo “Book of the Dead”. Today, we focus on photogrammetry assets, trees and VFX. This is the fourth blog in the series; take a look back at the last two blogs, which go through creating characters and concept art for “Book of the Dead”.
Hello, my name is Zdravko Pavlov and I am a CG and VFX artist with a background in VFX, video compositing, editing and graphic design. I’ve been working with Unity’s Demo team since 2014 and have contributed various particle, rigid-body dynamics and cloth simulations to the demos “Viking Village”, “The Blacksmith” and “Adam”.
The “Book of the Dead” demo was a little bit different: completely new territory for me, since my role on this project was to create various environment assets using a photogrammetry approach. Outdoor photography is my hobby, so I was more than happy to handle such a task. Creating trees? I mean, how hard can it be, right? In the following blog post I’ll try to describe everything I learned during the pre-production and development phases of the project.
The Photogrammetry workflow
Fortunately, at this point the Internet is full of valuable info on the process, so that’s where my learning began. What most articles will tell you is that you need a DSLR camera with a 50mm prime lens. I didn’t have one at my disposal at the time, so I decided to make my initial tests with my 24MP mirrorless Sony a7 II and a 16-35mm zoom lens instead. And let me tell you right away: it works just fine! The wider lens gives you more distortion, and while you can always fix that in Lightroom, for example, it is actually better if you don’t, because the photogrammetry software handles it gracefully. Prime lenses are more rigid and, in theory, should give you a sharper image. They are really great if you scan in a controlled studio environment, and I highly recommend them in such scenarios. Out in the field, however, being able to properly frame the desired object with a well-built zoom lens gives you an advantage.
I tried out most of the more popular photogrammetry software out there, and some of it worked quite well. I chose RealityCapture because of its significantly better performance and its ability to process a large number of photos without running out of RAM. The amount of detail it manages to reconstruct from the photos is amazing! I got models of up to 185 million triangles and successfully exported the geometry in PLY format.
That, of course, is more than enough, and also a little bit extreme. Most of my reconstructions ended up at roughly 50 to 90 million triangles. At first I was using a GeForce GTX 980 Ti, but later upgraded to a GTX 1080, which gave me a slight performance boost.
At some point I also upgraded my camera to the 42MP Sony a7R II with a Planar T* FE 50mm f/1.4 ZA lens. However, doubling the resolution and using the superior, super-sharp prime lens didn’t give me the “WOW” results I was expecting. For one thing, the longer (and narrower) prime lens means that you have to step back a few steps in order to get the image overlap you need for a successful reconstruction. That’s not always possible when you are in the middle of the forest, surrounded by all the other trees, shrubs and everything else. It also means that you have to manage, store and process twice as many gigabytes of image data. But that doesn’t necessarily lead to higher-definition scans. Having more images is what gets you there, and having them at 24MP is more manageable. That may sound obvious, but it didn’t occur to me until I actually tried it first-hand.
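To see why the 50mm prime forces you farther from the subject, a quick back-of-the-envelope sketch helps. The function names and numbers below are my own illustration, assuming a full-frame sensor roughly 36mm wide:

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=35.9):
    """Horizontal field of view for a full-frame sensor (~35.9mm wide)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def distance_for_coverage(focal_mm, coverage_m, sensor_width_mm=35.9):
    """Camera-to-subject distance needed to frame `coverage_m` metres of subject."""
    return coverage_m * focal_mm / sensor_width_mm

# The wide end of the zoom sees roughly 74 degrees; the 50mm prime only ~40,
# so framing the same 2m-wide section of trunk needs about twice the distance.
fov_24 = horizontal_fov_deg(24)
fov_50 = horizontal_fov_deg(50)
dist_24 = distance_for_coverage(24, 2.0)
dist_50 = distance_for_coverage(50, 2.0)
```

Double the working distance is exactly what you often don’t have between the trees, which matches the experience described above.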
As I mentioned, I used the PLY format to export the insanely dense geometry. I preferred it over FBX even though RealityCapture’s PLY exporter didn’t have scale and axis-orientation controls, so unlike the FBX files, the PLY files came out unscaled and rotated. I chose to deal with that because I was getting errors when baking textures from the FBX files. Also, binary FBX export was only implemented later.
Not a lot of software can handle that amount of polygons, so I just stored the file and used RealityCapture’s decimation features to make a low-poly version of the same model, usually around 1M triangles. That one can be opened in ZBrush, MeshLab or any other modeling package, where it can be retopologized and unwrapped. Depending on the model, I used different retopology techniques: often ZRemesher, sometimes by hand.
Then I used xNormal to bake textures. xNormal doesn’t seem bothered by hundreds of millions of triangles and handles them with ease. I baked the diffuse texture from the vertex color info; the vertex density of the high-poly mesh was more than enough to produce a clean, sharp texture without visible interpolation between vertices. I never used RealityCapture’s integrated unwrapping and texturing features.
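xNormal does the actual baking, but the core idea of turning vertex colors into a texture is simple enough to sketch: rasterize each UV-space triangle and interpolate the vertex colors with barycentric weights. This is a hypothetical, simplified illustration (one triangle, no dilation or anti-aliasing), not xNormal’s implementation:

```python
import numpy as np

def bake_vertex_colors(uvs, colors, size=64):
    """Rasterize one UV-mapped triangle, interpolating its vertex colors.

    uvs:    (3, 2) per-vertex UV coordinates in [0, 1]
    colors: (3, 3) per-vertex RGB colors
    Returns a (size, size, 3) float image; uncovered texels stay black.
    """
    img = np.zeros((size, size, 3))
    # Texel centers in UV space
    u = (np.arange(size) + 0.5) / size
    uu, vv = np.meshgrid(u, u)
    p = np.stack([uu.ravel(), vv.ravel()], axis=1)

    # Barycentric coordinates of every texel with respect to the triangle
    a, b, c = uvs
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2

    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
    weights = np.stack([w0, w1, w2], axis=1)   # (N, 3)
    texels = weights @ colors                  # interpolated RGB per texel
    img.reshape(-1, 3)[inside] = texels[inside]
    return img

# Example: a triangle with pure red, green and blue corners
uvs = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.9]])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
tex = bake_vertex_colors(uvs, colors)
```

With scan-density meshes there are several high-poly vertices per texel, which is why the result looks like a photo texture rather than a smeared gradient.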
That being said, if for some reason your dense cloud is not dense enough, or there are some areas missing (like in the image below), projecting a texture from your photos can bring additional detail to those areas.
What most photogrammetry tutorials will teach you is that it is best to avoid direct, harsh lighting and shadows when scanning an object. If it is a small rock you are about to capture, you can bring it into the shade or even into the studio and use softboxes and turntables. You can’t really do that with trees, though, so I watched the forecast and hoped for cloudy weather. However, even in overcast conditions there is still some shadowing and ambient occlusion. This is solved with Unity’s de-lighting tool: all it takes is a normal map, a bent normal map and baked AO, and it keeps the diffuse values intact while removing the shadows.
The resulting assets were then imported into Unity to test the dynamic lighting and shaders.
There are times when it is just not possible to capture every single part of your model. Sometimes there’s an obstacle and you can’t get all the angles; other times you are in a hurry or your battery is dying, and you miss something without realizing it until you get home and start processing the data. I made a lot of mistakes like that, but I was able to salvage some of my work by using Substance Painter’s clone stamp to fix the missing data.
The actual game assets
For most of the duration of the Book of the Dead production, the Demo team didn’t have an environment artist on staff, and we were looking to find one. Some work was contracted out to an external environment artist, Tihomir Nyagolov, who did the initial explorations and white-boxed the environment, but the main load of the work fell on the Creative and Art Director, Veselin Efremov, and myself. Each of us would go out to our nearby forests to capture photogrammetry data, and the work naturally transitioned into producing the final game assets that were needed. I don’t have a background in environment art, and I had zero experience with game optimizations, LODs and so on. At that point there were already some placeholder trees created by Tihomir with the help of GrowFX, so I took over from there, learning as I went.
GrowFX proved to be a really powerful and versatile tool for creating all kinds of vegetation. It interacts with other objects in your scene, so you can achieve all kinds of unique and natural-looking results. It isn’t exactly built with game asset creation in mind, but it is controllable enough and can be used for the task. It is a 3ds Max plugin, and as a 3ds Max user of 20+ years I really feel at home there. Unfortunately, GrowFX relies on some outdated 3ds Max components, like the curve-editing dialogs, which aren’t very convenient, but it was still a good tool for the task at hand, so I just had to deal with it.
The forest in Book of the Dead was intended to be primarily coniferous. There are some beautiful forests and parks near my home, so I went on a “hunt” and scanned some of those. Then I proceeded to stitch my GrowFX creations onto the scanned models. The final tree trunk was composed of scanned geometry with a unique texture for the lower part, stitched to a procedurally generated trunk with a tileable texture for the rest of it, all the way to the top.
A small patch of the bottom was clone-stamped to the top of the texture to make it tileable.
It is one thing to do photogrammetry on rocks and tree trunks, but scanning pine needles is a whole new deal. This is where Quixel stepped in and provided us with their beautifully scanned atlases. They collaborated with the Demo team and created numerous small assets (grass, shrubs, debris, etc.) specially for “Book of the Dead”.
As I mentioned in the beginning, my background is in CG productions, and I’ve made large forests before using MultiScatter or Forest Pack Pro and rendering in V-Ray. For such tasks you can use the Quixel Megascans atlases as they are, but for a real-time project like Book of the Dead we needed to do some optimization. This included building larger elements (branches, treetops, etc.), arranging them into new textures, and transferring the initial scanned data for the normal maps, displacement, transmission and so on.
The existing Megascans normal data was slightly modified to give a fake overall volume impression.
I used different normal-editing techniques, such as Normal Thief and other custom-built 3ds Max scripts, to blend the branches with the trunk.
Altering the vertex normals so that they can blend with the trunk
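The “normal thief” idea itself is straightforward: for each branch vertex, look up the nearest point on the trunk and bend the vertex normal toward the trunk’s normal, so the foliage shades as one continuous volume. A minimal sketch, with names and the distance falloff being my own assumptions (real tools typically sample the nearest surface point rather than the nearest vertex):

```python
import numpy as np

def steal_normals(branch_verts, branch_normals, trunk_verts, trunk_normals,
                  blend_radius=0.5):
    """Blend each branch vertex normal toward the nearest trunk vertex normal.

    Vertices close to the trunk take its normal (hiding the seam); farther
    ones keep their own. Inputs are (N, 3) / (M, 3) float arrays.
    """
    out = np.empty_like(branch_normals)
    for i, v in enumerate(branch_verts):
        d = np.linalg.norm(trunk_verts - v, axis=1)
        j = d.argmin()
        # t is 1 right at the trunk and falls to 0 beyond blend_radius
        t = np.clip(1.0 - d[j] / blend_radius, 0.0, 1.0)
        n = (1.0 - t) * branch_normals[i] + t * trunk_normals[j]
        out[i] = n / np.linalg.norm(n)
    return out

# Toy example: one trunk vertex, one branch vertex touching it, one far away
trunk_verts = np.array([[0.0, 0.0, 0.0]])
trunk_normals = np.array([[1.0, 0.0, 0.0]])
branch_verts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
branch_normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
blended = steal_normals(branch_verts, branch_normals, trunk_verts, trunk_normals)
```

The touching vertex inherits the trunk normal, while the distant one is left untouched, which is the effect shown in the image above.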
Using this approach I was able to produce different types of pine trees.
We wanted the forest to feel “alive” and the wind was a crucial element for us. The trees were set up for our vertex shader based wind animation solution by our Environment Artist Julien Heijmans.
There are many different ways of creating a vector field, and I looked into several options. Being familiar with Chaos Group’s fluid solver, PhoenixFD, I decided to see what kind of usable data I could get out of it and bring into Unity. I exported the scene geometry, brought it into 3ds Max as an FBX and ran some fluid through it that swirls around the vegetation and creates the turbulent wind effect. The bigger trees shielded the smaller vegetation, so the effect was less prominent there.
I looped the simulated sequence using the integrated PhoenixFD playback controls.
The vector information was then read through a PhoenixFD Texmap, normalized and plugged in as a diffuse texture over the procedurally created isosurface.
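Since a texture can only store values in the 0-1 range, “normalized” here means remapping the signed velocity vectors into that range so they can be written out as RGB. A small sketch of the usual mapping (my own illustration, not PhoenixFD’s code):

```python
import numpy as np

def pack_velocity(vel, max_speed):
    """Map a velocity field from [-max_speed, max_speed] into RGB [0, 1].

    A zero vector becomes mid-grey (0.5, 0.5, 0.5); the shader reading the
    texture reverses this with v = (rgb - 0.5) * 2 * max_speed.
    """
    return np.clip(vel / (2.0 * max_speed) + 0.5, 0.0, 1.0)

def unpack_velocity(rgb, max_speed):
    """Inverse mapping, as a shader sampling the texture would do it."""
    return (rgb - 0.5) * 2.0 * max_speed
```

Any velocity exceeding `max_speed` gets clipped, so the chosen range has to cover the strongest gusts in the simulation.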
The rendered image sequence was then imported back into Unity, where the final texture atlas was assembled. I used to do this in After Effects, but Unity now has a very convenient Image Sequencer tool that can do it pretty much automatically. It is one of the new VFX tools being developed by Unity’s GFX team in Paris.
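The atlas assembly itself is just tiling the frames into a grid, row by row. A rough sketch of what such a tool does (my own simplified version, padding any leftover cells with black):

```python
import numpy as np

def build_flipbook(frames, cols):
    """Tile a list of equally sized (H, W, C) frames into one atlas, row-major.

    Unfilled cells at the end of the last row are left black.
    """
    h, w, c = frames[0].shape
    rows = -(-len(frames) // cols)  # ceiling division
    atlas = np.zeros((rows * h, cols * w, c), dtype=frames[0].dtype)
    for i, f in enumerate(frames):
        r, col = divmod(i, cols)
        atlas[r * h:(r + 1) * h, col * w:(col + 1) * w] = f
    return atlas
```

At playback time, the shader converts the current frame index back into a UV offset with the same `divmod` logic.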
The created texture atlas was placed in the scene. I made a simple box to define my simulation boundaries and used that as a position reference.
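The box works as a position reference because a world-space position only needs to be normalized against its bounds to become a 3D texture coordinate for the vector field. A tiny sketch of that mapping (my own illustration):

```python
import numpy as np

def world_to_uvw(pos, box_min, box_max):
    """Normalize a world-space position into [0, 1]^3 coordinates relative
    to the simulation bounding box, clamping anything outside the box."""
    return np.clip((pos - box_min) / (box_max - box_min), 0.0, 1.0)
```

Anything outside the box clamps to the edge of the field, so vegetation beyond the simulated region simply samples the boundary values.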
To be clear, this was an experiment that allowed us to push the visuals of some of the shots in the cinematic teaser we showed. It’s a method I can recommend if you are using Unity for film production. It plugs into the main procedural vertex-shader-based wind animation solution, which was developed for the project by our Tech Lead Torbjorn Laedre and used in most of the scenes of the teaser, as well as in the console version of the project that we showed at GDC.
In an upcoming blog post, Julien and Torbjorn will explain more about how we handled the Wind and the final solution we adopted.
The Hive
I started to block out some of the ideas about the Hive early on.
After the initial design, I started building various game-ready elements in order to assemble the final assets in Unity.
For the crowd of screwies, I did some exploration of body variations. Again, I used Chaos Group’s PhoenixFD and ran a fluid smoke simulation, then cut out the screwie shape and created an isosurface based on the fluid temperature.
Some shape exploration made with PhoenixFD
This method allowed us to quickly preview different shapes, and it was used as a general reference. The final screwie character model was created by Plamen (Paco) Tamnev, and you can read all about it in his incredibly detailed blog post.
The dripping sap effect
To achieve the dripping sap on the screwie’s face, I used PhoenixFD again. I started by making a little proof of concept to show what we could achieve with a dense, viscous liquid.
I was quite happy with the overall result and the fluid motion, so I proceeded with setting up the real model. The goal was to prevent the simulation from forming too many separated pieces and droplets.
That allowed me to take a single frame from the generated geometry sequence, retopologize it, make UVs and use WRAP3 to project it over the rest of the shapes in the sequence. As a result, I got a series of blend shapes that share the same topology.
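Because every shape in the series shares one topology, playing the sequence back reduces to interpolating vertex positions between consecutive shapes. A minimal sketch of that general blend-shape idea (my own illustration, not the production setup):

```python
import numpy as np

def sample_blendshape_sequence(shapes, t):
    """Linearly interpolate vertex positions between consecutive shapes.

    shapes: list of (N, 3) arrays sharing one topology (same vertex order)
    t:      playback time in [0, len(shapes) - 1]
    """
    i = int(np.clip(np.floor(t), 0, len(shapes) - 2))
    f = t - i
    return (1.0 - f) * shapes[i] + f * shapes[i + 1]
```

Since every array has the same vertex order, the blend is a straight per-vertex lerp with no correspondence search needed at runtime.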
I also tried running a sap simulation over some of the tree trunks.
We didn’t end up using those in the final project. However, I still find it a nice way to add some detail to scanned models.
Stay tuned for the next blog post in the series. We’ll be diving further into the environment art created for Book of the Dead with Julien Heijmans.
Meet us at Unite Berlin on June 19 to walk through the Book of the Dead environment on a console yourself, and attend Julien Heijmans’s presentation about Environment art in the demo. See the full schedule here.