Making of The Heretic: Digital Human Character Gawain

June 10, 2020 in Technology | 22 min. read

Gawain is the main character from The Heretic, the real-time short film made in Unity, written and directed by Veselin Efremov. This article covers the creation of the character and gives some insight into the different aspects of his production.

Casting and Production

We worked with a casting agency to choose the actor who would perform the role. This was the first digital role for actor Jake Fairbrother, who normally performs on stage in London theater productions.

The performance was captured on several separate occasions. We started with a body scan at 4D Max, together with a 3D scan of the face and a first batch of 4D performance at Infinite Realities’ studio outside London. We continued with capturing body performance at our mocap studio in Sofia, and later returned to Infinite Realities for additional 4D performance once we knew how much screen time we could viably produce with it. Voice performance was captured at SideUK studio in London.

Concept Art

The project started with some early concept explorations by Georgi Simeonov. He tried different styles based on his initial discussions with Director Veselin Efremov, with some elements that were essential to the story, like the briefcase for example, being present in almost all of the versions.

In the second stage, some of the ideas from the initial exploration were developed further and became more focused after Georgi and Veselin discussed what was working from the previous sketches. One thing that is interesting to note here is the subtle implementation of the medieval knight theme in the design of Gawain’s costume.

The final version of the concept sketch for Gawain. Some things changed as we moved along, but we tried to stay as close as possible to the original design.

Head

Screenshot of Gawain from the final film inside of Unity.

Paco: After we received the initial scan and the cleaned neutral pose of the face from Infinite Realities, we had a meeting with our animation director Krasimir Nechevski to figure out some of the technical details that needed to be cleared up before continuing with the outfit and animations: things like the UV layouts for the face, the different texture sets and how and where we split those, and where to split the head from the body. This last one was especially important, as the director Veselin made it clear from the beginning that he wanted to see as much of the neck and the area around it as possible in the closeups he was planning with the 4D capture of the actor’s performance.

We also had to be careful with the distribution of the texture sets because they had different resolutions. The body and legs, for example, had a much lower resolution compared to the face, mostly because we barely see them anywhere, but we chose to have them just in case. After all of that was decided, we transferred and tweaked the scanned data onto the new model and made some adjustments as we moved forward.

Eyes

The eyes went through a lot of polishing and tweaking to get to where we needed them to be. A lot of that creative guidance and drive came from the director, Vess, who served as a reality check on what could be improved.

The tech for the eyes was made by Lasse Pedersen with some help from Nicolas Brancaccio. The eyes used a single mesh for the cornea, iris and sclera, and the shader controlled many eye-related features directly inside Unity. We also had a mesh around the eyelids that controlled the smoothing of the normals between the eyeball and the eyelids to give us a softer transition; it also served as a tearline mesh.
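
As an illustration of that kind of shader-driven control, here is a minimal sketch of driving per-eye parameters from a script. The property names (_IrisRadius, _EyeAO) are hypothetical stand-ins rather than the actual properties of the Digital Human eye shader; using a MaterialPropertyBlock lets both eyes share one material while carrying different values.

```csharp
using UnityEngine;

// A minimal sketch: driving per-eye shader parameters without duplicating
// the material. The property names below are hypothetical stand-ins.
[ExecuteAlways]
public class EyeShaderController : MonoBehaviour
{
    [Range(0f, 1f)] public float irisRadius = 0.45f;
    [Range(0f, 1f)] public float eyeOcclusion = 0.3f;  // AO around the lids

    static readonly int IrisRadiusId   = Shader.PropertyToID("_IrisRadius");
    static readonly int EyeOcclusionId = Shader.PropertyToID("_EyeAO");

    MaterialPropertyBlock block;

    void OnValidate() { Apply(); }
    void OnEnable()   { Apply(); }

    void Apply()
    {
        if (block == null) block = new MaterialPropertyBlock();
        var rend = GetComponent<Renderer>();
        if (rend == null) return;

        // Per-renderer overrides keep both eyes on one shared material.
        rend.GetPropertyBlock(block);
        block.SetFloat(IrisRadiusId, irisRadius);
        block.SetFloat(EyeOcclusionId, eyeOcclusion);
        rend.SetPropertyBlock(block);
    }
}
```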

The mesh used to blur the normals and add wetness around the eyes.

An example of some of the controls that the shader gave us; in this case the AO of the eyes, which saved us from having to use a separate shadow mesh with a baked texture.

Teeth

For the teeth, Lasse Pedersen added an option to control the shadowing of the teeth inside the mouth, which really helped with the shading in the closeups.
A quick model and texture that I made based on references of the actor, since we didn’t have a scan for them.

The technology stack used to bring Gawain to life. The shaders and all the tools mentioned in this blog post can be found in the Digital Human package we released recently. If you want to learn more about the tech aspects of Gawain, stay tuned to this blog: we’re working on another article that goes more in-depth on the skin attachment system, the shaders, and other technical details.

Screenshots from Marmoset Toolbag 3 of the finished asset before bringing it into Unity. This was a quick way of testing the materials under different lighting conditions during the texturing part of the process.
The UV layout we used for the character. I tried to be as efficient as possible so that we could get as much detail as possible out of the 4K textures.

Now Krasimir Nechevski, our animation director, will explain in more depth the process behind the facial performance for the character of Gawain.

Facial performance

Krasimir: Building a digital human pipeline was one of the main goals of The Heretic and a major accomplishment for the team. We had avoided it in the past by making robots or nightmarish creatures, but it was time for us to give it a go. There are multiple challenges in achieving this: skin, hair, teeth and eye shading each come with a very different and difficult set of problems, but the hardest part of making a digital double, in my opinion, is reproducing the facial movement with all its subtleties. It is a well-known problem, and falling short usually leads to an awkward feeling in the viewer, a.k.a. the uncanny valley.

There are many ways of animating the face of a character: blendshape rigs, 4D (volumetric video), machine learning, simulation, all with varying pros and cons. We chose a somewhat unorthodox method, so here I will try to explain our reasoning and process chronologically. To sum it up, we decided to use 4D directly and add only the fine-detail wrinkle maps from a rig.

It is worth noting that lately machine learning approaches to processing 2D video have been very successful at achieving convincing results, and there are some examples that manage to produce incredible results by synthesizing facial performance in 3D. Based on this, it is safe to assume that ML will solve facial performance in the future. But an important aspect of ML is data, a lot of data. Acquiring clean 4D sample data is essential, so we can view 4D as a milestone on the way to fully synthesized facial performance with machine learning.

First we needed a proof of concept, so we decided to make a very short segment of facial animation, with the condition that if it failed we should be able to finish the movie without it. We started by doing the first capture session at a vendor, Infinite Realities, which has been developing a 4D capture system and achieving amazing results.

Even though the system produced some of the best results available at the time, there are challenges that come with using 4D. It is based on photogrammetry, and the method has imperfections that limit the quality: the skin surface can be occluded by hair or fall outside camera visibility, there is a certain amount of micro noise, reflective surfaces produce a lot of glitches, the head needs to be stabilized, and lastly there is no temporal coherence of the meshes between frames.

The raw mesh
Texture of a 4D capture

Above you can see what the raw, decimated data looks like, and how every frame of the volumetric video is made up of random triangles that are unique to it.

Luckily there is a solution for that: a piece of software called Wrap3D, developed by Russian3DScanner. This tool is usually used for creating coherent meshes for blendshape-based rigs. For most of our initial research we tried cleaning the data ourselves with Wrap3D. It works by utilizing a set of small dots on the actor’s face as markers to wrap the same mesh over all of the frames, thus achieving consistency between them. You start by wrapping the first frame, and then, with the help of the markers visible in the texture, you wrap the first frame onto the second frame and so on.

The markers on their own are not enough though, since placing them manually introduces quite a lot of error. To fix that, Wrap3D has a feature that uses optical flow: by analyzing the texture, it makes the match between consecutive frames pixel perfect. After projecting the textures for each frame, the result is a stream of meshes with the same topology. With that out of the way, we had to deal with the remaining imperfections, like noise, and replace damaged sections by transplanting them from healthy meshes. Lasse Pedersen, the lead programmer involved in the 4D processing, developed a set of tools for importing and working with the data inside Unity.
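
To give a sense of what such a stream of topology-consistent meshes looks like at runtime, here is a minimal playback sketch, assuming the per-frame vertex positions have already been imported into memory. This is not the actual tooling from the Digital Human package, just an illustration of the data layout: one shared set of triangles and UVs, and one vertex buffer per frame.

```csharp
using UnityEngine;

// Minimal sketch of 4D playback: topology (triangles, UVs) is shared across
// frames, so each frame is just a new set of vertex positions.
[RequireComponent(typeof(MeshFilter))]
public class FourDPlayback : MonoBehaviour
{
    public float framesPerSecond = 30f;

    // frames[f] holds the vertex positions of frame f; all frames have the
    // same vertex count and ordering, so triangles and UVs never change.
    public Vector3[][] frames;

    Mesh mesh;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        mesh.MarkDynamic(); // hint that vertices update every frame
    }

    void Update()
    {
        if (frames == null || frames.Length == 0) return;

        int frame = Mathf.FloorToInt(Time.time * framesPerSecond) % frames.Length;
        mesh.vertices = frames[frame];
        mesh.RecalculateNormals(); // fine for a sketch; real tools would stream normals too
        mesh.RecalculateBounds();
    }
}
```

Because the topology never changes, only the vertex positions need to be streamed, which is what makes the wrapped data so much easier to work with than the raw per-frame scans.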

Even though the result was great, it still lacked micro details, because the processing and noise removal somewhat smooth the surface and lose the pore-level details. We knew it could be pushed even further by adding fine details that are animated. To achieve this we used a FACS-based rig developed by SnappersTech, which had the same topology as our 4D data. Lasse developed a solver that managed to produce accurate activations of the wrinkle maps from the rig, adding this level of detail back. Here is an example from a later stage of our research.
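
Lasse’s solver is described in his own post and ships with the Digital Human package, so the snippet below is only a naive illustration of the underlying idea: project each 4D frame’s offset from the neutral pose onto the rig’s blendshape deltas, and use the resulting weights to drive the wrinkle-map masks. A real solver has to deal with overlapping, correlated shapes; this sketch fits each shape independently.

```csharp
using UnityEngine;

// Naive sketch of wrinkle-map activation: project the 4D frame's offset
// from the neutral pose onto each blendshape's delta independently.
// A production solver must account for overlapping/correlated shapes.
public static class WrinkleActivationSketch
{
    // neutral:     vertex positions of the neutral pose
    // frame:       vertex positions of the current 4D frame (same topology)
    // shapeDeltas: per-shape vertex offsets at full activation
    // Returns one activation weight in [0,1] per shape, usable as a
    // wrinkle-map mask strength in the skin shader.
    public static float[] Solve(Vector3[] neutral, Vector3[] frame, Vector3[][] shapeDeltas)
    {
        int vertexCount = neutral.Length;
        var frameDelta = new Vector3[vertexCount];
        for (int v = 0; v < vertexCount; v++)
            frameDelta[v] = frame[v] - neutral[v];

        var weights = new float[shapeDeltas.Length];
        for (int s = 0; s < shapeDeltas.Length; s++)
        {
            float num = 0f, den = 0f;
            for (int v = 0; v < vertexCount; v++)
            {
                num += Vector3.Dot(frameDelta[v], shapeDeltas[s][v]);
                den += Vector3.Dot(shapeDeltas[s][v], shapeDeltas[s][v]);
            }
            // Least-squares fit of w * shapeDelta to frameDelta, clamped to [0,1].
            weights[s] = den > 1e-8f ? Mathf.Clamp01(num / den) : 0f;
        }
        return weights;
    }
}
```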

Later all of the mesh cleanup was done in DCC tools, but the tools Lasse developed have great potential and other uses. If you want to know more about that, Lasse is currently writing another blog post where he describes all of his work in depth. The tools are also included in the Digital Human package we released recently.

By a lucky coincidence, not long before our deadline for the first part of the project, I met the people behind Wrap3D at a conference and they agreed to collaborate. It was hugely successful: they delivered the cleaned 4D for our initial test extremely fast and with excellent quality.

After seeing the final result, we were more confident than ever that this was a path worth exploring further. It was still far from perfect, but it did not feel uncanny. After the test was done and our pipeline proven, we decided to add many more closeup shots with facial performance in the second part of the project, relying completely on our partners at Infinite Realities and Russian3DScanner for the 4D processing. They also continued improving their tools and equipment, delivering even better results.


To achieve our final result by adding wrinkle maps, we needed a really good facial rig. We also planned to use it for facial performance that was further away from the camera.

FACS-based rigs are a mainstream approach to solving facial performance. They are inspired by FACS (the Facial Action Coding System), developed in 1978 by Paul Ekman, which is a common standard for systematically categorizing the physical expression of emotions, and it has proven useful to psychologists and animators alike. A FACS-based rig mixes hundreds of blendshapes of extreme poses, one for each AU (action unit), which corresponds roughly to a muscle of the face. Often adding some of these shapes together produces incorrect results, which are fixed with so-called corrective and intermediate blendshapes. The result is a very complex system, which is then usually controlled by capturing the performance of an actor with an HMC (head-mounted camera) and solving for which blendshapes need to be activated.
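
In Unity terms, a stripped-down version of that mixing could look like the sketch below. Action-unit weights drive their blendshapes directly, and the corrective shape is driven by the product of its two parent AUs, which is one common convention rather than a description of the Snappers rig; the blendshape names are hypothetical.

```csharp
using UnityEngine;

// Minimal sketch of FACS-style mixing on a SkinnedMeshRenderer.
// Unity blendshape weights run 0..100. Shape names are hypothetical.
public class FacsMixerSketch : MonoBehaviour
{
    public SkinnedMeshRenderer face;

    [Range(0f, 1f)] public float browRaiser;   // e.g. AU1
    [Range(0f, 1f)] public float lidTightener; // e.g. AU7

    void LateUpdate()
    {
        int au1 = face.sharedMesh.GetBlendShapeIndex("AU1_browRaiser");
        int au7 = face.sharedMesh.GetBlendShapeIndex("AU7_lidTightener");
        int fix = face.sharedMesh.GetBlendShapeIndex("corrective_AU1_AU7");

        if (au1 >= 0) face.SetBlendShapeWeight(au1, browRaiser * 100f);
        if (au7 >= 0) face.SetBlendShapeWeight(au7, lidTightener * 100f);

        // One common convention: the corrective fires in proportion to how
        // much both parent AUs are active at the same time.
        if (fix >= 0) face.SetBlendShapeWeight(fix, browRaiser * lidTightener * 100f);
    }
}
```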

The facial rig in Maya viewport

To animate the eyes, Christian Kardach developed a tool that used a computer vision approach to track the irises from a render in Maya.
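
Christian’s tool lived in Maya, but the core idea is simple enough to sketch: in a tightly cropped render of the eye, the iris and pupil form the darkest blob, so thresholding and taking the centroid gives a 2D iris position per frame that can be mapped to eye rotation. The sketch below (in C# for consistency, with a hypothetical threshold) shows that centroid step on a readable texture.

```csharp
using UnityEngine;

// Toy sketch of iris tracking: in a cropped eye render, the pupil/iris
// is the darkest region, so a threshold + centroid locates it per frame.
public static class IrisTrackerSketch
{
    // Returns the iris center in normalized [0,1] texture coordinates,
    // or null if no dark pixels were found. The threshold is hypothetical.
    public static Vector2? FindIrisCenter(Texture2D eyeCrop, float luminanceThreshold = 0.15f)
    {
        Color[] pixels = eyeCrop.GetPixels(); // texture must be readable
        int width = eyeCrop.width, height = eyeCrop.height;

        float sumX = 0f, sumY = 0f;
        int count = 0;

        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            Color c = pixels[y * width + x];
            float luminance = 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
            if (luminance < luminanceThreshold)
            {
                sumX += x; sumY += y; count++;
            }
        }

        if (count == 0) return null;
        return new Vector2(sumX / count / (width - 1), sumY / count / (height - 1));
    }
}
```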

Another issue with 4D worth mentioning is combining facial performance with body performance. The system for capturing high-quality facial performance is very big and has a narrow useful volume: the actor needs to perform sitting, with a very limited range of motion for the head. Later, when we shot the motion capture, I had to create convincing movements for the body that fit the facial capture as well as possible. It would have been best if there were a way to capture such high-fidelity 4D with a head-mounted device, but such technology is still not available.

Body

Preview screenshot of the full body of Gawain during the production

Paco: We used the body scan of the actor as a base for building the outfit for the character of Gawain in Marvelous Designer. We prepared a proxy version of the body that was easy to work with, especially when it was time to simulate the jacket with the many animations that Gawain had. It was only necessary to keep the main shapes that the jacket would interact with, like the bag on his hip and the shirt’s overall silhouette.

Body rig and animation

Krasimir: Gawain’s body rig is composed of a few layers on top of each other. The main tool for animation and motion capture cleanup was Motionbuilder. At the base of the rig there is a skeleton compatible with both Motionbuilder and Maya.

The Maya version of the rig had an additional deforming rig layer, which added twist and fan joints, a double knee setup and other details. The Snappers rig was referenced in the Maya scene which allowed for it to be safely iterated without affecting the main file.

For the first part of The Heretic we did the motion capture of the actor at our internal studio in Sofia. For the second part we used the help of a motion capture vendor, TakeOne.

Jacket

Paco: I used Marvelous Designer for pretty much all of Gawain’s outfit, except for the shoes. Everything except the jacket was built with the traditional pipeline: making the base for the high-poly mesh inside Marvelous, then polishing and texturing it as a low-poly asset that was skinned to the character.

We initially tried to simulate the jacket in real time with Caronte, but after many attempts it never felt quite right, and it wasn’t what the director had initially hoped for. I began making a few tests with simulating the jacket inside Marvelous, and at this point Vess had to make the tough decision of scrapping the work we had done with Caronte so far; it was obvious that the trade-off in quality was too big compared to the output we got directly from Marvelous Designer.

The final sewing pattern for the jacket in Marvelous Designer. This was the mesh with the final resolution that was used for the low-poly simulation. I exported it as a triangulated mesh to 3ds Max, where I made a custom UV layout, and textured it in Substance Painter after that.

At this point I had a textured model with the custom UVs and the original model in Marvelous, both triangulated and matching vertex for vertex. I used the original untextured version in Marvelous for the different simulations, then used a skin wrap in 3ds Max with the exported simulation to drive the textured custom mesh I had made earlier.
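
Because the two meshes matched vertex for vertex, the skin-wrap step effectively reduces to an index remap built once from the shared rest pose, followed by copying positions every frame. Here is a rough sketch of that reduction (not how 3ds Max implements skin wrap, just the special case that applies when the meshes match exactly):

```csharp
using UnityEngine;

// Sketch of what the skin-wrap step reduces to when the simulated mesh and
// the textured mesh match vertex for vertex: build a one-time index remap
// from the shared rest pose, then copy positions each frame.
public static class VertexTransferSketch
{
    // For each textured-mesh vertex, find the matching rest-pose vertex
    // on the simulation mesh. O(n^2) brute force; fine as an illustration.
    public static int[] BuildRemap(Vector3[] texturedRest, Vector3[] simRest)
    {
        var remap = new int[texturedRest.Length];
        for (int i = 0; i < texturedRest.Length; i++)
        {
            int best = 0; float bestDist = float.MaxValue;
            for (int j = 0; j < simRest.Length; j++)
            {
                float d = (texturedRest[i] - simRest[j]).sqrMagnitude;
                if (d < bestDist) { bestDist = d; best = j; }
            }
            remap[i] = best; // with matching meshes, bestDist should be ~0
        }
        return remap;
    }

    // Apply one simulated frame to the textured mesh through the remap.
    public static void ApplyFrame(Mesh texturedMesh, Vector3[] simFrame, int[] remap)
    {
        var vertices = new Vector3[remap.Length];
        for (int i = 0; i < remap.Length; i++)
            vertices[i] = simFrame[remap[i]];
        texturedMesh.vertices = vertices;
        texturedMesh.RecalculateNormals();
        texturedMesh.RecalculateBounds();
    }
}
```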

I ended up going with this approach because it seemed like a relatively safe and non-destructive workflow: I had relative freedom to make adjustments while keeping consistent results.

For the topology of the jacket I kept the triangulated topology from Marvelous, so that the jacket would deform in exactly the same way as it would there.

Preview of the simulation on the final textured model

The first iteration of the jacket removal that I did in Marvelous, using the simplified simulation proxy of Gawain. There were still a few small kinks to work out at this stage, but it worked as a proof of concept. We had some back and forth with Krasimir on how to go about it, and after he took off his own jacket a bunch of times, he came up with the animation that we ended up using for the simulation.

The very first iteration of the outfit that we had; a lot changed, especially for the jacket. This was the version that we tried to simulate with Caronte, so we left it a bit bulkier and with no prebaked wrinkles and deformations, as those should have come naturally from the simulation itself. Another thing that changed a lot was the collar of the jacket. What we ended up with was less bloated, and it worked a lot better in all of the shots.

For Veselin it was important that the character have a more open design for the shirt’s neckline, especially for the closeup shots, where unnecessary details in the way could take away from the actor’s performance. In the above shot is the first iteration of the shirt that I made based on Georgi’s designs. We tried a less typical design for the shirt’s neckline, but we failed to realize that it would turn out to be a bit of a problem in some shots, especially the ones with lower camera angles, so we had to make a quick adjustment to it after seeing it in context.

Other than the neckline, the design remained mostly the same, with some small tweaks to rebalance it around the new silhouette of the border.

It’s a similar story with the additional equipment on top of the shirt. After seeing more and more shots with it, Vess realized that less is more in this case as well, and the cleaner look helped a lot with some of the last shots of the film, where we have the character without the jacket.

For those shots at the end of the film it was crucial to have a clean design that was readable and helped drive the focus toward the face. Also, having a red shirt where the heart of the white golem would be at the very end is a visual that Vess was very keen on having from the very beginning.

Paco: The pants and the leg pouch went through some additional tweaks later on, but overall they remained pretty close to that initial setup.

The knee pads on the pants went through some revisions as well. The idea that Georgi had for those in the concept was for them to vaguely resemble a medieval knight’s armour, something that we wanted hinted at in some other elements as well, like the elbow pad on the left arm and the shoulder design of the jacket.

Initially we didn’t have fur on the collar of the jacket; that came later from one of the talks with the director. Vess suggested that having finer details for this part of the jacket would give us a lot of visual fidelity in the closeups where we see the face.

I decided to use XGen in Maya to scatter the cards for the fur, and did a quick grooming pass on them to add a bit of clumping and length variance before mapping them onto a texture atlas. The final thing that helped ground the fur a bit more in the shots, and counter some of the repetition of the texture (there were too many hair cards for them to have unique UVs), was adding vertex paint that acted as an AO and color offset.
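
Here is a rough, procedural approximation of that vertex-paint pass: darken toward the card roots as a cheap AO and add a low-frequency noise tint so cards sharing the same atlas region don’t read as identical. It assumes the cards are authored with their roots at UV v = 0, which is an assumption for this sketch rather than how our atlas was necessarily laid out.

```csharp
using UnityEngine;

// Sketch: bake an AO/color offset into fur-card vertex colors.
// Assumes card roots sit at uv.y == 0 (a convention for this sketch).
public static class FurVertexPaintSketch
{
    public static void Bake(Mesh furCards, Transform furTransform, float noiseScale = 4f)
    {
        Vector3[] vertices = furCards.vertices;
        Vector2[] uvs = furCards.uv;
        var colors = new Color[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            // Cheap AO: roots (v = 0) are darkest, tips (v = 1) fully lit.
            float ao = Mathf.Lerp(0.4f, 1f, uvs[i].y);

            // Low-frequency world-space noise varies the tint across the
            // collar, breaking up the repetition of the shared atlas.
            Vector3 worldPos = furTransform.TransformPoint(vertices[i]);
            float tint = Mathf.Lerp(0.85f, 1.15f,
                Mathf.PerlinNoise(worldPos.x * noiseScale, worldPos.z * noiseScale));

            // The fur shader would multiply its albedo by this color.
            colors[i] = new Color(ao * tint, ao * tint, ao * tint, 1f);
        }

        furCards.colors = colors;
    }
}
```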

After that we used Lasse’s attachment tool to attach the fur to the collar, the same as for the facial hair and eyelashes.
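
The attachment tool itself ships with the Digital Human package and will be covered in Lasse’s post. Conceptually, though, an attachment of this kind binds each item to a triangle of the deforming mesh at setup time, as barycentric coordinates plus a normal offset, and re-evaluates that binding every frame. A stripped-down sketch of the runtime side of that idea (not the package implementation):

```csharp
using UnityEngine;

// Conceptual sketch of a skin attachment: bind a transform to one triangle
// of a skinned mesh (barycentric coords + normal offset), follow it each frame.
public class SkinAttachmentSketch : MonoBehaviour
{
    public SkinnedMeshRenderer target;
    public int triangleIndex;          // which triangle we are bound to
    public Vector3 barycentric = new Vector3(1f / 3f, 1f / 3f, 1f / 3f);
    public float normalOffset = 0.001f;

    Mesh baked;

    void LateUpdate()
    {
        if (baked == null) baked = new Mesh();
        target.BakeMesh(baked); // current deformed pose of the skinned mesh

        Vector3[] vertices = baked.vertices;
        int[] triangles = baked.triangles;

        int t = triangleIndex * 3;
        Vector3 a = vertices[triangles[t]];
        Vector3 b = vertices[triangles[t + 1]];
        Vector3 c = vertices[triangles[t + 2]];

        // Reconstruct the bound point from the deformed triangle.
        Vector3 p = a * barycentric.x + b * barycentric.y + c * barycentric.z;
        Vector3 n = Vector3.Cross(b - a, c - a).normalized;

        // Ignoring BakeMesh scale subtleties for this sketch.
        transform.position = target.transform.TransformPoint(p + n * normalOffset);
    }
}
```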

Preview of the gauntlet without the wires that we have in the final version

I made a textured model for the gauntlet device on Gawain’s left arm. The lower part along the knuckles is intended to be seen directly, while the upper part serves as a foundation to be covered by the same type of animated wires that tech lead Robert Cupisz was creating for the Boston character.

The idea that Vess had for this device was that it gives Gawain tactile feedback without him having to look at it, something that we see in one of the shots at the beginning of the film. It’s how Boston communicates with Gawain.

An example of the gauntlet with the wires from the finished film in Unity.

Screengrab from Substance Painter, where the character was textured.

The model was broken into different texture sets at 2K resolution; some of the main objects were exported at 4K, while the rest remained at 2K for the final export. All of the textures were made using the generators and tools in Substance Painter.

The briefcase

Paco: Gawain’s briefcase began with a concept blockout from Georgi Simeonov, and from there I took over, refining the model and texturing it in Substance Painter.

Screenshot of the textured briefcase - Concept by Georgi Simeonov

One of the more interesting features of the briefcase was the self-retracting strap that is seen when we first meet Gawain. There is also a small fan that suggests the cooling functions of the case, to go along with the temperature display on the side.

An interesting fact about the temperature display: I actually made it red in the beginning, as it was in the original design by Georgi, and we had it like that for almost the entire production. Then one day, while Vess was working on one of the very last shots, where Gawain drops the briefcase, he realized that it was a bit too reminiscent of a bomb about to be detonated. This was a nice catch, as that was definitely not the intention of the design. Also, a lot of the design was based on cooling, with things like vents and fans, so if anything it should have been cold. He suggested a colder, cyan-colored display and a temperature that is more extreme and interesting at the same time: close to absolute zero, but not quite.

For the belt strap of the briefcase I initially made a regular opaque rubber that had a slight reddish tint to it, but when Vess began working on the finals and the lighting, he experimented with the transparency feature of the Lit shader, and we were pleasantly surprised by how good it looked, so we ended up using this transparent silicone type of material. Vess also made a texture that controlled the roughness of the transparency and the tint of the material, so that it would be properly grounded and weathered; otherwise it would have looked too artificial and clean.

The coin

For the coin that Gawain uses to open the portal, Vess needed something based on a realistic medieval design, something that would hint at the deeper lore behind the character’s past adventures and would make sense as an artefact.

Screenshot of Gawain about to open the portal in the finished film.

Learn more

Working on The Heretic was a great learning experience; we tackled many challenges during the production that we had never faced before, and hopefully we will do even better the next time around. We would like to thank all of the people who were involved with this production; it was a great experience working with all of you.

We really hope that people will find some of the information here helpful for their own work. See our page for The Heretic for additional blog posts and webinars.

If you have additional questions about the Digital Human Character package, sign up for our next live Unite Now session "Meet the Devs: Deep Dive into The Heretic assets" on June 17 at 9 am PDT.
