
The power of photogrammetry: Simulating the real world in VR

August 1, 2019 in Industry | 6 min. read

Get a behind-the-scenes look at a Unity project from Varjo, whose team used photogrammetry and dynamic lighting to create a realistic and lifelike environment in virtual reality (VR).

The applications of photogrammetry – the process of using multiple photos of real-world objects or spaces to author digital assets – run the gamut. Photogrammetry has gained traction not only in the gaming world but also in the industrial market.

For instance, point clouds generated by photogrammetry have become integral to architecture, engineering, and construction (AEC) workflows. And across automotive, transportation, and manufacturing, capturing a physical prototype via photogrammetry and comparing it to its digital CAD model ensures vision matches reality.

To better simulate real-world environments and showcase the potential of photogrammetry for professional use, the Varjo team recently completed a photogrammetric scan of the largest cemetery in Japan and showed it as a digital twin in VR. We invited them to share in their own words how they tackled this ambitious project. 

Making of the Koyasan Okunoin Cemetery scene

With the Varjo VR-1 headset, exploring the finest details of buildings, construction sites, or other spaces is possible for the first time in human-eye resolution VR. This 20/20 resolution expands the use cases of photogrammetry in VR for industrial use.

To illustrate the potential of dynamic, human-eye resolution VR photogrammetry, we at Varjo created a dynamic demo of one of Japan’s holiest places, the Okunoin Cemetery at Mount Koya. In this article, we explain how it was done.

Capturing the photogrammetry location

This section was written by Jani Ylinen, 3D Photogrammetry Specialist at Varjo.

Photogrammetry starts with choosing the proper capture location or target object. Not all places or objects are suitable for photogrammetry capture. We chose to capture an old cemetery at Mount Koya in Japan because we wanted to do something culturally significant, in addition to having lots of details to explore in the demo. Since this was an outdoor capture, the conditions were very challenging to control. But here at Varjo, we like challenges.

The key challenges in this capture were:

  1. Movement. The Okunoin Cemetery at Koyasan is big and ancient. Surprisingly many tourists visit it every day, and a camera on a tripod is a real people magnet. When doing photogrammetry, the scene you are capturing should be completely still and static, with nothing moving around. This is problematic for anything large: even if the object itself is not moving, the light source, the sun, is. If the shoot takes a few hours, the shadows can change a lot.
  2. Weather. When you do an outdoor capture, the weather should be overcast. Of course, it cannot rain during the capture, nor right before it: wet surfaces look different from dry ones, and the scene should look the same throughout the shoot.
  3. Ground. The cemetery floor in the chosen location was very difficult to capture, as it was covered with short pine branches and twigs that shifted as we walked around.

When taking the photos of a photogrammetry scene, a general rule is that each picture should overlap its neighbors by at least 30%. The main goal is to photograph the target from as many angles as possible while keeping the images overlapping.
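As a rough planning aid (not something from the shoot itself), the spacing that keeps two neighboring shots overlapping can be estimated from the camera's field of view. A minimal Python sketch, with purely illustrative camera numbers:

```python
import math

def max_step(distance_m, hfov_deg, min_overlap=0.30):
    """Largest lateral step between shots that still keeps the overlap."""
    # Width of the area covered by one frame at the given distance.
    footprint = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return footprint * (1.0 - min_overlap)

# Illustrative numbers, not the ones used on site: a 54-degree lens
# shot from 3 m allows roughly 2.1 m sideways between frames.
print(f"{max_step(3.0, 54.0):.2f} m")
```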

The area captured in Koyasan was scanned much as one would scan a room. For this scene, about 2,500 photographs were taken.

Building the dynamic 3D scene with Unity

This section was written by Juhani Karlsson, Senior 3D Artist at Varjo and a former Visual Effects Artist at Unity.  

Photogrammetry delivers realistic immersion, but its typically static lighting narrows the realistic use cases. We wanted to use dynamic lighting to simulate a realistic environment. Unity provides a great platform for constructing and rendering highly detailed scenes, which made it easy to automate the workflow.

We also used the excellent De-Lighting tool and the Unity Asset Store to help us fill the gaps when needed. Some trees and stones from Unity’s fantastic Book Of The Dead assets were also used. 

While shooting the site, file transfers were made constantly so we could save time in the 3D reconstruction. First, we used software called Reality Capture to create a 3D scene from the photographs.

Mesh processing and UVs 

The 3D scene was exported from Reality Capture as a single 10-million-polygon mesh with a set of 98 8K textures.
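For a sense of scale (assuming uncompressed 8-bit RGBA, which is our assumption rather than a detail from the project), a quick back-of-the-envelope check shows why that raw export could not go straight into a real-time engine:

```python
# 98 textures at 8K (8192 x 8192), assuming uncompressed 8-bit RGBA.
texels = 98 * 8192 * 8192
print(f"{texels / 1e9:.1f} G texels, ~{texels * 4 / 2**30:.0f} GiB raw RGBA")
```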

In Houdini, the mesh was run through a Voronoi Fracture, which split it into smaller, more manageable pieces. Different levels of detail (LODs) were then generated with shared UVs. This was done to avoid texture popping between LOD levels.

That way, the textures were small enough for Unity to chew through, and we could get Umbra occlusion culling working. Generating UVs was also lighter when the pieces were smaller.
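The Houdini side of that pipeline could look roughly like the Python sketch below. The node and parameter names are assumptions based on stock Houdini SOPs (Scatter, Voronoi Fracture, PolyReduce); this is not Varjo's actual pipeline script.

```python
import hou

# Hypothetical node network; node and parm names may differ by version.
geo = hou.node("/obj").createNode("geo", "scan_pieces")

mesh = geo.createNode("file", "scan_mesh")
mesh.parm("file").set("$HIP/export/koyasan_scan.bgeo")  # placeholder path

# Scatter the seed points that define the Voronoi cells.
seeds = geo.createNode("scatter", "cell_seeds")
seeds.setInput(0, mesh)
seeds.parm("npts").set(200)  # ~200 pieces; tune for piece size

# Voronoi Fracture splits the mesh into manageable chunks.
fracture = geo.createNode("voronoifracture", "split_scan")
fracture.setInput(0, mesh)
fracture.setInput(1, seeds)

# A reduced LOD built from the same pieces, so UVs can stay shared
# between LOD levels and textures don't pop.
lod1 = geo.createNode("polyreduce::2.0", "lod1")
lod1.setInput(0, fracture)
lod1.parm("percentage").set(50)  # keep 50% of the polygons
```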

A shader was created to bake out the different textures. Unity's De-Lighting tool requires at least albedo, ambient occlusion, normal, bent normal, and position maps. Most of these buffers are straightforward to bake out of the box, but bent normals are less obvious. Luckily, bent normals are simply the average direction of the occlusion rays that miss, and there is a simple VEX function called occlusion() that can output them.
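To make the bent-normal idea concrete, here is an illustrative numpy sketch (not VEX, and not the baking shader itself) that computes a bent normal as the average unoccluded direction over a hemisphere of sample rays:

```python
import numpy as np

def bent_normal(normal, is_occluded, samples=256, rng=None):
    """Average unoccluded direction over a hemisphere of sample rays."""
    rng = rng or np.random.default_rng(0)
    total = np.zeros(3)
    for _ in range(samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0.0:   # flip into the surface's hemisphere
            d = -d
        if not is_occluded(d):        # only rays that "miss" contribute
            total += d
    n = np.linalg.norm(total)
    return total / n if n > 0.0 else np.asarray(normal)

# Toy occluder: a wall blocking every direction with x < 0.
bn = bent_normal(np.array([0.0, 0.0, 1.0]), lambda d: d[0] < 0.0)
print(bn)  # leans toward +x, away from the wall, while staying upward
```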

De-Lighting

We created a Python script to automatically run the textures through the batch script provided by the Unity De-Lighting tool.
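That batching approach could look something like the sketch below. The De-Lighting tool's batch entry point and its arguments are not spelled out in this article, so the script path and folder layout here are placeholders.

```python
import subprocess
from pathlib import Path

# Placeholder locations: the De-Lighting tool's real batch script
# and arguments are not documented in this article.
BATCH_SCRIPT = Path("DeLightingTool/delight_batch.cmd")
BAKE_DIR = Path("bakes")

# One texture set (albedo, AO, normal, bent normal, position) per piece.
for piece in sorted(BAKE_DIR.glob("piece_*")):
    subprocess.run([str(BATCH_SCRIPT), str(piece)], check=True)
    print(f"de-lighted {piece.name}")
```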

If the scan has a lot of color variation, the De-Lighting tool has trouble estimating the environment probe. Therefore, we took a mixed approach, blending automatic De-Lighting with traditional image-based shadow removal.
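As a sketch of that mixing step (the file names and blend weight are assumptions), the two results can simply be blended linearly per texture:

```python
import numpy as np
from PIL import Image

# Hypothetical file names for one texture tile.
auto = np.asarray(Image.open("albedo_delighted.png"), dtype=np.float32)
manual = np.asarray(Image.open("albedo_shadow_removed.png"), dtype=np.float32)

# Per-scan blend weight: raise it where the automatic result holds up,
# lower it where the environment-probe estimate goes wrong.
w = 0.6
mixed = w * auto + (1.0 - w) * manual
Image.fromarray(np.clip(mixed, 0, 255).astype(np.uint8)).save("albedo_mixed.png")
```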

A Unity asset post-processing script was created to import the processed models, handling material creation and texture assignment. In total, 128 4K textures were processed, baked, and de-lighted.

Before and after De-Lighting

Varjo VR-1 and Unity – Easy integration

Once the scene was imported, it was just a matter of dragging the VarjoUser prefab into the scene. Instantly, the scene was viewable with the VR-1, and we could start tweaking it to match our needs.

The Enviro asset from the Unity Asset Store was used for the day-night cycle, and real-time global illumination was baked into the scene. The generated mesh UVs were reused for the global illumination to avoid long precompute times. The settings were chosen so that the lightmapper would do minimal work on the UVs; this can be done by enabling UV optimization on the meshes and adjusting the settings.

---

Our thanks to Varjo for sharing this guest post with our community; learn more about photogrammetry in Unity. Varjo will be exhibiting and presenting at Unite Copenhagen.

 

Register for Unite Copenhagen today 
