AR Foundation support for ARKit 4 Depth

June 24, 2020 in Engine & platform | 2 min. read

The new Apple iPad Pro comes equipped with a LiDAR scanner that provides enhanced scene understanding and real-world depth information to bring a new level of realism to augmented reality (AR) experiences. We have expanded AR Foundation to support the new features in ARKit 3.5 and ARKit 4 that expose this depth data and scene geometry.

AR Foundation now includes the following new features:

  • Automatic environment occlusion
  • Depth images
  • Scene reconstruction

Automatic environment occlusion

The iPad Pro running ARKit 4 produces a depth image for each frame. Each pixel in the depth image specifies the scanned distance between the device and a real-world object.

AR Foundation 4.1 includes an AR Occlusion Manager that incorporates this depth information when rendering the background. When the camera background is rendered, the background renderer updates the depth buffer based on the scanned depth image. As the virtual scene is rendered, virtual content that is closer to the camera than real-world content occludes the real world. Likewise, virtual content that sits behind real-world objects is not rendered to those pixels; the physical objects hide the virtual content.
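
As a rough sketch, enabling occlusion can be as simple as adding an AR Occlusion Manager to the AR Camera and requesting an environment depth mode. The component and property names below follow AR Foundation 4.1; the specific depth mode chosen here is just one option, trading depth-image quality against performance.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Minimal sketch: attach to the AR Camera (alongside ARCameraManager and
// ARCameraBackground) to request depth-based environment occlusion on
// devices that support it, such as the LiDAR-equipped iPad Pro.
[RequireComponent(typeof(AROcclusionManager))]
public class EnableEnvironmentOcclusion : MonoBehaviour
{
    void Start()
    {
        var occlusionManager = GetComponent<AROcclusionManager>();

        // Fastest, Medium, and Best request progressively higher-quality
        // depth images; ARKit ignores the request on unsupported devices.
        occlusionManager.requestedEnvironmentDepthMode = EnvironmentDepthMode.Fastest;
    }
}
```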

This video demonstrates how automatic environment occlusion presents an improved visual integration of virtual and real-world content.


Depth images

Developers can also obtain raw data for additional CPU-based processing. AR Foundation 4.1 provides direct access to the pixel data comprising the depth image, which can be used for custom application behavior or input into computer vision algorithms.
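
The sketch below shows one way to read that pixel data on the CPU, using AROcclusionManager.TryAcquireEnvironmentDepthCpuImage from AR Foundation 4.1. The center-pixel sampling is purely illustrative, and the code assumes the single-plane, 32-bit float depth format that LiDAR devices produce.

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Minimal sketch: read the latest environment depth image on the CPU.
// Assumes an AROcclusionManager with environment depth enabled.
public class DepthImageReader : MonoBehaviour
{
    [SerializeField] AROcclusionManager occlusionManager;

    void Update()
    {
        // XRCpuImage wraps a native resource and must be disposed.
        if (!occlusionManager.TryAcquireEnvironmentDepthCpuImage(out XRCpuImage image))
            return;

        using (image)
        {
            // On LiDAR devices the single plane holds one 32-bit float per
            // pixel: the distance from the device in meters.
            if (image.format != XRCpuImage.Format.DepthFloat32)
                return;

            NativeArray<float> depths = image.GetPlane(0).data.Reinterpret<float>(1);

            // Illustrative: sample the depth at the image center
            // (assumes tightly packed rows).
            float centerDepth = depths[(image.height / 2) * image.width + image.width / 2];
            Debug.Log($"Depth at image center: {centerDepth:F2} m");
        }
    }
}
```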

Side-by-side comparison of the color image (left) and the depth image (right)

Scene reconstruction

Using the LiDAR sensor in Apple’s new iPad Pros, ARKit scene reconstruction scans the environment to create mesh geometry representing the real-world environment. Additionally, ARKit provides an optional classification of each triangle in the scanned mesh. The per-triangle classification identifies the type of surface corresponding to the triangle’s location in the real world.

Introduced with ARKit 3.5 and AR Foundation 4.0, scene reconstruction operates through the ARMeshManager. As the environment is scanned, the ARMeshManager constructs mesh geometry in the virtual scene. This mesh geometry can be used in several ways, including providing collision geometry for physics.
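
As a sketch of how this fits together in AR Foundation 4.x: an ARMeshManager placed under the ARSessionOrigin generates the meshes, and its meshPrefab (assigned in the Inspector) determines what each mesh chunk carries; include a MeshCollider in that prefab and the scanned environment becomes collision geometry. The script below simply observes the meshesChanged event; the logging is illustrative.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch: observe scene-reconstruction meshes as ARKit builds
// them. The ARMeshManager must sit under the ARSessionOrigin, and its
// meshPrefab should contain a MeshFilter (plus a MeshCollider if the
// scanned geometry should participate in physics).
[RequireComponent(typeof(ARMeshManager))]
public class MeshUpdateLogger : MonoBehaviour
{
    ARMeshManager meshManager;

    void OnEnable()
    {
        meshManager = GetComponent<ARMeshManager>();
        meshManager.meshesChanged += OnMeshesChanged;
    }

    void OnDisable() => meshManager.meshesChanged -= OnMeshesChanged;

    void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        // Each MeshFilter is one chunk of the reconstructed environment.
        Debug.Log($"Meshes added: {args.added.Count}, " +
                  $"updated: {args.updated.Count}, removed: {args.removed.Count}");
    }
}
```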

This video demonstrates both the mesh classification feature (the different colors represent different classified surface types) and the mesh used as collision geometry for physical interaction with virtual content.
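
Mesh classification is ARKit-specific, so it is exposed through extension methods in the ARKit XR Plugin rather than the cross-platform API. Here is a minimal sketch, assuming the SetClassificationEnabled extension on the underlying XRMeshSubsystem available in ARKit XR Plugin 4.x:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit; // ARKit-specific mesh extensions

// Minimal sketch: turn on ARKit's per-triangle mesh classification.
[RequireComponent(typeof(ARMeshManager))]
public class EnableMeshClassification : MonoBehaviour
{
    void Start()
    {
        var meshManager = GetComponent<ARMeshManager>();

        // Classification only works on LiDAR-equipped iOS devices;
        // guard this call for other platforms in real projects.
        meshManager.subsystem?.SetClassificationEnabled(true);
    }
}
```

Per-triangle results can then be queried with the plugin's GetFaceClassifications extension for a given mesh, as demonstrated in the AR Foundation Samples repository.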


Try all of these features in AR Foundation

The 4.1 versions of the AR Foundation and ARKit XR Plugin packages contain everything you need to get started and are compatible with Unity 2019 LTS and later. A sample demonstrating how to set up automatic occlusion is located in AR Foundation Samples on GitHub.

As always, feel free to reach out to us on the Unity Handheld AR forums if you have any questions.

We’re thrilled to bring you these latest features of ARKit via AR Foundation, and we can’t wait to see what you make next!

Learn how to get started.
