AR Foundation support for ARKit 3

June 6, 2019 in Engine & platform | 4 min. read

Unity has been working closely with Apple throughout the development of ARKit 3, and we are excited to bring these new features to Unity developers. Now, we’ll take a deeper dive into the latest ARKit 3 functionality and share how to access it using AR Foundation 2.2 and Unity 2019.1 and later. Users of Unity 2018.4 can access the new features of ARKit 3 using AR Foundation 1.5.

With ARKit 3 and AR Foundation 2.2, we introduce several new features, including:

  • Motion capture
  • People occlusion
  • Face tracking enhancements, including support for multiple faces
  • Collaborative session
  • Other improvements

The first set of features we discuss makes interactions between rendered content and real-world people more realistic.

Motion capture

Key features of ARKit 3 focus on enhancing AR experiences by identifying people in the world. An exciting new feature of ARKit 3 is motion capture, which provides AR Foundation apps with a 2D (screen-space) or 3D (world-space) representation of humans recognized in the camera frame.

For 2D detection, humans are represented by a hierarchy of seventeen joints with screen-space coordinates. For 3D detection, humans are represented by a hierarchy of ninety-three joints with world-space transforms.

To express this entirely new functionality, AR Foundation adds the new Human Body Subsystem.
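
To get a feel for the API, here is a minimal sketch of reading the 2D joints through ARHumanBodyManager, the high-level wrapper AR Foundation provides around this subsystem. It follows the usage in the arfoundation-samples repository; exact member names may vary between the 2.2 preview packages and later releases.

using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Minimal sketch: read the seventeen-joint 2D hierarchy each frame.
// Attach to the GameObject that carries the ARHumanBodyManager.
[RequireComponent(typeof(ARHumanBodyManager))]
public class ScreenSpaceJointsExample : MonoBehaviour
{
    ARHumanBodyManager m_Manager;

    void Awake() => m_Manager = GetComponent<ARHumanBodyManager>();

    void Update()
    {
        // One entry per joint; returns a default array when no person is visible.
        NativeArray<XRHumanBodyPose2DJoint> joints =
            m_Manager.GetHumanBodyPose2DJoints(Allocator.Temp);

        if (!joints.IsCreated)
            return;

        foreach (var joint in joints)
        {
            if (joint.tracked)
                Debug.Log($"Joint {joint.index} at {joint.position} (screen space)");
        }

        joints.Dispose();
    }
}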

This feature is only available on newer iOS devices with the A12 Bionic chip and the Apple Neural Engine (ANE). AR Foundation apps can query the Human Body Subsystem descriptor at runtime to determine whether the iOS device supports human pose estimation.
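
Below is a minimal sketch of that runtime check, combined with a subscription to 3D body updates. The descriptor properties and the humanBodiesChanged event follow the published AR Foundation packages; treat the exact names as version-dependent.

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch: bail out if the device cannot estimate 3D poses,
// otherwise log each tracked body's joint count as it updates.
[RequireComponent(typeof(ARHumanBodyManager))]
public class BodyTrackingExample : MonoBehaviour
{
    ARHumanBodyManager m_Manager;

    void OnEnable()
    {
        m_Manager = GetComponent<ARHumanBodyManager>();

        // The descriptor advertises this device's capabilities;
        // 3D pose estimation requires the A12 Bionic chip and ANE.
        var descriptor = m_Manager.descriptor;
        if (descriptor == null || !descriptor.supportsHumanBody3D)
        {
            enabled = false;
            return;
        }

        m_Manager.humanBodiesChanged += OnHumanBodiesChanged;
    }

    void OnDisable() => m_Manager.humanBodiesChanged -= OnHumanBodiesChanged;

    void OnHumanBodiesChanged(ARHumanBodiesChangedEventArgs args)
    {
        foreach (var body in args.updated)
        {
            // Each ARHumanBody exposes the ninety-three-joint hierarchy;
            // joint poses are expressed relative to the body anchor.
            Debug.Log($"Body {body.trackableId}: {body.joints.Length} joints");
        }
    }
}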

People occlusion

In addition to motion capture, the new AR Foundation Human Body Subsystem provides apps with human stencil and depth segmentation images. The stencil segmentation image identifies, for each pixel, whether that pixel contains a person. The depth segmentation image provides, for each pixel that corresponds to a recognized person, an estimated distance from the device. Used together, these segmentation images allow rendered 3D content to be realistically occluded by real-world people.

The stencil image by itself can be used to create visual effects such as outlines or tinting of people in the frame.
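
As a rough sketch of how an app might consume these images, the following pulls the stencil and depth textures from ARHumanBodyManager (where AR Foundation 2.2 exposes them; later versions moved them to an occlusion manager) and binds them to a custom material. The shader property names are placeholders for your own occlusion or tinting shader.

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch: hand the human segmentation images to a material each
// frame. "_HumanStencil" and "_HumanDepth" are placeholder property names
// for your own occlusion or tinting shader.
public class SegmentationTextureExample : MonoBehaviour
{
    [SerializeField] ARHumanBodyManager m_HumanBodyManager;
    [SerializeField] Material m_OcclusionMaterial;

    static readonly int k_StencilId = Shader.PropertyToID("_HumanStencil");
    static readonly int k_DepthId = Shader.PropertyToID("_HumanDepth");

    void Update()
    {
        // Per-pixel "is this a person?" mask.
        Texture2D stencil = m_HumanBodyManager.humanStencilTexture;
        // Per-pixel estimated distance (in meters) for person pixels.
        Texture2D depth = m_HumanBodyManager.humanDepthTexture;

        if (stencil == null || depth == null)
            return; // not supported, or no frame yet

        m_OcclusionMaterial.SetTexture(k_StencilId, stencil);
        m_OcclusionMaterial.SetTexture(k_DepthId, depth);
    }
}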

Please note that the people occlusion features are available only on iOS devices with the A12 Bionic chip and ANE.

Face tracking enhancements

ARKit 3 has expanded its support for face tracking on iPhone XS, iPhone XR, iPhone XS Max, and the latest iPad Pro models in a couple of significant ways.

First, the front-facing TrueDepth camera now recognizes up to three distinct faces during a face tracking session. You may specify the maximum number of faces to track simultaneously through the AR Foundation Face Subsystem.
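
A minimal sketch of requesting multi-face tracking might look like the following; note that maximumFaceCount is the AR Foundation 2.x property name (later versions renamed it requestedMaximumFaceCount), and the platform clamps the request to what the device supports.

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch: ask for up to three simultaneously tracked faces and
// log each newly detected face.
[RequireComponent(typeof(ARFaceManager))]
public class MultiFaceExample : MonoBehaviour
{
    void Start()
    {
        var faceManager = GetComponent<ARFaceManager>();

        // ARKit 3 supports up to three faces; the platform clamps this
        // value to whatever the device can actually deliver.
        faceManager.maximumFaceCount = 3;

        faceManager.facesChanged += args =>
        {
            foreach (var face in args.added)
                Debug.Log($"Now tracking face {face.trackableId}");
        };
    }
}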

Additionally, the most significant change related to face tracking is the ability to use the TrueDepth camera for face tracking during a session configured for world tracking. This enables experiences such as capturing the user's face pose with the front-facing camera and using it to drive the facial expressions of a character rendered in the environment seen through the rear-facing camera. Please note that this new face tracking mode is available only on iOS devices with the A12 Bionic chip and ANE.

Collaborative session

In ARKit 2, ARWorldMap was introduced as a means of sharing a snapshot of the environment with other users. ARKit 3 takes that a step further with collaborative sessions, which allow multiple connected ARKit apps to continuously exchange their understanding of the environment. In AR Foundation, devices can share AR Reference Points in real time. The ARKit implementation of the Session Subsystem exposes the APIs to issue and consume these updates.

AR Foundation apps must implement their preferred networking technology to communicate the updates to each connected client. Check out the Unity Asset Store for various networking solutions for connected gaming.
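
Modeled on the CollaborativeSession example in arfoundation-samples, the sketch below shows the ARKit-specific flow: drain outgoing collaboration data each frame and apply incoming data from peers. SendToPeers and OnDataReceivedFromPeer are placeholders for whatever networking layer your app uses, and the ARKitSessionSubsystem type comes from the iOS-only ARKit XR Plugin package.

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit; // iOS-only ARKit XR Plugin package

// Minimal sketch: publish local collaboration data and apply updates
// received from peers. SendToPeers / OnDataReceivedFromPeer stand in
// for the app's own networking layer.
public class CollaborationExample : MonoBehaviour
{
    [SerializeField] ARSession m_Session;

    void Start()
    {
        if (m_Session.subsystem is ARKitSessionSubsystem subsystem)
            subsystem.collaborationRequested = true;
    }

    void Update()
    {
        if (!(m_Session.subsystem is ARKitSessionSubsystem subsystem))
            return;

        // Drain whatever updates ARKit wants to share this frame.
        while (subsystem.collaborationDataCount > 0)
        {
            using (var data = subsystem.DequeueCollaborationData())
            {
                SendToPeers(data.ToSerialized().bytes.ToArray());
            }
        }
    }

    // Invoked by the networking layer when a peer's update arrives.
    void OnDataReceivedFromPeer(byte[] bytes)
    {
        if (!(m_Session.subsystem is ARKitSessionSubsystem subsystem))
            return;

        using (var data = new ARCollaborationData(bytes))
        {
            if (data.valid)
                subsystem.UpdateWithCollaborationData(data);
        }
    }

    void SendToPeers(byte[] bytes)
    {
        // App-specific transport (see the Unity Asset Store for options).
    }
}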

Other improvements

ARKit 3 brings additional improvements to existing systems.

Both image tracking and object detection features include significant accuracy and performance improvements. With ARKit 3, devices detect up to 100 images at a time. The AR Foundation framework automatically enables these improvements.

Additionally, object detection is far more robust, able to more reliably identify objects in complex environments. Finally, environment probes tracked by ARKit now produce HDR cubemaps for each probe's environment texture. HDR environment textures may be disabled through the AR Foundation Environment Probe Subsystem.
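
For example, a minimal sketch of opting out of HDR environment textures might look like this; the environmentTextureHDR property name matches AR Foundation 2.x, so treat it as an assumption on other package versions.

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch: request LDR environment textures instead of HDR.
// environmentTextureHDR is the AR Foundation 2.x property name.
[RequireComponent(typeof(AREnvironmentProbeManager))]
public class ProbeHdrExample : MonoBehaviour
{
    void Start()
    {
        var probeManager = GetComponent<AREnvironmentProbeManager>();
        probeManager.environmentTextureHDR = false;
    }
}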

Try all of these features in AR Foundation

As always, feel free to reach out to us on the Unity Handheld AR Forums if you have any questions.

Package documentation:

  • AR Foundation
  • ARKit XR Plugin
  • ARKit Face Tracking

We are very excited to bring you these latest ARKit features via AR Foundation, and we'll continue adding samples demonstrating them to the arfoundation-samples repository on GitHub.

We can't wait to see what you can make with it!
