
Unity has been working closely with Apple throughout the development of ARKit 3, and we are excited to bring these new features to Unity developers. Now, we’ll take a deeper dive into the latest ARKit 3 functionality and share how to access it using AR Foundation 2.2 and Unity 2019.1 and later. Users of Unity 2018.4 can access the new features of ARKit 3 using AR Foundation 1.5.

With ARKit 3 and AR Foundation 2.2, we introduce several new features including:

  • Motion capture
  • People occlusion
  • Face tracking enhancements, including multiple faces
  • Collaborative session
  • Other improvements

The first set of features we discuss makes interactions between rendered content and people more realistic.

Motion capture

Key features of ARKit 3 focus on enhancing AR experiences by identifying people in the world. An exciting new feature of ARKit 3 is motion capture which provides AR Foundation apps with 2D (screen-space) or 3D (world-space) representation of humans recognized in the camera frame.

For 2D detection, humans are represented by a hierarchy of seventeen joints with screen-space coordinates. For 3D detection, humans are represented by a hierarchy of ninety-three joints with world-space transforms.

To express this entirely new functionality, AR Foundation adds the new Human Body Subsystem.

This feature is available only on newer iOS devices with the A12 Bionic chip and the Apple Neural Engine (ANE). AR Foundation apps can query the Human Body Subsystem descriptor at runtime to determine whether the iOS device supports human pose estimation.
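As a sketch of how an app might consume body-tracking updates, the script below subscribes to the `ARHumanBodyManager` and iterates the joints of each tracked body. The member names reflect our reading of the AR Foundation 2.2 API; verify them against the package documentation for your version.

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: reacting to human body updates from the Human Body Subsystem.
// Assumes an ARHumanBodyManager component exists in the scene.
public class BodyTrackingExample : MonoBehaviour
{
    [SerializeField] ARHumanBodyManager m_HumanBodyManager;

    void OnEnable()  => m_HumanBodyManager.humanBodiesChanged += OnHumanBodiesChanged;
    void OnDisable() => m_HumanBodyManager.humanBodiesChanged -= OnHumanBodiesChanged;

    void OnHumanBodiesChanged(ARHumanBodiesChangedEventArgs eventArgs)
    {
        foreach (ARHumanBody body in eventArgs.updated)
        {
            // Each tracked body exposes a flat array of joints; for 3D
            // detection these carry poses relative to the body anchor.
            NativeArray<XRHumanBodyJoint> joints = body.joints;
            Debug.Log($"Body {body.trackableId} has {joints.Length} joints");
        }
    }
}
```

A typical app would map these joint poses onto the transforms of a rigged character each frame.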

People Occlusion

In addition to motion capture, the new AR Foundation Human Body Subsystem provides apps with human stencil and depth segmentation images. The stencil segmentation image identifies, for each pixel, whether the pixel contains a person. The depth segmentation image consists of an estimated distance from the device for each pixel that correlates to a recognized human. Using these segmentation images together allows for rendered 3D content to be realistically occluded by real-world humans.

The stencil image by itself can be used to create visual effects such as outlines or tinting of people in the frame.
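One way to use the segmentation images is to bind them to a material whose shader performs the occlusion or stenciling. The property names below (`humanStencilTexture`, `humanDepthTexture`) are our understanding of the AR Foundation 2.2 `ARHumanBodyManager` API; confirm them against the package reference.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: feeding the human segmentation images to a custom material.
public class SegmentationTextureExample : MonoBehaviour
{
    [SerializeField] ARHumanBodyManager m_HumanBodyManager;
    [SerializeField] Material m_OcclusionMaterial; // shader samples both images

    void Update()
    {
        Texture2D stencil = m_HumanBodyManager.humanStencilTexture;
        Texture2D depth   = m_HumanBodyManager.humanDepthTexture;
        if (stencil == null || depth == null)
            return; // segmentation unsupported on this device or not ready yet

        // A custom shader can reject fragments where the stencil marks a
        // person whose estimated depth is closer than the rendered content.
        m_OcclusionMaterial.SetTexture("_HumanStencil", stencil);
        m_OcclusionMaterial.SetTexture("_HumanDepth", depth);
    }
}
```

The `_HumanStencil` and `_HumanDepth` shader property names are illustrative; use whatever names your occlusion shader declares.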

Please note that the people occlusion features are available only on iOS devices with the A12 Bionic chip and ANE.

Face tracking enhancements

ARKit 3 has expanded its support for face tracking on iPhone XS, iPhone XR, iPhone XS Max and the latest iPad Pros in a couple of significant ways.

First, the front-facing TrueDepth camera now recognizes up to three distinct faces during a face tracking session. You may specify the maximum number of faces to track simultaneously through the AR Foundation Face Subsystem.
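Requesting multiple tracked faces can be as simple as setting a count on the face manager. The property name used here (`maximumFaceCount`) is an assumption based on the AR Foundation 2.2 `ARFaceManager` API; check the version you are using.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: asking the Face Subsystem to track several faces at once.
public class MultiFaceExample : MonoBehaviour
{
    [SerializeField] ARFaceManager m_FaceManager;

    void Start()
    {
        // ARKit 3 recognizes up to three distinct faces per session.
        m_FaceManager.maximumFaceCount = 3;
    }
}
```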

Additionally, the most significant change related to face tracking is the ability to enable the use of the TrueDepth camera for face tracking during a session configured for world tracking. This enables experiences such as capturing the user’s face pose from the front-facing camera and using it to drive the facial expressions of a character rendered in the environment seen through the rear-facing camera. Please note that this new face tracking mode is available only on iOS devices with the A12 Bionic chip and ANE.

Collaborative session

In ARKit 2, ARWorldMap was introduced as a means of sharing a snapshot of the environment with other users. ARKit 3 takes that a step further with collaborative session, allowing for multiple connected ARKit apps to continuously exchange their understanding of the environment. In AR Foundation, devices can share AR Reference Points in real time. The ARKit implementation of the Session Subsystem exposes the APIs to issue and consume these updates.

AR Foundation apps must implement their preferred networking technology to communicate the updates to each connected client. Check out the Unity Asset Store for various networking solutions for connected gaming.
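The exchange loop might look like the sketch below: drain outgoing collaboration data from the ARKit session subsystem, hand it to your networking layer, and apply any data received from peers. The `ARKitSessionSubsystem` API names reflect our understanding of the ARKit XR Plugin; `SendToPeers` is a hypothetical hook for whatever transport the app chooses.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit;

// Sketch: exchanging collaboration data between connected devices.
public class CollaborationExample : MonoBehaviour
{
    [SerializeField] ARSession m_Session;

    void Update()
    {
        var subsystem = m_Session.subsystem as ARKitSessionSubsystem;
        if (subsystem == null || !subsystem.collaborationEnabled)
            return;

        // Outgoing: ARKit queues environment updates destined for peers.
        while (subsystem.collaborationDataCount > 0)
        {
            using (var data = subsystem.DequeueCollaborationData())
            {
                SendToPeers(data);
            }
        }
    }

    // Incoming: apply data received from another device to the local session.
    void OnPeerDataReceived(ARCollaborationData data)
    {
        var subsystem = m_Session.subsystem as ARKitSessionSubsystem;
        if (subsystem != null)
            subsystem.UpdateWithCollaborationData(data);
    }

    void SendToPeers(ARCollaborationData data)
    {
        // Placeholder: serialize and transmit with your networking solution.
    }
}
```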

Other improvements

ARKit 3 brings additional improvements to existing systems.

Both image tracking and object detection features include significant accuracy and performance improvements. With ARKit 3, devices detect up to 100 images at a time. The AR Foundation framework automatically enables these improvements.

Additionally, object detection is far more robust, being able to more reliably identify objects in complex environments. Finally, environment probes tracked by ARKit will now produce HDR cubemaps for environment textures of each probe. HDR environment textures may be disabled on the AR Foundation Environment Probe Subsystem.
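For apps whose rendering pipeline expects LDR environment textures, HDR output can be turned off on the probe manager. The property name below (`environmentTextureHDR`) is an assumption for AR Foundation 2.2; verify it against the Environment Probe Subsystem documentation.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: opting out of HDR environment textures.
public class ProbeHDRExample : MonoBehaviour
{
    [SerializeField] AREnvironmentProbeManager m_ProbeManager;

    void Start()
    {
        // HDR cubemaps are produced by default with ARKit 3.
        m_ProbeManager.environmentTextureHDR = false;
    }
}
```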

Try all of these features in AR Foundation

As always, feel free to reach out to us on the Unity Handheld AR Forums if you have any questions.

We are very excited to bring you these latest features of ARKit via AR Foundation, and we'll be adding more samples demonstrating these new features to the arfoundation-samples repository on GitHub.

We can’t wait to see what you can make with it!


  1. Undefined symbols for architecture arm64:
    "_OBJC_CLASS_$_ARCollaborationData", referenced from:
    objc-class-ref in UnityARKit.a(ARKitXRSessionProvider.o)
    "_OBJC_CLASS_$_ARSkeletonDefinition", referenced from:
    objc-class-ref in UnityARKit.a(ARKitXRHumanBodyProvider.o)
    "_OBJC_CLASS_$_ARBodyAnchor", referenced from:
    objc-class-ref in UnityARKit.a(ARKitXRHumanBodyProvider.o)
    "_OBJC_CLASS_$_ARBodyTrackingConfiguration", referenced from:
    objc-class-ref in UnityARKit.a(ARKitXRHumanBodyProvider.o)
    "_OBJC_CLASS_$_ARMatteGenerator", referenced from:
    objc-class-ref in UnityARKit.a(ARKitXRHumanBodyProvider.o)
    "___isPlatformVersionAtLeast", referenced from:
    _UnityARKit_Camera_AcquireConfigurations in UnityARKit.a(ARKitXRCameraProvider.o)
    _UnityARKit_Camera_TryGetCurrentConfiguration in UnityARKit.a(ARKitXRCameraProvider.o)
    _UnityARKit_Camera_TrySetCurrentConfiguration in UnityARKit.a(ARKitXRCameraProvider.o)
    (anonymous namespace)::ARKitXRCameraProvider::ResetLocalConfigurationState() in UnityARKit.a(ARKitXRCameraProvider.o)
    (anonymous namespace)::ARKitXRCameraProvider::HandleARKitEvent(UnityARKitEvent, void*, int) in UnityARKit.a(ARKitXRCameraProvider.o)
    _UnityARKit_EnvironmentProbeProvider_Construct in UnityARKit.a(ARKitXREnvironmentProbeWrapper.o)
    _UnityARKit_EnvironmentProbeProvider_Destruct in UnityARKit.a(ARKitXREnvironmentProbeWrapper.o)

    ld: symbol(s) not found for architecture arm64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

  2. Hi, when you say "only on iOS devices with the A12 Bionic chip and ANE" does this include the iPad Pro models with the A12X?

    1. My understanding is that iPad Pro with the A12X running iOS 13 should support these new ARKit 3 features.

      1. I read in an article today that it needs the TrueDepth camera, ruling out the iPad mini 2019 and iPad Air 2019 with the same chips.

        So far it seems only these devices will be supported:
        iPhone XR
        iPhone XS or XS Max
        iPad Pro 2018 (11-inch or 12.9-inch)

        1. William Todd Stinson

          June 7, 2019 at 7:03 pm

          The motion capture and people occlusion features do not rely on the TrueDepth camera. These two features use the rear-facing camera.

      2. Hi Whitman, do you believe iPad Air 2019 would support full body motion tracking? I can’t seem to find a confirmation in light of the press articles that implied/stated otherwise. Thanks!

  3. That means we can use ARKit's facial-expression features; can't wait to test.

  4. Any news on improvements to the post processing stack to match RealityKit level? (Motion blur, depth of field, grain and ray-traced soft shadows)?