
Google’s new Resonance Audio SDK for Unity lets you render hundreds of simultaneous 3D sound sources in high fidelity for your XR, 3D, and 360 video projects on Android, iOS, Windows, macOS, and Linux. This SDK includes two Unity-exclusive features: Ambisonic Soundfield Recording and Geometric Reverb Baking.

With Resonance Audio, some of the biggest audio challenges for immersive and interactive experiences have been solved. It enables you to deliver realistic, impactful sound on multiple platforms – including mobile – without compromising audio quality or exhausting CPU resources. The Resonance Audio SDK for Unity also lets you author ambisonic clips, combine ambisonic and spatialized clips, generate realistic reverb based on scene geometry and acoustic surface materials, and deliver a host of other impressive audio effects.

Hear the power of Resonance Audio in action

Audio Factory is a VR experience that showcases the features and capabilities of the Resonance Audio SDK. Experience spatial audio in this exhilarating clip from Audio Factory.

Resonance Audio in Google’s Audio Factory VR app. App available for free on Daydream and SteamVR.

The technology behind Resonance Audio already powers the top Made with Unity VR apps on Daydream, as well as all YouTube 360 videos with spatial audio.

Eclipse: Edge of Light

Virtual Virtual Reality

Fantastic Beasts and Where to Find Them: VR

The Turning Forest

It’s time to bring high-fidelity, spatial audio that scales to your Unity projects. Learn more about Resonance Audio’s features and check out the resources below to help you get up and running.

Getting Started with Resonance Audio

Here’s how to get started with Resonance Audio in your project:

  1. Make sure you have Unity 2017.1 or later installed. If not, install Unity.
  2. Download the Resonance Audio SDK for Unity.
  3. Learn more by visiting Google’s Resonance Audio and Developer Guides for Unity.
  4. Join the discussion in our Unity Forum.

Resonance Audio: Overview and Features

How it works

Audio spatializers are typically CPU-intensive and the cost increases with each audio source. In contrast, Resonance Audio’s spatializer is efficient and it scales well as audio sources are added to a scene. Resonance Audio accomplishes this through a unique design, where each audio source clip is converted into ambisonic format.

This format retains enough spatial information to spatialize the source effectively later. All sources are then mixed together, and the “expensive” spatialization step is applied just once to the combined mix. This allows hundreds of simultaneous high-fidelity sources to be handled per CPU core, even on mobile devices.
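The encode-mix-decode pipeline above can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Resonance Audio’s actual implementation: the function names are made up, the channel weighting convention (e.g. SN3D vs. FuMa) varies between ambisonic formats, and the final binaural decode is shown only as a comment.

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order ambisonics (B-format).
    Returns (w, x, y, z) channel values; weighting convention is illustrative."""
    ca, sa = math.cos(azimuth), math.sin(azimuth)
    ce, se = math.cos(elevation), math.sin(elevation)
    return (sample,            # W: omnidirectional component
            sample * ca * ce,  # X: front/back
            sample * sa * ce,  # Y: left/right
            sample * se)       # Z: up/down

def mix(encoded):
    """Sum per-source B-format frames into one soundfield frame."""
    return tuple(sum(ch) for ch in zip(*encoded))

# Encoding each source costs only a few multiplies. The expensive
# binaural decode (HRTF filtering) would then run once on the summed
# (w, x, y, z) soundfield, no matter how many sources were mixed in.
sources = [(0.5, 0.0, 0.0),            # (sample, azimuth, elevation): straight ahead
           (0.25, math.pi / 2, 0.0)]   # a quieter source to the listener's left
frame = mix([encode_first_order(s, az, el) for s, az, el in sources])
```

The key point is that the per-source work stays tiny, which is why adding sources scales so well compared with running a full spatializer per source.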

Ambisonic Decoder: Combining ambisonic clips with spatialized clips

Resonance Audio includes an Ambisonic Decoder plugin. Thanks to Resonance Audio’s ambisonics support, developers can use it to create rich audio experiences that combine ambisonic clips with more traditional audio clips. First-order ambisonic clips are mixed into the global internal ambisonic representation that is already generated for all spatialized audio sources, and the spatialization step is then applied just once to the mix of all audio sources.

Ambisonic Soundfield Recording: Authoring clips in the Unity Editor

Ambisonics are an exciting advance for XR (AR/VR) audio because they project sounds above and below the listener as well as on the horizontal plane. Think of them as the audio equivalent of 360 video: XR ambiences rotate correctly as you turn your head and can respond in other interesting and creative ways during an XR experience.

One typical issue with ambisonics, however, has been that these clips are difficult to record and author. Now, with the Unity-exclusive Ambisonic Soundfield Recording tool in the Resonance Audio SDK, sound designers can use Unity itself to author ambisonic clips. This feature lets you place many ambient audio sources in a scene and then bake out a single ambisonic clip based on the mix of the original clips.

The newly created ambisonic clip is much “cheaper” to play back than several audio sources. It also retains enough relative positional information to realistically simulate where each sound originated and have those sounds rotate correctly as you turn your head in an XR experience.

Geometric Reverb Baking: Calculating audio reflections and reverb

Also exclusive to Unity, this feature lets developers generate realistic reverb based on the geometry and associated acoustic surface materials in a scene. Resonance Audio also supports direct sound propagation, occlusion, near-field effects, sound source spread, and directivity-shaping for sound sources and listeners.
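Directivity-shaping is commonly built on a simple parametric radiation pattern. The sketch below shows a generic first-order pattern in Python; `directivity_gain` and its parameter names are illustrative assumptions, not Resonance Audio’s actual API. An alpha value blends an omnidirectional source toward a dipole, and a sharpness exponent narrows the lobe.

```python
import math

def directivity_gain(alpha, sharpness, theta):
    """Gain of a parametric first-order directivity pattern.
    alpha blends omnidirectional (0.0) toward dipole (1.0); 0.5 is a
    cardioid. sharpness >= 1 narrows the lobe. theta is the angle (radians)
    between the source's forward axis and the direction to the listener."""
    base = (1.0 - alpha) + alpha * math.cos(theta)
    return abs(base) ** sharpness

# A cardioid source radiates at full strength straight ahead and
# falls to silence directly behind it.
front = directivity_gain(0.5, 1.0, 0.0)       # full gain ahead
behind = directivity_gain(0.5, 1.0, math.pi)  # silent behind
```

This kind of pattern is cheap to evaluate per source, which is why directivity can be applied to both sources and listeners without meaningful CPU cost.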

Resonance Audio’s sound directivity customization

Resonance Audio’s Geometric Reverb Baking in Unity

Environmental audio – modeling how the environment affects authored sounds – has been another ongoing challenge. Early environmental modeling was often simplified to the “shoebox” model, which assumes a rectangular room around the listener and audio sources. Now, with the Resonance Audio SDK, you can use the actual scene geometry to model these environmental effects more realistically.
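To see how surface materials feed into reverb, consider Sabine’s classic reverberation-time estimate, the kind of statistical formula the shoebox era relied on. This is a hedged sketch in plain Python: the room dimensions and absorption coefficients are invented for illustration, and geometry-based baking is considerably more sophisticated than this single equation.

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's estimate of reverberation time (RT60, in seconds):
    RT60 = 0.161 * V / A, where A = sum of (area * absorption coefficient).
    surfaces is a list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 5 x 4 x 3 m "shoebox" room (hypothetical material coefficients).
rt60 = sabine_rt60(
    volume_m3=60.0,
    surfaces=[(54.0, 0.3),   # walls
              (20.0, 0.1),   # floor, e.g. a reflective surface
              (20.0, 0.6)])  # ceiling, e.g. absorbent acoustic tile
# More absorbent materials shrink RT60; larger rooms lengthen it.
```

Swapping a material changes the absorption sum and therefore the decay time, which is exactly the relationship that surface-material tags expose to a reverb baker.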

Cross-platform support: Build once, deploy everywhere

The Resonance Audio SDK for Unity supports development for Android, iOS, Windows, macOS, and Linux platforms.

Keep us in the audio loop!

It’s time to bring high-fidelity Resonance Audio to your Unity projects to wow your users with truly immersive and realistic sound and effects. So get started now by installing the Resonance Audio SDK for Unity and joining the discussion in the Unity Forum. We can’t wait to hear from you – and hear your results!

Comments are closed.

  1. The article mentions sound propagation but I can’t find anything about that on the Resonance Audio website?

  2. Hi! Thank you for the support to Google Spatializer!

    I suppose this does not work on PS4VR, does it?

  3. Nicholas Ventimiglia

    November 7, 2017 at 4:47 pm

    Does this solve Android audio lag? This is still the largest problem with audio on Android.

    1. Resonance Audio does not add any additional latency into the Android audio pipeline. However, it also doesn’t improve the existing latency in Android.

  4. Great news! Has this been tested with GoogleVR and ARCore? Ideally, I would want to clear GVRAudio from my project and use this instead. Has this been tested yet by any chance? Cheers!

    1. Yes! The Resonance spatializer and ambisonic decoder should work well with Google VR and ARCore. Also, Google’s documentation explains exactly what you need to do to upgrade from the GVR spatializer and components to the Resonance versions.

  5. And consoles…?
    And non VR platforms…?
    Blending between them in each area, i.e. a normal third-person game with rich sound, no VR?

    Just want to confirm.

    1. PC, Mac, iOS, and Android. The Google SDK supports the web, but the Unity plugin doesn’t yet.

  6. What difference between this and Steam audio?

    1. From reading about it, it looks like it supports a lot more platforms, and seems to be a good bit more computationally efficient.

    2. Steam Audio is another exciting piece of tech in this area of spatialization and environmental audio. Resonance Audio supports both iOS and Android, so I believe its mobile platform support is currently more complete. I think the ambisonic soundfield recording feature is also unique to Resonance Audio. Both Valve and Google have great audio teams and are continuing to work hard in this space, so I highly recommend reading about and trying out the tech from both companies.

  7. Good to see spatial audio getting some proper integration and support in the editor !
    Might as well spare me some supporting GoogleVR in my asset going forward.

  8. Still no nested prefabs, this many years on.

    1. Thank you for this comment that is totally related to the article.