Mixing Sweet Beats in Unity 5.0

July 24, 2014 in Technology

One of the big areas of focus for Unity 5.0 has definitely been audio. After a quiet period of feature development in this area, we have been working hard to make audio a first-class feature within Unity.

To make this work we first had to take a step back and re-work a lot of the underlying asset pipeline and resource management of audio within Unity. We had to make our codec choices solid and ensure we had a framework that allowed you guys to have a lot of good quality sounds in your game. I’ll try and cover this in detail in a later post, but right now I want to talk about our first big audio feature offering in Unity 5.0, the AudioMixer.

Our First Move

Besides creating a solid foundation for future audio development in Unity, we wanted to give you guys a shiny new feature as our first big ‘we want you to make awesome audio’ offering. Something to show we mean business and want to empower you as best we can on the audio front.

There are a number of areas within the audio system we will be improving over the coming release cycles. Some of them are small and address the little issues that have been outstanding in Unity thus far, things you could consider as fixing the existing feature set. Some of them will be larger, like the ability for users to make amazing interactive sounds, immersive music systems and fine-grained control over the mix of the audio soundscape.

Why the AudioMixer?

The question of why we chose the AudioMixer as the first big audio feature to push for Unity 5.0 can be answered pretty simply: previously, true sub-mixing of audio was not possible within Unity. Sounds could be played on an AudioSource, where effects could be added as Components. From there, all the sounds within the game were mixed together at the AudioListener, where effects could be added to the entire soundscape.

We decided to address this with the AudioMixer, and while we were there we thought we would take it to the next level, incorporating many features you would find in established Digital Audio Workstation applications.

Sound Categories


As many sound engineers know, it is super useful to be able to combine collections of sounds into categories and apply volume control and effects over the entire category in one place. Hooking up those volumes and effect parameters to game logic effectively gives you a master control over an entire area of the game's soundscape.
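If you want to drive one of those category volumes or effect parameters from game logic, you can expose it on the AudioMixer and set it by name from script. Here is a minimal sketch, assuming a mixer assigned in the Inspector and a hypothetical exposed parameter named "AmbienceVolume":

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Minimal sketch: driving an exposed AudioMixer parameter from game logic.
// "AmbienceVolume" is a hypothetical name given to the parameter in the
// mixer's Exposed Parameters list; the mixer asset is assigned in the Inspector.
public class AmbienceController : MonoBehaviour
{
    public AudioMixer mixer;

    // Call from game logic, e.g. when the player enters a building.
    public void SetAmbienceLevel(float volumeInDb)
    {
        mixer.SetFloat("AmbienceVolume", volumeInDb);
    }
}
```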

This control over the mix of the entire soundscape is super important! It's a fantastic way of controlling the mood and immersion of the audio mix. A good mix and music track can take players through the full range of emotions during gameplay, and create atmospheres that are not possible with graphical flair alone.

Mixing In Unity

This is the purpose of the AudioMixer. It's an asset that users can incorporate into their scenes to control the overall mix of all the sounds in the game. All the sounds playing in a scene can be routed into one or more AudioMixers, which will categorise them and apply all sorts of modifications and effects to the mix of those sounds.
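From script, routing a sound into one of those categories is a matter of pointing the AudioSource at an AudioMixerGroup. A minimal sketch, assuming your mixer has a group called "Music":

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Minimal sketch: routing an AudioSource into a mixer category at runtime.
// The group name "Music" is an assumption; substitute one of your own AudioGroups.
public class RouteToMixer : MonoBehaviour
{
    public AudioMixer mixer;    // assigned in the Inspector
    public AudioSource source;  // the sound to (re)route

    void Start()
    {
        // FindMatchingGroups matches against the group path within the mixer.
        AudioMixerGroup[] groups = mixer.FindMatchingGroups("Music");
        if (groups.Length > 0)
            source.outputAudioMixerGroup = groups[0];
    }
}
```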


Each AudioMixer can have a hierarchy of categories defined, which in the case of the AudioMixer are called AudioGroups. You can also view a lineup of these AudioGroups in a traditional mixing desk layout, which many from the music and film industries will be used to.

DSP

The AudioMixer is, of course, more than just setting up mixing hierarchies. As one would expect, each AudioGroup can contain a bunch of different DSP audio effects that are applied sequentially as the signal passes through the AudioGroup.

Now we are getting somewhere! Not only can you create custom routing schemes and mixing hierarchies, but you can also put all sorts of DSP goodies anywhere in the signal chain, opening up all sorts of effect options over your soundscapes. You can even add a dry path around an effect so that only a portion of the signal is processed by it.

But what if you want more DSP control than just the built-in effects of Unity? Previously this was handled exclusively with the OnAudioFilterRead script callback, which allows you to process audio samples directly in your scripts.
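For reference, here is a minimal sketch of that callback: a simple gain stage written in script, attached to a GameObject with an AudioSource. The callback runs on the audio thread for every block of samples passing through.

```csharp
using UnityEngine;

// Minimal sketch of OnAudioFilterRead: a simple gain stage written in script.
// Attach to a GameObject with an AudioSource; the callback is invoked on the
// audio thread for each block of samples passing through.
[RequireComponent(typeof(AudioSource))]
public class ScriptedGain : MonoBehaviour
{
    [Range(0f, 2f)]
    public float gain = 0.5f;

    void OnAudioFilterRead(float[] data, int channels)
    {
        // data is interleaved: frame 0 [ch0, ch1, ...], frame 1 [ch0, ch1, ...]
        for (int i = 0; i < data.Length; i++)
            data[i] *= gain;
    }
}
```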

This is great for lightweight effects or prototyping your fancy filter ideas. Sometimes, though, you want the ability to write native compiled effects for the best performance, letting you tackle heavier-weight ideas such as a custom convolution reverb or multi-band EQ.


Unity now also supports custom DSP plugin effects, giving users the ability to write their own native DSP for their game, or perhaps distribute their amazing effect ideas on the Asset Store for others to use. This opens up a whole world of possibilities, from writing your own synth engine to interfacing with other audio applications like Pure Data. These custom DSP plugins can also request sidechain support and will be supplied sidechain data from anywhere else in the mix! Hawtness!

One of the cool things possible with the effect stack in an AudioGroup is that you can apply the group's attenuation anywhere in the stack. You can even boost the signal now, as we allow volume levels up to +20 dB. The inspector even has an integrated VU meter to show you exactly what is happening with the signal at the point of attenuation.


When combined with non-linear DSP, Sends / Receives and our new Ducking insert (which will be explained later in this post), it becomes a super powerful way of controlling the flow of audio signal through a mix.

Mood Transitions

I talked earlier about controlling the mood of the game with the mix of the soundscape. This can be achieved by bringing new stems of music or ambient sounds in and out. Another common way to achieve this is to transition the state of the mix itself. Changing the volume of sections of the mix and transitioning to different parameter states of effects is an effective way of taking the mood of a player where you want them to go.

Inside all AudioMixers is the ability to define snapshots. Snapshots capture the state of all of the parameters in the AudioMixer. Everything from effect wet levels to AudioGroup pitch levels can be captured and transitioned between.


You can even create complex blend states between a whole bunch of Snapshots within your game, creating all sorts of possibilities and uses.
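The snapshots themselves are authored in the mixer window, and a transition or a weighted blend can then be triggered from script with a couple of calls. A minimal sketch; the snapshot names "Field" and "Cave" are assumptions:

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Minimal sketch: triggering snapshot transitions and blends from script.
// The snapshot names "Field" and "Cave" are assumptions; the snapshots are
// authored in the AudioMixer window.
public class MoodTransitions : MonoBehaviour
{
    public AudioMixer mixer;

    public void EnterCave()
    {
        // Interpolate every captured parameter towards the "Cave" snapshot
        // over two seconds.
        mixer.FindSnapshot("Cave").TransitionTo(2.0f);
    }

    public void BlendMoods(float caveAmount)
    {
        // Weighted blend between two snapshots, e.g. while standing in the cave mouth.
        AudioMixerSnapshot[] snapshots =
        {
            mixer.FindSnapshot("Field"),
            mixer.FindSnapshot("Cave")
        };
        float[] weights = { 1.0f - caveAmount, caveAmount };
        mixer.TransitionToSnapshots(snapshots, weights, 0.5f);
    }
}
```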

Imagine walking from an open field section of your map into a sinister cave and having the mix transition to highlight more subtle ambiences, bring in different instruments of your music ensemble, and change the reverb characteristics of the foley. Imagine setting this up without having to write a line of script code.

Divergent Signal

But the power of Snapshots really shines when combined with Sends, Receives and Ducking.

Sends


Aside from traditional insert DSP effects available in Unity, you can also insert a “Send” anywhere into the mix. A Send effectively branches the audio signal wherever the Send is inserted, and within Sends you can choose how much of the signal you wish to branch off.

Things are now becoming even more interesting! Given that the level of signal you branch off is part of the snapshot system, you can start to see how you can incorporate signal flow changes with snapshot transitions. From here the potential setup possibilities start snowballing.

But where does this branched signal go? Currently there are two options in Unity for a Send to target: Receives and Volume Ducking.

Receives

Receives are fairly straightforward processing units. They are inserts just like any other effect, and they simply take all the branched audio from all the Sends that target them and mix it together, passing it off to the next effect in the AudioGroup.

Receives can of course be placed anywhere among an AudioGroup's effects and its attenuation point, which gives huge flexibility over where the branched signal is introduced to the mix.

Volume Ducking

Sends can also target Volume Ducking insert units. Much like Receives, these units can be placed anywhere in the mix alongside your other DSP effects.


When a Send is targeting a Volume Ducking insert, it acts much like a side chain compressor setup, meaning you can side chain from anywhere in the mix and apply volume ducking anywhere else from it!

What does this mean for the layman? Imagine you are mixing your FPS game and you want to avalanche the player with the sound of gunfire and explosions. Fair enough, but what about when you walk up to an NPC on the field of battle and need to hear the sagely words they utter? Volume Ducking allows you to dynamically lower the volume of entire sections of the mix (in this case, all the ordnance sounds) based on other sections of the mix (the NPC talking). You simply Send from the AudioGroup containing all the NPC dialog to a Volume Ducking unit on the Ordnance AudioGroup.

You could even apply side chain compression to your musical setup dynamically, having the rest of your instruments compressed off the bass track.

The best thing is you can set this all up in the editor without a line of code!

Parting Words

Even though I have really only scratched the surface of what the AudioMixer provides in this post, I hope it's enough to spark people's interest in the possibilities of audio in Unity 5.0.

In Unity 5.0 and beyond we really want to push the future of audio in games, giving you the suite of tools you need to make awesome-sounding stuff!

Let us know your thoughts!

The Audio Team

PS: Wayne will talk about  New Audio Radness in Unity 5.0 at Unite 2014, August 22, 13:30-14:30. See you there!

Bonus Video: Beat mixing in Unity!

Comments (34)

  1. Rush

    August 28, 2014 at 5:16 pm / 

    @Wayne
    Is it possible in the new Unity 5 audio system to input audio from the microphone, find its waveform, and transform an already existing audio file to that waveform? It's something like Auto-Rap, but instead of the microphone-recorded waveform being superimposed on an existing waveform, we need to transform an already existing audio file to the microphone-recorded waveform. Is there any way this can be achieved in Unity? Can this be achieved using the GetSpectrumData function or an FFT algorithm? Any input is appreciated.

  2. Kevin Ng

    August 27, 2014 at 1:01 am / 

    Hi Wayne, I was at the talk at #unite14 but there were too many questions to ask at the end… Will there be support for multiple AudioListener objects? My use case is split-screen multiplayer.

    Cheers!

  3. James

    August 22, 2014 at 1:11 am / 

    Would this allow us to mix single gun sounds on the fly into one continuous track and stop the overflow that you get in Unity 4 when too many sounds are played at once?

  4. wayne

    August 20, 2014 at 10:09 pm / 

    @OGGY The hierarchy is not just a collection of stems. The stems, which come from the AudioSources, are routed into the hierarchy.

    Also it is not called a return, because it doesn’t work like a traditional return of a classic mixer. It is literally a target sink for audio to be sent to, and it will be mixed in with the signal passing through the group. So we named it more appropriately.

    The Receive can be placed anywhere in the signal chain, giving full flexibility.

  5. Vonchor

    August 9, 2014 at 7:54 pm / 

    Are the DSP plugins using the FMOD plugin interface? That is, can I use FMOD DSP plugins with unity5?

  6. madam ozi

    August 3, 2014 at 11:37 pm / 

    oh wow , iwanna say give it to me jimmy yah give it to meeeeeee HARDDDDDD X3 :D

  7. Tadej

    July 30, 2014 at 9:22 pm / 

    What about recording audio from multiple channels? Can we add each channel to an AudioGroup and do processing on it?

  8. AtomicJoe

    July 29, 2014 at 12:20 pm / 

    Is the new sound system backward compatible with the old one?
    Will the Unity 4 sound API still be available for compatibility's sake?

    Currently I use some asset store sound analyzers and rely heavily on them.
    Will it break on Unity 5?

  9. Richard Fine

    July 27, 2014 at 1:14 am / 

    @DALLON FELDNER: yes :) you can still just get an explosion sample, stick it in an AudioSource, and tell that AudioSource just to spit out to the Master bus.

    But if you later find you’ve got the cash to hire a sound designer, they’ll be able to use all this stuff to make your explosion sound super-awesome.

  10. 9

    July 26, 2014 at 12:28 pm / 

    This is a big leap in audio for Unity. Almost from the stone age to the space race. More powerful and intuitive for audio natives. Looking forward to getting it into our hands.

  11. Gavalakis Vaggelis

    July 26, 2014 at 3:28 am / 

    I don’t know much about the rest of the v5 features, but this one is what I’ve totally been waiting for since I saw the sneak peek video.
    Keep it up. Seems freaking awesome!

  12. mycall

    July 26, 2014 at 2:11 am / 

    I hope Unity or some 3rd party can use this with ambisonics for realtime 3D spherical sound shaping.

  13. Oggy

    July 26, 2014 at 2:01 am / 

    I love where all this is going, and it’s really cool to see Unity harnessing a proper mixing and routing system, but what’s with all the language?

    “AudioGroup” as a term for group channels in this new mixer context is clear but:
    Isn’t a “hierarchy” just a bunch of stems, and therefore a group channel?
    Why is it “receive” and not return, and if not technically a return path, why not just channel input? The article makes it sound like it could also be per plugin in the insert chain. Confusing.

    That said, I’m sure my queries will be resolved in due time, and I’m really looking forward to trying out an audio dev system that might approach the feel of standard audio workflow.

  14. Togrul

    July 25, 2014 at 6:45 am / 

    The new audio system is only for very big projects. We only need WebGL, small build sizes, and a stable Unity without errors and bugs. There are 100500 free programs for audio mixing.

  15. Dallon Feldner

    July 25, 2014 at 4:37 am / 

    I don’t speak audio, can somebody translate?

    More importantly, will it still be easy for a layperson like me to just say “I want this thing to sound like it blew up” without dealing with… whatever language this is?

  16. Vectrex

    July 25, 2014 at 3:15 am / 

    Do you still have MOD music support? They’re really nice for mobile since they’re tiny. Plus individual track controls would instantly allow interactive music with zero cost.

    Also, why do people describe data flow with a node graph, then in the actual program they don’t have a node graph? Modular audio wiring is pretty much standard for advanced audio configurations and it’s actually much easier/clearer AND more powerful.

  17. Shkarface Noori

    July 25, 2014 at 2:07 am / 

    Very interesting, keep these tech blogs coming every weekend (if you have time) please, it really helps.

  18. Chenix

    July 24, 2014 at 11:48 pm / 

    Very interesting. Looking forward.

  19. Michael La Manna

    July 24, 2014 at 11:33 pm / 

    Super excited about this :)

  20. Laurent

    July 24, 2014 at 6:51 pm / 

    This is beautiful.
    +1 on Alan’s suggestion, alongside some form of physically accurate acoustic modeling – this would allow us to make sound centric, VR friendly FPS and hire any sound engineer right out of school.

  21. Victoria

    July 24, 2014 at 5:41 pm / 

    Wonderful news! Audio is really important for games, often no less than graphics, but only if it’s great, quality audio. I love Unity’s level of work with graphics and hope the quality of work with audio will be as good. Can’t wait to test it out for myself.

  22. Alan Stagner

    July 24, 2014 at 5:01 pm / 

    Looks nice, but have you considered adding support for HDR audio mixing? This was employed in Battlefield: Bad Company 2, and the idea is you assign a “loudness” to each sound based on real-world dB values, and the entire mix is adjusted to compensate for them (basically the audio version of HDR tonemapping).

  23. Geoff

    July 24, 2014 at 4:57 pm / 

    I’m with Anthony. I would love to have MIDI support with the ability to use note on/off events or volume level, etc. to activate a trigger.

  24. Frank

    July 24, 2014 at 4:36 pm / 

    Awesome! Have you also improved the video support while you were setting the new codec standards? And can the new audio mixer handle the audio from the movies as well?

  25. Anthony

    July 24, 2014 at 4:17 pm / 

    Sounds great. I’d like to know: would you be able to load MIDI files and/or sync sound based on time?

  26. Sergio

    July 24, 2014 at 2:54 pm / 

    We want better control over tracked files :)

  27. DrSalka

    July 24, 2014 at 1:18 pm / 

    - Any news on increased audio polyphony? (currently limited to 32 voices)
    - Does Audiosettings.SetDSPbuffer still work with the new audio system? (we need ultra-low latency for our apps)
    - Does the new compressed audio format provide better CPU usage over mp3 audio when decompressed on the fly (iOS)?
    Thanks!

  28. Jashan

    July 24, 2014 at 1:15 pm / 

    Wow, this is some really great stuff that’s coming up in Unity 5! While quite improved when FMOD was introduced (I don’t even remember the name of the library it replaced back then ;-) ), Audio was severely lacking in Unity. Seems like Unity 5 will remedy this. Awesome – because audio is really important in games!!!

  29. wayne

    July 24, 2014 at 12:49 pm / 

    @gonarch – The new audio system has improved audio streaming and fixes a lot of the underlying streaming issues. For example, streaming from asset bundles works consistently now across all platforms.

    We have also added the ability to delay load the audio data or stream, so you can effectively choose when you want to create the stream handle and destroy it. This allows fine control over how you manage your AudioClip resources.

    So yes, you should see improved streaming support.

  30. Nobot

    July 24, 2014 at 12:48 pm / 

    Awesome, so…

    A no-graphics mode, please?
    If we want to use audio features only, we would at least be happy to be able to stop the Update() graphics loop.

  31. Indy

    July 24, 2014 at 12:45 pm / 

    Out of interest: I am still waiting for better video support.
    Are those “areas of focus for Unity 5.0” coming from developer surveys?

  32. Unity3dx

    July 24, 2014 at 12:45 pm / 

    Please add a “save as VSTi” option so that we can create the most amazing visual/audio plugins for the audio industry!

  33. Gonarch

    July 24, 2014 at 12:16 pm / 

    I would like to ask something about the new audio system, not quite related to this post, but anyway: will there be any feature to stream audio data from local disk at runtime?

    Yeah, I know about www and trying to get the bytes of a file and converting the data to floats, but it isn’t always precise and it allocates more memory than it should. Mind that this isn’t just about loading a long track at runtime and playing it, but about loading several sound files and assigning them as AudioClips without wasting memory.

  34. Terrell G.

    July 24, 2014 at 12:15 pm / 

    Looking good, this was needed.
