
Some of you might be wondering what a Software Test Engineer actually does. Testing is a creative process, where the most important deliverables are… bugs! Assessing each bug involves deep consideration of the systems and features in the Unity editor. This blog post is my attempt to give you a small peek into the processes I went through during the development of the Audio Mixer.

[Screenshot: the Audio Mixer]

A testing challenge

Audio is a difficult domain to test, both manually and in automation, for two main reasons:

  • People with a wide variety of backgrounds and levels of knowledge of the audio domain will be using Unity’s new Audio Mixer: for example, audio programmers building their own customized systems, and sound artists or composers who aren’t necessarily experts in game audio integration or game production workflows.

  • The Unity engine deploys to multiple platforms, from mobiles to consoles, that all have different audio requirements.

Challenges for audio designers in Unity 4

Having an academic background in Musicology and a professional background in the computer games industry as an audio designer, I am familiar with some of our users’ backgrounds. So, as a tester, I have a fair idea of the references audio designers/composers make to other audio tools and workflows when working in Unity.

One of the challenges we face in Unity 4 is that audio designers/composers who do not have a programming background are dependent on an audio programmer to create customized components and tools so they can tweak sounds in the scene at runtime. Or, they have to invest in a third party audio middleware solution that works with Unity.

With the Audio Mixer feature and its sub-features, such as Snapshots, DSP plugins and Duck Volume (to mention just a few), the Unity 5 editor introduces new concepts for audio designers/composers. Audio designers can now work productively directly in the editor instead of being forced to use additional software.
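As a rough illustration of what "working directly in the editor" enables: a few lines of script can now drive an exposed mixer parameter at runtime, with no custom tooling required. A minimal sketch, assuming a mixer asset with a parameter exposed under the placeholder name "MusicVolume" (the class and field names here are illustrative, not part of the feature):

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Minimal sketch: drive an exposed mixer parameter at runtime.
// "MusicVolume" is a placeholder; the parameter must first be exposed
// on the mixer asset in the editor (right-click the parameter > Expose).
public class MusicFader : MonoBehaviour
{
    public AudioMixer mixer;          // assign the Audio Mixer asset in the Inspector

    [Range(-80f, 0f)]
    public float targetVolumeDb = -10f;

    void Update()
    {
        // SetFloat sets the exposed parameter; for a volume fader the value is in dB.
        mixer.SetFloat("MusicVolume", targetVolumeDb);
    }
}
```

The point is that the audio designer can hook such a component up in the Inspector and tweak the value live, without depending on a programmer to build a bespoke system.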

Early development and high level ideas bouncing back and forth

During early development and feature testing, I had discussions with developers about the role of the audio designer. What are the biggest challenges, and what kind of DAWs (Digital Audio Workstations, such as Logic, Nuendo, Ableton Live, etc.) do they work with? Being an audio designer myself, what are my favourite tools and why?

We agreed that at one end of the scale we have the persona of the audio designer who is used to doing field/studio recordings, and at the other end we have the audio designer who’s familiar with programming and/or builds his or her own synths and patches in a visual programming language such as Pure Data/Max. To accommodate this, the Audio Mixer feature had to:

  • be easy to use for all types of audio designers at multiple levels
  • scale up quickly during production, whether you are working on a small indie production or an AAA console game
  • be “open” in the sense that the audio designer can customize and be more experimental in his/her approach to game audio
  • give the audio designer the ability to tweak sounds at runtime

As a tester, I pitched in to the discussions with my domain knowledge, also providing examples of other types of non-commercial audio middleware plugins developed for Unity. We discussed the biggest barriers I had experienced as an audio designer, what kinds of DAWs I had worked with, and which features or workflows of those DAWs I liked and why. The questions that arose when I started testing the Audio Mixer feature in its pre-alpha state included:

  • The user interface has references to an analog mixer, which is typical for DAWs, but would the audio designer be confused to find that the user interface (which looks like an audio channel) can in fact represent a group of audio sources rather than a single instrument?
  • Would the audio designer expect to meet exactly the same workflow as in DAWs?
  • How should we introduce new concepts in the editor such as Snapshots? Would it make sense for an audio designer who has not seen any documentation relating to this? Would even the word “Snapshots” make sense? How do people refer to these features in other types of audio middleware?
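To make the Snapshots question concrete: a Snapshot is a stored set of mixer parameter values that the mixer can interpolate towards at runtime. A minimal sketch of how recalling one might look from script, assuming a mixer asset with a snapshot created under the placeholder name "Underwater":

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Minimal sketch: crossfade the whole mixer to a stored Snapshot.
// "Underwater" is a placeholder snapshot name, created on the mixer asset
// in the editor beforehand.
public class SnapshotTrigger : MonoBehaviour
{
    public AudioMixer mixer;   // assign the Audio Mixer asset in the Inspector

    void OnTriggerEnter(Collider other)
    {
        AudioMixerSnapshot snapshot = mixer.FindSnapshot("Underwater");
        if (snapshot != null)
        {
            // Interpolate all mixer parameters to the snapshot's values over 2 seconds.
            snapshot.TransitionTo(2f);
        }
    }
}
```

Whether the word "Snapshot" communicates this behaviour to a designer who has never seen the documentation is exactly the kind of question the testing had to probe.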

Mind mapping helps me get an overview of the way a user can interact with a system’s UI in order to achieve a goal. Below is a mind map of the Audio area in Unity, which the Audio Mixer is part of. The mind map is based on James Bach’s heuristic testing strategy model.

Oracles, Oracles, Oracles

One of the testing methods that I learnt more about during a Black Box Testing course was the use of “oracle heuristics”. A brief definition:

“The point of an oracle is to help you decide whether a product’s behavior is inappropriate and, if so, to help you explain persuasively to someone else why they should consider it inappropriate.”

When performing manual testing on the Audio Mixer, I was very much aware that my primary oracle heuristic was “Comparable Products”. As a user of Ableton Live 9, I was using the user interface and workflows from this DAW as a reference when testing. But I was also aware that there are many other commercial DAWs on the market and that comparing to only one of them was somewhat limiting or biased.

AbletonLive

Later in the development phase, as more features were added and the user interface improved, another oracle heuristic started to sneak into my mind: simplicity.

Some years ago, during a Hack Week for Unity developers, the overall theme was simplicity. One way of interpreting this theme was to think of ways in which we could improve the workflows in the editor. It could be simple additions to the editor; for instance, one of the features that came out of that Hack Week was the Add Component button in the Game Object Inspector.

With this in mind, I started to think about ways the Audio Mixer and its sub-features could make life easier for audio designers. I also asked myself: how should we define simplicity in the context of the Audio Mixer?

  • Do testers and developers think of simplicity in the same way?

  • Is simplicity always about helping the user achieve his or her goal quickly?
  • Could simplicity mean reducing choices/parameters in the user interface? Is that always a good idea?
  • Can simplicity be visualized and/or measured? And how can we test and measure for simplicity in the workflow?

As a result, one parameter I decided to measure was how many steps a user had to go through to be able to hear a sound in a scene. I created the flowchart below, based on a user scenario where the user creates an Audio Mixer, routes an Audio Source through the Mixer and hears a sound when playing the scene. With my “Comparable Product”, Ableton Live, I go through two steps:

1. Drag a sample to a mixer channel

2. Hit Play – and I get instant feedback when hearing the sound.

As I started to draw the diagram, I realized that with the pre-alpha version of the Audio Mixer I had to go through six steps before I got audio feedback on my interaction in the editor. Is this desirable for an audio designer? With the diagram in hand, I could ask the developers for more information.
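For reference, the routing step itself (the part that has no direct DAW equivalent) boils down to pointing an Audio Source at one of the mixer's Audio Groups. A minimal sketch, assuming a mixer asset with a group created under the placeholder name "SFX":

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Minimal sketch of the routing step: point an Audio Source at one of
// the mixer's Audio Groups, then play a clip through it.
// "SFX" is a placeholder group name, created on the mixer asset in the editor.
public class RouteAndPlay : MonoBehaviour
{
    public AudioMixer mixer;    // assign the Audio Mixer asset in the Inspector
    public AudioClip clip;      // assign the clip to play

    void Start()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = clip;

        // FindMatchingGroups matches groups by sub-path, e.g. "Master/SFX" or "SFX".
        AudioMixerGroup[] groups = mixer.FindMatchingGroups("SFX");
        if (groups.Length > 0)
            source.outputAudioMixerGroup = groups[0];

        source.Play();
    }
}
```

In the editor workflow the same routing is done through the Audio Source's Output field in the Inspector, which is where the extra steps I was counting come from.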

[Flowchart: creating an Audio Group in the Audio Mixer]

Working with oracle heuristics and visual tools revealed further questions that helped me investigate the feature in depth. Some questions could not be answered, though. This was partly because, after a long period of testing a feature, you:

  • get used to how it works and stop being critical about the flaws in the system;
  • have read the preliminary manual documentation for the feature and have accepted how the system works;
  • have certain heuristics in mind when testing, which can make you biased.

To avoid my own biases and to investigate usability issues further, I teamed up with User Experience Designer Stine Munkesø Kjærbøll and ran three rounds of usability testing sessions with audio designers/composers from the local game industry in Copenhagen. Some of the exciting things we found that made a difference for our users were:

  • The “Edit in Playmode” button issue. This is a very useful feature for audio designers, as it enables them to tweak sounds at runtime. The only issue was that they had a hard time finding the button!
  • Mixers, Audio Groups and Snapshots workflows. The Audio Mixer user interface resembles the user interface in DAWs. Whilst testing it was important to us to form an opinion as to whether concepts such as Audio Groups and Snapshots and their associated workflows would be easy for users to understand.
  • Ducking Volume. This is an audio mixing concept used in computer games. For example, on a 3rd-person-shooter battlefield full of explosions and “action music”, if an NPC comes up to the player’s character and starts talking, the player needs to be able to hear what the NPC is saying. To allow this, all the surrounding sounds and music in the scene are compressed and decreased in volume. The sounds then slowly fade back in when the player’s character stops talking to the NPC. Not every audio designer works on 3D shooting games: would they understand the concept of ducking from the user interface of the Receive unit on an Audio Group?

In an upcoming blog post the User Experience Team will elaborate further and take you through Usability Testing on the Audio Mixer feature. Stay tuned!


  1. re: my last comment
    The mixing desk channels can easily work WITH a modular interface. You would hook up the modules with node interface, then assign certain wires volume to a channel for easy adjustment.
    In fact, you could even hook it as you do now and have the option to use a node editor to edit the same information.
    It’s so easy to set up complex groups and visualise them with nodes. With an old mixing desk analogy it’s really confusing (although you guys have done a better job with the visible hierarchy nesting showing dependent channels etc.)

    1. Hi Cameron and thanks for your comment! It is actually possible to route between multiple mixers in the Mixer window, where you can drag, for instance, “Mixer A” to the desired output Audio Group in “Mixer B”. The kind of system that you are referring to would require a node-based visual scripting tool, which is currently not possible in Unity.

  2. Neat. I notice now you have a visual wire to help see connected groups/fx. This is nice.
    I prefer a full wired node interface though. Especially for games, because remember, Ableton etc. are music-making programs. I think the requirements for games suit a modular interface better.
    Like these simplified ones (Max/MSP is over the top, I think):
    Buzz
    http://ukuphambana.com/wp3/wp-content/gallery/archive/59745_146987768676226_100000950103116_211780_3330204_n.jpg

    Or FLStudio’s ‘Patcher’
    http://cdn.mos.musicradar.com/images/Computer%20Music/issue%20164/fl-studio-patch-630-80.jpg

  3. Nice, thx for sharing :)

    I definitely need to play some more with the new mixer