Testing Unity part 2

Time for the second installment of what testing Unity is like (first one here). As mentioned in the previous post, we have STEs (Software Test Engineers) working closely with development teams on building high-quality features, and SDETs (Software Development Engineers in Test) working with STEs and development teams, promoting testability and building tools & frameworks for testing Unity. In this post I want to go into more detail about the specific work we do and the tools we use.

Manual testing

The primary tool for any structured manual testing effort is the test case management system. Often the tool of choice is a spreadsheet, but once you get to more than a few people, you need a real tool for the job. After trying out a lot of different products, we have chosen a system called QMetry. As I mentioned in the previous blog post, manual testing has a lot to do with figuring out how to get coverage on an area and reporting status to the development team. It’s all about visibility and feedback, and using a tool like QMetry helps us track both. From one place, we can organize the testing needed, prepare a new cycle of testing, push defects into our bugtracker, and produce reports so everyone can see how each part of Unity is being tested and what types of bugs come in.

Besides the very structured testing, we also spend a good amount of time doing exploratory testing to find bugs in new features. It’s an art form in its own right to attack an application and find bugs. Not everyone possesses the ability to do this effectively, but it is of incredible value. The most extreme form of exploratory testing is a bug bash, where everyone in the development department pairs up and tries to find as many bugs as possible. The team finding the best bug and the team finding the most bugs are rewarded with a very edible and/or drinkable award.

It’s also worth mentioning that we have full integration between our bugtracker (FogBugz) and our source control system (Mercurial), meaning that we have traceability from bug report to fix in the source code. This means we always know when and where fixes can be verified in different versions of Unity. We also build continuously and test on daily builds.
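The core of that bugtracker-to-changeset traceability is simply linking commits to case numbers. As a minimal sketch (all names hypothetical; FogBugz's actual Mercurial integration works differently under the hood), a hook could scan each commit message for case references:

```python
import re

# Hypothetical sketch: commit messages mention a bug-tracker case number
# (e.g. "case 12345" or "bug #42"), which a source-control hook extracts
# and attaches to the corresponding bug report.
CASE_PATTERN = re.compile(r"\b(?:case|bug)\s*#?(\d+)\b", re.IGNORECASE)

def extract_case_ids(commit_message):
    """Return all bug-tracker case ids referenced in a commit message."""
    return [int(m) for m in CASE_PATTERN.findall(commit_message)]

print(extract_case_ids("Fix crash in importer (case 4711), also touches bug #42"))
```

With links like these recorded per changeset, it becomes a lookup to answer "which build first contains the fix for case 4711?".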

Automated testing

Much of the testing we do today is automated. For regression testing we have different frameworks which help us write different types of test cases. For small, isolated tests of the Unity Runtime we have the Runtime Test Framework. With this framework we can produce very small, very fast test cases which target a very specific part of Unity. It is capable of running the test cases on all of our targets, so one test case can verify behavior on all supported platforms. This is of extreme value, since it gives very good coverage with little effort, and it is only possible because all platforms are required to implement an interface which lets us communicate with them in the same way. Furthermore, the framework works like a sandbox, so it is very easy to use and hard to do stupid things in.
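The "one interface on every platform" idea can be sketched like this (all names are hypothetical, and the real runtime interface is native code, not Python): each target implements the same small protocol, so a single test case runs everywhere.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: every platform implements the same small interface,
# so one test case can be deployed and run on all targets the same way.
class PlatformConnection(ABC):
    @abstractmethod
    def deploy(self, test_case: str) -> None: ...

    @abstractmethod
    def run(self) -> str: ...  # returns the test's result line

class DesktopConnection(PlatformConnection):
    def deploy(self, test_case):
        self.test_case = test_case

    def run(self):
        return f"{self.test_case}: PASS"

def run_everywhere(test_case, platforms):
    """Run one test case on every connected platform and collect results."""
    results = {}
    for name, conn in platforms.items():
        conn.deploy(test_case)
        results[name] = conn.run()
    return results

print(run_everywhere("QuaternionMath", {"desktop": DesktopConnection()}))
```

The payoff described above falls out of this shape: writing one test case gives coverage on every platform that implements the connection interface.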

For larger tests we have the Integration Test Framework. With this we can give a test case a much larger scope; such tests usually take longer to execute and have broader coverage. One example could be: build an assetbundle, set up a webserver, deploy the assetbundle there, start Unity, build a player, download the assetbundle, etc. The Integration Test Framework does not come with multi-target execution out of the box, so some extra work is needed to run a test case on different platforms. Also, the framework is not sandboxed like the Runtime Test Framework, so it is somewhat easier to do things such as making processes hang or forgetting to release shared resources. This is the price for greater freedom and flexibility.
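A toy version of the webserver/assetbundle scenario above can be sketched in a few lines: serve a fake "assetbundle" from a local web server, download it, and verify the bytes round-trip. (This is only an illustration of the integration-test shape; the real framework builds actual asset bundles and players.)

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# Hypothetical sketch of an integration-style test: deploy a fake
# "assetbundle" to a local web server, download it, verify the contents.
bundle = b"fake-assetbundle-contents"

serve_dir = tempfile.mkdtemp()
with open(os.path.join(serve_dir, "bundle.unity3d"), "wb") as f:
    f.write(bundle)

def handler(*args, **kwargs):
    return http.server.SimpleHTTPRequestHandler(
        *args, directory=serve_dir, **kwargs)

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
downloaded = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/bundle.unity3d").read()
server.shutdown()

assert downloaded == bundle
print("integration test passed")
```

Note how much machinery even this toy version needs (temp directories, a server thread, shutdown) compared to a sandboxed runtime test; that matches the freedom-versus-safety trade-off described above.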

On the highest level we have our regression rig. Very simply put, this thing can run a set of pre-recorded webplayer games on any given changeset and then compare the output (screenshots, audio, logs) to any other changeset, effectively giving us a near-realtime picture of how much the latest codebase broke existing games. On top of this, the rig has a bisect feature that can automatically pinpoint the exact changeset that caused a regression. Obviously there is a whole lot more detail to how this rig works, but maybe that will come in a later post.
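The bisect feature is, at heart, a binary search over history: given an ordered range of changesets that starts good and ends bad, repeatedly test the midpoint until the first broken changeset is found. A minimal sketch (function names are hypothetical, not the rig's actual API):

```python
# Hypothetical sketch of the rig's bisect: binary-search an ordered list of
# changesets for the first one whose recorded output regressed.
def bisect_regression(changesets, is_broken):
    """Return the first changeset for which is_broken(cs) is True.

    Assumes history is monotone: good, good, ..., bad, bad -- i.e. the
    regression, once introduced, stays broken.
    """
    lo, hi = 0, len(changesets) - 1  # invariant: changesets[hi] is broken
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(changesets[mid]):
            hi = mid          # regression is at mid or earlier
        else:
            lo = mid + 1      # regression is strictly after mid
    return changesets[lo]

# Example: changesets 0..9, regression introduced at changeset 6.
culprit = bisect_regression(list(range(10)), lambda cs: cs >= 6)
print(culprit)  # → 6
```

Because each step halves the range, pinpointing a regression across thousands of changesets costs only a handful of test runs, which is what makes automatic tracking practical.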

The majority of test cases we have today have been written by the developers of each feature, which is a very good sign. As we staff up in QA, more test code will be written by developers in test; they will also make sure the entire suites work well together and that tests are written in the most suitable framework.

Tools

Tools are extremely important for an efficient development department, and we have a small team of three developers in QA working primarily on tools and frameworks.

We have what we call the Callstack Analyzer. This tool extracts the callstacks from the crash reports that the community files. Every time Unity crashes, you get a dialog asking you to fill in a bit of information about what you were doing and then send us the project and the callstack. This callstack is then analyzed and broken into blocks which we have previously identified as belonging to Unity, Mono, Windows, Mac OS, etc. We then match each block against all the other callstacks we have, identify those where specific parts of the stack are duplicated, and start investigating them. This is where the additional information you give us comes into play, since the callstack itself rarely gives us a direction to look in; a callstack with 20 duplicates and 1 reproducible bug report is a great catch. So please press that button on a crash, even if you don’t want to write anything; even the callstack itself can be of value to us.
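The block-matching idea can be sketched as: split each stack into runs of frames from the same module, fingerprint each run, and group crash reports that share a fingerprint. (A simplified illustration with hypothetical names, not the analyzer's actual implementation.)

```python
import hashlib
from collections import defaultdict

# Hypothetical sketch of callstack deduplication: split each stack into
# blocks of consecutive frames from the same module (Unity, Mono, OS, ...),
# fingerprint each block, and group crash reports sharing a block.
def split_into_blocks(frames):
    """Group consecutive (module, symbol) frames by module."""
    blocks, current, current_module = [], [], None
    for module, symbol in frames:
        if module != current_module and current:
            blocks.append((current_module, tuple(current)))
            current = []
        current_module = module
        current.append(symbol)
    if current:
        blocks.append((current_module, tuple(current)))
    return blocks

def fingerprint(block):
    module, symbols = block
    joined = "|".join((module,) + symbols)
    return hashlib.sha1(joined.encode()).hexdigest()[:12]

def group_reports(reports):
    """Map block fingerprint -> report ids, keeping only duplicated blocks."""
    groups = defaultdict(list)
    for report_id, frames in reports.items():
        for block in split_into_blocks(frames):
            groups[fingerprint(block)].append(report_id)
    return {fp: ids for fp, ids in groups.items() if len(ids) > 1}

reports = {
    "crash-1": [("Unity", "Mesh::Bake"), ("Unity", "Renderer::Draw"),
                ("OS", "memcpy")],
    "crash-2": [("Mono", "gc_collect"), ("Unity", "Mesh::Bake"),
                ("Unity", "Renderer::Draw")],
}
dupes = group_reports(reports)
print(sorted(next(iter(dupes.values()))))  # both crashes share the Unity block
```

Matching on per-module blocks rather than whole stacks is what lets two crashes with different OS or Mono frames still be recognized as the same underlying Unity bug.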

We have some additional tools, e.g. for processing bugs, which reside on our servers. Tools for tracking our test projects, performing code coverage analysis, reporting on cyclomatic complexity, etc., are part of the toolset we bring in regularly to get a complete picture of how Unity is doing in the current development cycle.

More to come…

I promised you the bug reporter in post 2, but I’ll leave it for the next installment of Testing Unity. In the meantime, go check out our job openings at

16 replies on “Testing Unity part 2”

A lot of developers nowadays are heading toward what Unity has promised, and I myself am one of those, but there are some serious bugs which really get in the way and limit developers. In my case it’s the FBX importer, which crashes on huge animation imports; my case is still open in the bug reporter.

It is also important to note that Unity should start testing these things for the next release of Unity 3D:

— Destructible Geometry.
— Interactive Fluid Surfaces.
— Material Instancing.
— Visual Script Editor.
— Matinee style cinematic editor.
— Normal Mapping for terrain.
— Multi-texturing Shader.
— Realtime Reflections (so without a render texture).
— Voxel Clouds.
— Underwater god rays and caustics.
— Realtime Global illumination.
— More realistic glass shader.
— etc,etc,…

Oh sorry, I meant version 3.5.3. I wonder if a date has already been set; I’ve had performance issues with the 3.5.x versions in an Android project.

@Richard: It does not go any further into the tool chain. We have devs working on both Windows and Mac platforms, so it would have to support multiple tools, but it’s a good idea. The tool has already caught many very hard-to-reproduce bugs for us in our serialization engine, Shuriken, and Mono. Keep sending us crash reports… :-)

The callstack analyzer is cool; I was just thinking the other day that it’d be something you guys should have. How far through are the results pushed? Can your developers actually be editing a source file in VS and see lines highlighted in red because crashes happened there, sorta thing?

Very cool stuff. Wonder how aras, joachim and hegalsen were coping in the early days. We’d like to see more posts like this from the testing department. And we know David and the crew are busy with other stuff, but you guys should at least blog about what you’ve been up to (like the good old days). Oh and thanks for the free basic license! You guys made my year.

Cool post. I am more of a designer, but I love reading about how you guys try to make your code base as rock solid as possible. It is amazing how many tools are out there to make testing more approachable.

The only part I was a little confused with in the post was the acronyms “STE” and “SDET”. I guess I could look at the older post where it is probably explained. Adding a link to “previous post” to link to the first part would be nice. I couldn’t find a link on this page though.

Keep up the good work guys(and gals)!
