
A look inside: Core Foundation team and improving Unity’s testing practices

July 21, 2017

Pictured above: Me (right) fighting a developer (left) about our testing practices and standards.

As a Software Developer Engineer in Test (SDET) on the Core Foundation team, I would like to share with you our findings from a testing workshop we held recently. Core Foundation is composed of industry veterans with autonomy to focus on the highest-impact changes that will shape Unity’s codebase and our developer workflows. Among our goals are reducing iteration time, increasing quality, and reducing waste, but we discovered that being a living example of best practices for our peers is something we are really passionate about. That’s why we’ve invested in things like Bulletproof Monday (where we spend 20% of our time on personal projects each Monday); writing guidelines and being part of the code convention discussion; or organizing a 5-minute workout every other hour at the office!

Improving testing practices

At Unity, we work with amazingly skillful colleagues, but many come from an industry with testing practices that, in my opinion, are not best suited for running software as a service. Test automation benefits from solid guidelines as much as any other piece of code, so we decided it was time well spent for the team's SDETs to prepare a short workshop, sharing what we value in early (and mainly automated) testing and discussing where we can have the biggest impact. We strive to be the example everyone can rely on, so our testing practices have to lead the way forward.

Testing workshop

The workshop consisted of three parts, preceded by a 30-minute warm-up doing a Test-Driven Development (TDD) kata and a discussion about TDD. For the first part of the workshop, we introduced a set of bugs on an independent branch, triggered our current automation, and let our developers debug the failures and write tests that would aid future debugging. We worked in small teams so the developers could pair while the SDETs provided help and kept the exercise meaningful. The exercise focused on small bugs that might slip through our review process; for instance, here's one of the bugs we purposefully introduced for the workshop:
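
The original post showed the buggy snippet as an image, so here is a hypothetical reconstruction in the same spirit (the name and code are illustrative, not Unity's actual internals): a hand-expanded 3×3 minor helper of the sort a 4×4 determinant is built from, with a single flipped sign.

    // Hypothetical reconstruction of the kind of bug used in the workshop.
    // The expansion of a 3x3 determinant is a*(ei - fh) - b*(di - fg) + c*(dh - eg),
    // but the last term below subtracts where it should add.
    static class MatrixMath
    {
        public static float Determinant3x3(
            float a, float b, float c,
            float d, float e, float f,
            float g, float h, float i)
        {
            return a * (e * i - f * h)
                 - b * (d * i - f * g)
                 - c * (d * h - e * g);   // BUG: should be "+ c * (d * h - e * g)"
        }
    }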

I know all of you are math experts, so it will be really easy to find! Of course, it wasn't as easy for our team members, as the only information they had was the unit-test report, where some apparently unrelated tests were failing. As we were working with unit tests, one of our priorities is “good feedback is fast feedback”, so we were timing how long the team needed to find the problem. It took some time to pinpoint, but after discovering where the bug was, some questions arose: How can we add tests that will give us better information if this happens again? How much more testing is required? What can we do to avoid a regression on this change?

The answer to the first question was easy: just add one unit test checking the determinant of a given matrix against a known value. But the important question was the second one. Was that enough? Well, this was a synthetic exercise, as GetDeterminant hadn't seen any change for a really long time. But, knowing some details about how a determinant is computed, can we think of test cases that give us better coverage than a single handpicked matrix? Are there tests that will give better feedback? This is not a math class, but the calculation of a 4×4 determinant can be reduced to four different operations, one cofactor term per column of the first row; having them as different tests lets you find where the problem is just by reading the test names! A sketch of that idea follows.
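
Here is a minimal sketch of what those four tests could look like, written against Unity's public Matrix4x4.determinant property rather than the internal GetDeterminant. The class name and matrix values are my own choices for illustration, not the workshop's actual tests. Each matrix has a single 1 in its first row, so exactly one cofactor term contributes to the result, and a failing test name points straight at the broken term.

    using NUnit.Framework;
    using UnityEngine;

    public class DeterminantCofactorTests
    {
        // Matrix4x4's constructor takes columns, so build from rows explicitly.
        static Matrix4x4 FromRows(Vector4 r0, Vector4 r1, Vector4 r2, Vector4 r3)
        {
            var m = Matrix4x4.identity;
            m.SetRow(0, r0); m.SetRow(1, r1); m.SetRow(2, r2); m.SetRow(3, r3);
            return m;
        }

        // det(M) = m00*C00 + m01*C01 + m02*C02 + m03*C03 (first-row cofactor
        // expansion). Only one of the four terms is non-zero in each test below.

        [Test]
        public void Determinant_Cofactor00()
        {
            var m = FromRows(new Vector4(1, 0, 0, 0), new Vector4(0, 1, 0, 0),
                             new Vector4(0, 0, 1, 0), new Vector4(0, 0, 0, 2));
            Assert.AreEqual(2f, m.determinant, 1e-5f);
        }

        [Test]
        public void Determinant_Cofactor01()
        {
            var m = FromRows(new Vector4(0, 1, 0, 0), new Vector4(1, 0, 0, 0),
                             new Vector4(0, 0, 1, 0), new Vector4(0, 0, 0, 2));
            Assert.AreEqual(-2f, m.determinant, 1e-5f); // odd permutation: sign flips
        }

        [Test]
        public void Determinant_Cofactor02()
        {
            var m = FromRows(new Vector4(0, 0, 1, 0), new Vector4(1, 0, 0, 0),
                             new Vector4(0, 1, 0, 0), new Vector4(0, 0, 0, 2));
            Assert.AreEqual(2f, m.determinant, 1e-5f); // even permutation (3-cycle)
        }

        [Test]
        public void Determinant_Cofactor03()
        {
            var m = FromRows(new Vector4(0, 0, 0, 1), new Vector4(1, 0, 0, 0),
                             new Vector4(0, 1, 0, 0), new Vector4(0, 0, 2, 0));
            Assert.AreEqual(-2f, m.determinant, 1e-5f); // odd permutation (4-cycle)
        }
    }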

The second part of the workshop involved picking an area that could benefit from better coverage and then writing the tests together. We shared the pain of writing unit tests for legacy parts of the code and discussed how to approach the challenge. What we discovered is how much value you can add to the tests when you understand the feature, as opposed to basing your tests only on the implementation details. Testing against the implementation doesn't offer as much value as focusing on use cases, or functional scenarios, and such tests may need modification whenever parts of the implementation change, even if the functionality stays the same (the sketch below shows the contrast). That is one of the main reasons why it is really important to write tests while you are working on the code: you add all these layers of information, and the tests serve as documentation of how the functionality should be called and what its expected behaviors are. When you add the tests at a later point, all this information is usually lost, particularly if the person responsible for adding the tests is unfamiliar with that part of the code.
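
As an illustration of that contrast, here is a hypothetical SaveSlots class (invented for this post, not from our codebase) with one implementation-coupled test and one behavior-focused test; only the second survives an internal refactor.

    using System.Collections.Generic;
    using NUnit.Framework;

    // Hypothetical key/value save-slot store, used only for illustration.
    public class SaveSlots
    {
        internal readonly Dictionary<string, string> data = new Dictionary<string, string>();

        public void Store(string slot, string payload) { data[slot] = payload; }

        public string Load(string slot)
        {
            string payload;
            return data.TryGetValue(slot, out payload) ? payload : null;
        }
    }

    public class SaveSlotsTests
    {
        // Implementation-coupled: peeks at the internal dictionary. It breaks the
        // moment storage moves to disk or an array, even with behavior unchanged.
        [Test]
        public void Store_AddsEntryToInternalDictionary()
        {
            var slots = new SaveSlots();
            slots.Store("slot1", "level3");
            Assert.AreEqual(1, slots.data.Count);
        }

        // Behavior-focused: states the functional contract ("what you store is
        // what you load back") and survives any internal refactor.
        [Test]
        public void Load_ReturnsWhatWasStored()
        {
            var slots = new SaveSlots();
            slots.Store("slot1", "level3");
            Assert.AreEqual("level3", slots.Load("slot1"));
        }
    }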

To wrap things up, we held an open discussion covering all the questions we had gathered during the previous exercises, as well as general questions regarding testing practices in the organization. When are we doing enough testing? How should we deal with legacy code? How much does good test coverage help when debugging? The workshop allowed us to bring the importance of testing to the table, and by the end we were able to establish guidelines to follow during our code review process.

Conclusion

Yes, I know, I can clearly hear the question that you've been chewing on: what was the purpose of all this? What was the impact? Some of what we achieved during the workshop: the different groups inside the team had many “eureka” moments regarding testing; we gained an understanding of the pain points our Sustained Engineering peers suffer when debugging and patching legacy code; and we agreed on the important (and less important) details about testing that should be discussed in our reviews. But it's way more than that. The workshop also demonstrated the influence of having someone in an SDET role on a team used to other approaches to testing, and it affected some of the group dynamics. It raised awareness of testing practices in our team, and it serves as an example of good practices that we can show to the rest of the company. It is still too early to measure the full impact, but I feel it has already paid back the two days we invested in it. A fun example of the effect was the message Scott (our team lead) shared while working on his Hackweek 2017 project.

Now I am eager to know whether you have run a similar activity in your organization. What is your approach to testing? If you are working in the game industry, do you have a role similar to SDET? Are you using any test automation? Also, remember that Unity is always looking for talent, and I would love to have a new SDET peer! Please check out our careers page.

Feel free to contact me through LinkedIn.

6 Comments


  1. Mikael Högström

    August 1, 2017 at 9:32 am

    I’d love to do more testing of game code, but the lack of test coverage metrics has stopped me from adopting a TDD approach. Right now we instead try to break out whatever doesn’t have to inherit from a Unity object into separate DLLs and test those. We also use SonarQube to measure code smells and the like on those libs in our automated build process on TFS.

    1. Coverage is always a useful byproduct of testing, but I’ve worked on several products where we didn’t look at that number. Tests, particularly native or low-level ones, serve as an amazing safety net that will be of immense help in any refactor, which, in my experience, happens WAY too often.

      And the DLL approach is pretty smart. Having the contract of “OK, I’m using this library, and I can assume that it works as expected” allows us to build small interconnected pieces, hopefully bringing down complexity (unless you make a mess with the dependencies…).

      Thanks for sharing your experience!

  2. Test automation is important as the organisation and code base grow.

    However, I’m not seeing any important improvements regarding releases. You guys are releasing too fast to everyone, and this means you’re spreading a lot of human errors unnecessarily and relying on your QA teams to report them back. You should adopt a multi-ring release methodology: only release to the general public every 3 or 4 months, with a beta open to anyone, while working closely with QA and Asset Store publishers to ensure releases are solid.

  3. Automated testing is something I’m very interested in. I’ve blogged about it in the past: http://www.tallior.com/whats-new-automated-testing-unity/

    I’ve also tried discussing it on the forums: https://forum.unity3d.com/threads/the-mother-of-all-unit-automated-testing-threads.155129/

    I’ve also run a survey to see how other developers are testing (or not?) their Unity games: https://www.surveymonkey.com/r/Y9FYVHZ

    At the end of the day, we still have very few tests, and it seems others have few or none as well. It’s quite difficult to isolate game logic so it can be tested via unit tests, and very cumbersome to create integration tests using the “real” game code.

    Things may have changed a bit, as I’ve seen the “play mode” test runner being added, so I’ll have to revisit the subject again soon.

  4. How can I show and discuss the stuff I have built?

  5. I have recently started looking at TDD katas. My solutions to the above-mentioned kata and others are in my repo here: https://bitbucket.org/sirgru/tdd_katas/src/ . I don’t believe in doing the same katas every week, but I do believe they are a good exercise after coming back from a break from programming.