Unite 2013 Training Day
This year at Unite 2013, two hundred and fifty people arrived a day before the keynote to take part in the training day. Demand for the event was high, and tickets sold out well in advance.
We’ve had training days at Unite before, but this year we decided to turn things up a notch and create a hands-on workshop where all the attendees could start with an empty scene and put together a complete game in a single day.
This was a fairly ambitious undertaking — I’ve trained groups of people onsite at lots of companies and at lots of events, but this group was about ten times the size of the ones I’d worked with before. It was vital to ensure that nobody would get left behind, despite the size of the crowd. Expectations were high.
The content I prepared for the day was partly based on material that I’ve delivered to customers, combined with some brand-new material put together just for the training day. By far the most requested area of training from customers is our character animation system, so I chose a third-person action game as the base. I also added a few carefully selected game elements. These were things that many games have in common: collectible items, firing projectiles, hazards to jump or dodge. Each element was designed to give an opportunity to learn a different area of Unity’s specialised features, such as particle systems, editor gizmos, animation masking and path finding.
I also wanted to give the attendees the opportunity to feel as though they had created something of their own design, rather than building an exact replica of an existing project. To achieve this, I decided to provide a simple collection of tile-based environment assets designed to snap together (using Unity’s snap tools), which allowed the users to quickly design their own floor plan for a level, choosing where corridors and rooms should go in their game. I then added an editor script, which would automatically place walls and corner pieces around the floor plan they had created.
Editor scripting and customising the editor in general is one of my favourite features, and it’s something that a lot of larger customers really appreciate about Unity, since they often build very specialised tools to speed up development of their own games. I wanted to give our attendees a taster of this, so after a quick introduction to the concepts of editor scripting, and some examples of what it can do, we let them loose with the «Auto Walls» script, and soon there were towers and crenellations as far as the eye could see.
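The original «Auto Walls» script isn’t reproduced in this post, but the idea can be sketched roughly like this. All the names here — the `AutoWalls` class, the `Floor` tag, the prefab path — are illustrative assumptions, not the actual workshop code:

```csharp
using UnityEngine;
using UnityEditor;

// Illustrative sketch only — not the workshop's actual «Auto Walls» script.
// Assumes floor tiles are tagged "Floor" and snapped to a one-unit grid;
// any open edge of a tile gets a wall piece placed along it.
public class AutoWalls
{
    [MenuItem("Tools/Auto Walls")]
    static void PlaceWalls()
    {
        // Hypothetical asset path for the wall prefab.
        GameObject wallPrefab = (GameObject)AssetDatabase.LoadAssetAtPath(
            "Assets/Prefabs/Wall.prefab", typeof(GameObject));

        foreach (GameObject tile in GameObject.FindGameObjectsWithTag("Floor"))
        {
            // Check the four neighbouring grid cells; if nothing occupies
            // a cell, that edge is open and needs a wall.
            foreach (Vector3 dir in new[] { Vector3.forward, Vector3.back,
                                            Vector3.left, Vector3.right })
            {
                Vector3 neighbour = tile.transform.position + dir;
                if (!Physics.CheckSphere(neighbour, 0.1f))
                {
                    GameObject wall = (GameObject)PrefabUtility.InstantiatePrefab(wallPrefab);
                    wall.transform.position = tile.transform.position + dir * 0.5f;
                    wall.transform.rotation = Quaternion.LookRotation(dir);
                }
            }
        }
    }
}
```

Because it lives in an `Editor` folder and uses a `MenuItem` attribute, the tool shows up as a regular menu entry — which is exactly the kind of one-off, project-specific helper that makes editor scripting so useful.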
We then took a crash course in physics, including setting up a controllable capsule-based physics character from scratch — solving problems such as how to achieve the typical «collide and slide» behaviour that gamers expect from a character, before moving on to adding an animated character, and driving the movement with the animation root motion.
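A minimal version of that capsule-based setup looks something like the sketch below — assuming, as in the workshop, a Rigidbody plus CapsuleCollider. Freezing rotation keeps the capsule upright, and moving via velocity lets the physics engine resolve the «collide and slide» response against walls for us. The input handling here is a placeholder, not the workshop’s version:

```csharp
using UnityEngine;

// Minimal sketch of a capsule-based physics character.
[RequireComponent(typeof(Rigidbody))]
public class SimpleCharacterMotor : MonoBehaviour
{
    public float speed = 5f;
    Rigidbody body;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
        body.freezeRotation = true; // don't let collisions tip the capsule over
    }

    void FixedUpdate()
    {
        // Placeholder input — the workshop's movement code was more involved.
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                    Input.GetAxis("Vertical"));
        Vector3 velocity = input * speed;
        velocity.y = body.velocity.y; // preserve gravity and jumping
        body.velocity = velocity;     // physics solver slides us along walls
    }
}
```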
Our animation system is an extremely powerful piece of software, and while its visual editors make much of the work easy and intuitive, it’s still a complex system — so we had an in-depth look at exactly how the various pieces fit together, including creating animation loops, understanding humanoid avatar rigs, and setting up state machines in the animator controller window.
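Once a state machine is set up in the animator controller window, driving it from script is mostly a matter of setting parameters. A sketch, assuming a controller with a `Speed` float and a `Jump` trigger wired to transitions (the parameter names are illustrative):

```csharp
using UnityEngine;

// Sketch of driving an animator controller's state machine from code.
// Assumes "Speed" (float) and "Jump" (trigger) parameters exist on the
// controller and are wired to the relevant transitions.
[RequireComponent(typeof(Animator))]
public class AnimatorDriver : MonoBehaviour
{
    Animator animator;

    void Awake() { animator = GetComponent<Animator>(); }

    void Update()
    {
        float speed = Input.GetAxis("Vertical");
        animator.SetFloat("Speed", Mathf.Abs(speed)); // blends idle <-> run
        if (Input.GetButtonDown("Jump"))
            animator.SetTrigger("Jump"); // fires the jump transition
    }
}
```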
The spiky medieval hazards not only demonstrated collision detection and tagging, but also served as a way to show how the editor window can be enhanced with icons and lines drawn into the scene view by code — allowing programmers to provide their level designers with specialised visual tools for positioning and manipulating the game elements.
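Scene-view aids of that kind are typically just a few lines in `OnDrawGizmos`. A sketch of what a hazard’s helper might look like — the icon name and radius field are assumptions for illustration, not the workshop’s script:

```csharp
using UnityEngine;

// Sketch of a scene-view aid for a hazard: an icon at its position and a
// wire sphere showing its trigger radius. Only visible in the editor.
public class SpikeHazardGizmo : MonoBehaviour
{
    public float triggerRadius = 1.5f;

    void OnDrawGizmos()
    {
        // Icon file assumed to live in the project's Assets/Gizmos folder.
        Gizmos.DrawIcon(transform.position, "hazard.png");
        Gizmos.color = Color.red;
        Gizmos.DrawWireSphere(transform.position, triggerRadius);
    }
}
```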
We looked at creating projectiles so the character can throw tomatoes (historically accurate — there was a lot of tomato-throwing in medieval castles), which involved a bit of scripting — but also required learning about the use of Avatar Masks to mix the throwing animation with the character’s existing running/jumping motion.
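Both halves of that — spawning the projectile and mixing the throw animation via a masked layer — can be sketched together. This assumes an animator controller with a second layer (index 1) using an upper-body Avatar Mask and a `Throw` trigger; the class, prefab, and parameter names are illustrative:

```csharp
using UnityEngine;

// Sketch of tomato throwing: instantiate a rigidbody projectile, and play
// the throw animation on a masked upper-body layer so the legs keep running.
[RequireComponent(typeof(Animator))]
public class TomatoThrower : MonoBehaviour
{
    public Rigidbody tomatoPrefab;  // hypothetical prefab with a Rigidbody
    public Transform throwOrigin;   // e.g. the character's hand
    public float throwForce = 10f;

    Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
        // Layer 1 is assumed to use an Avatar Mask covering only the upper
        // body; full weight lets it override the arms but not the legs.
        animator.SetLayerWeight(1, 1f);
    }

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            animator.SetTrigger("Throw");
            Rigidbody tomato = (Rigidbody)Instantiate(
                tomatoPrefab, throwOrigin.position, throwOrigin.rotation);
            tomato.AddForce(throwOrigin.forward * throwForce, ForceMode.Impulse);
        }
    }
}
```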
And near the end of the day, we explored the NavMesh feature, which was recently made available in the free version of Unity. And because we’d introduced examples of componentised architecture earlier in the day, making animated AI enemies that could chase the player around the castle was a breeze — simply swapping the artwork of the character, and swapping out the User Input component with an AI Input component driven by the pathfinding feature.
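The componentised-input idea can be sketched as follows: the movement code reads a direction from whichever input component is attached, so replacing `UserInput` with `AIInput` turns the player character into an enemy. The class and member names are assumptions — the workshop’s scripts may have been structured differently:

```csharp
using UnityEngine;

// Sketch of swappable input components for a componentised character.
public abstract class CharacterInput : MonoBehaviour
{
    public abstract Vector3 MoveDirection { get; }
}

// Player-controlled version: reads the input axes.
public class UserInput : CharacterInput
{
    public override Vector3 MoveDirection
    {
        get
        {
            return new Vector3(Input.GetAxis("Horizontal"), 0f,
                               Input.GetAxis("Vertical"));
        }
    }
}

// AI version: a NavMeshAgent paths towards the player, and the movement
// code reads the agent's desired velocity instead of the input axes.
[RequireComponent(typeof(NavMeshAgent))]
public class AIInput : CharacterInput
{
    public Transform target; // the player
    NavMeshAgent agent;

    void Awake() { agent = GetComponent<NavMeshAgent>(); }

    void Update() { agent.SetDestination(target.position); }

    public override Vector3 MoveDirection
    {
        get { return agent.desiredVelocity; } // direction along the path
    }
}
```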
In order to make sure the day was at least twice as good as it would have otherwise been, we used two screens and two presenters. The fantastic Mr. Will Goldstone joined me, helping to present the day’s material.
The dual screens allowed us to freely switch between live demonstration and coding on one screen, and steps to follow or things to remember on the other. And with two of us presenting, we managed to reduce the symptoms of “geek coding on stage” to what I think were safe levels.
We also couldn’t have pulled it off without the help of our field engineers, who were standing by in the wings waiting to help out anyone who was having problems.
Another fail-safe we had in place to make sure nobody fell too far behind was to include “checkpoint” scenes in the files we gave to all the attendees. This way, if any one particular stage of the day proved too difficult to complete, the users could load up the current checkpoint scene and automatically catch up. The project we handed out also contained finished versions of all the scripts we would be writing, as well as the completed versions of the assets and prefabs we built. However, I was incredibly impressed with how well everyone did, and as it turned out very few people ended up using the catch-up scenes at all. The day went really smoothly, and we even had time for some Q&A at the end.
Thanks to everyone who attended, and everyone who helped make the day what it was – a great success!