
Today, we’re making a major upgrade to the Unity ML-Agents Toolkit to leverage the Unity Inference Engine, a new library that enables cross-platform inference. This upgrade will enable developers to run the neural network models produced by the Unity ML-Agents Toolkit on all the platforms that Unity supports. In this blog post, we’ll introduce the Unity Inference Engine and describe the changes to the ML-Agents Toolkit workflow. We’ll also demonstrate how Jelly Bowl, by What Up Games, is using the Unity Inference Engine to ship a character behavior, trained using the ML-Agents Toolkit, to the Xbox platform.

The Unity ML-Agents Toolkit has been used by game developers to train the behavior of both playable and non-playable characters. Training character behaviors by specifying high-level reward signals and/or demonstrations in the Unity Editor lets developers craft those behaviors in a more robust and time-efficient manner than scripting them by hand. The character behavior produced by the Unity ML-Agents Toolkit is encoded in a neural network model file generated by TensorFlow. In previous versions of the ML-Agents Toolkit, we relied on a third-party plugin, TensorFlowSharp, to enable developers to run the model. Unfortunately, the plugin limited us to only five platforms: Windows, Linux, Mac, iOS, and Android, and we were unable to test on the 20+ platforms that Unity supports.

Today, this changes. We are delighted to announce that the ML-Agents Toolkit now uses the Unity Inference Engine (codenamed Barracuda) to deploy neural networks trained by the ML-Agents Toolkit. The Unity Inference Engine represents a giant leap towards fast, efficient and reliable cross-platform neural network inference.

The Unity Inference Engine

One of our core objectives is to enable truly performant, cross-platform inference within Unity. To do so, three properties must be satisfied. First, inference must be enabled on the 20+ platforms that Unity supports, including web, console, and mobile platforms. Second, we must enable GPU support across a wide array of manufacturers, which is critical for executing large neural networks. Third, we must provide the best possible integration with the Unity engine and Editor. For example, you should be able to render an image in the game and pass it directly to the inference engine, without any additional memory copies or GPU stalls. While there are a number of popular inference libraries such as TensorFlow Lite, WinML, and CoreML, none of them on its own provides the level of support needed. As such, we invested in developing our own inference solution, dubbed the Unity Inference Engine (codename Barracuda).

The Unity Inference Engine is the product of the Unity Labs research team. It is based on cross-platform Unity technologies like IL2CPP, Burst, and Unity Compute Shaders, which allow us to provide great performance across all Unity-supported platforms while keeping its size very small (currently 600 KB). The Unity Inference Engine can run neural networks on CPUs or on any GPU that has Compute Shader capabilities. You are free to experiment with either CPU or GPU to fit your workload and latency requirements. For models trained with the ML-Agents Toolkit that do not rely on visual observations and directly affect gameplay, the CPU is the optimal option. But feel free to experiment and share your experience with us!
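As a rough sketch of what running inference with such an engine looks like in a MonoBehaviour, the snippet below loads a trained model and executes a single forward pass on the CPU backend. Note that the type and method names used here (NNModel, ModelLoader, WorkerFactory, Tensor) follow later public Barracuda releases and are assumptions for illustration; the exact API bundled with any given ML-Agents release may differ:

```csharp
using Barracuda;     // namespace of the bundled inference engine (may vary by release)
using UnityEngine;

public class InferenceSketch : MonoBehaviour
{
    public NNModel modelAsset;   // the trained .nn file, assigned in the Inspector

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);

        // CSharp-type workers run on the CPU (best for small, non-visual models);
        // Compute-type workers use GPU compute shaders (best for large models).
        var worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharp, model);

        // A batch of one vector observation with, say, 8 values.
        using (var input = new Tensor(1, 8))
        {
            worker.Execute(input);
            var output = worker.PeekOutput();
            Debug.Log(output[0]);   // first value of the network's output
            output.Dispose();
        }
        worker.Dispose();
    }
}
```

In practice, the ML-Agents Toolkit drives the inference engine for you through the Learning Brain, so you rarely need to call it directly.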

Today, the Unity Inference Engine is bundled with the ML-Agents Toolkit. In the future, as the project evolves, it will become its own stand-alone Unity Package, which will enable it to power other deep learning applications. Leveraging the Unity Inference Engine in the ML-Agents Toolkit brings about a number of improvements to our users. This includes an improved installation experience, an enhanced Editor workflow and smaller build sizes that can be deployed on all the platforms that Unity supports. In the next section, we will dive into these improvements in more detail.

Improved workflow and cross-platform support

In prior versions of the ML-Agents Toolkit, users needed to download a separate Unity Package containing the TensorFlowSharp libraries to run neural network models inside Unity. In ML-Agents Toolkit v0.7, the Unity Inference Engine is included by default. There is no additional library to download or integrate, and the ML-Agents Toolkit can now run your models out of the box on the platforms that Unity supports. It also allows IL2CPP compatibility on the Android platform, which brings you one step closer to supporting Google’s upcoming 64-bit requirement. We have tested the Unity Inference Engine on PC, Mac & Linux Standalone as well as on Android and iOS; more information is available here. And, of course, it works in the Unity Editor: trying out one of our demo scenes is now as easy as pressing the Play button!

In this new release of the ML-Agents Toolkit, the training process produces a new .nn file format, instead of the original .bytes files. This new format is compatible with the Unity Inference Engine and will make it easy to filter your assets in the hierarchy. We also have a new, easy-to-identify icon for the .nn model files.

In the Learning Brain Inspector window, you can specify whether to use the CPU or GPU for running inference. Note that for small models, the CPU option is faster because the data remains on the CPU, while the GPU option is useful for large models such as the ones that use visual observations.

Another nice improvement is the size of the Unity Inference Engine. It is very small compared to the binary size of TensorFlowSharp, which makes it a lot easier to deploy on mobile devices. For instance, building the 3D Balance Ball environment on iOS yields a build size of 135 MB with TensorFlowSharp but only 83.5 MB with the Unity Inference Engine.

The Jelly Bowl demonstration

At Unite Los Angeles last October, we demonstrated the first game, Jelly Bowl, to integrate a previous version of the Unity Inference Engine. Jelly Bowl is an Xbox game developed by What Up Games in which each player battles up to five other players. The goal for each player is to collect as many energy crystals as possible and bring them back to their base before time runs out. The player with the most energy crystals in their base at the end of the round wins. If a player is hit by another player, they drop all of their energy crystals, opening them up for others to steal.

Jelly Bowl used the ML-Agents Toolkit to train the behavior of playable characters that can be used instead of real players. This enables single-player modes for the game where a human player competes against trained agents. The Unity Inference Engine was then used to run the behavior of these trained agents on the Xbox platform. For What Up Games, using a trained agent was not only easier to implement but also produced more realistic behavior that better adapted to the environment. Additionally, leveraging the Unity Inference Engine was the only supported path to running the neural network model on the Xbox platform.

Next steps

This release of the Unity ML-Agents Toolkit takes a leap in providing cross-platform support for embedding trained behaviors into your game. If you use any of the features provided in this release, we’d love to hear from you. For any feedback regarding the Unity ML-Agents Toolkit, feel free to email us directly. If you encounter any issues or have questions, please reach out to us on our ML-Agents GitHub issues page.

10 Comments



  1. If only these recent moves were a sign that Python was becoming a first class supported language in Unity. That would be AWESOME!

  2. Thanks Unity ML Team!
    I believe that in the near future reinforcement learning, and ML/neural nets in general, will play a big role in games. And later in real-life tasks as well.

  3. Hi, anyone having trouble dragging the .nn file to the model? My .nn files seem to be ignored by the editor, although they do get the nice new icon.

    1. Hi Michiel – can you please submit an Issue to the ML-Agents GitHub Repo? This is the best way to get in touch with the ML-Agents team and the broader ML-Agents community for feedback.

    2. Hey, Michiel Smuts, try replacing “ENABLE_TENSORFLOW” with “ENABLE_BARRACUDA” in the scripting define symbols; it should allow you to drag the .nn files then. I had a similar issue before.

  4. Thank you to the team for that great upgrade.
    ML-Agents is opening so many possibilities for independent developers!

  5. Hi, will the inference engine be available for developers to use? Are there any plans to allow it to be used as something like a Prolog inference engine with backward chaining? Or maybe Lisp?

    1. ReJ aka Renaldas Zioma

      March 2, 2019 1:57 pm

      Yes, the inference engine will be available to developers.
      And no, no backward chaining is planned. Unlike inference in logic languages, we aim only at inference strictly in neural network terms: forward pass only.


  6. Congratulations on the v0.7 release! The new inference engine is a very welcome addition!

    Would like to mention that the official Unity Discord group also has a #machine-learning channel, in case anyone wants to join and discuss ML-Agents :)