On behalf of the Unity ML-Agents team, we want to wish everyone and their loved ones a happy holiday and new year! As we close out 2020, we wanted to take a moment to highlight a few of our favorite community projects in 2020, recap our progress since our v1.0 release (Release 1) in April 2020, and provide an overview of what’s in store for 2021.
A few of our favorite community projects
Thank you to our entire community for all the contributions and feedback that have shaped the growth and evolution of the Unity ML-Agents Toolkit. We continue to be amazed by the creativity our developers show in illustrating new kinds of behaviors and approaches using deep learning. As we close out the year, we wanted to showcase some of our favorite projects of 2020. If you would like to share your own projects, please post them in our forum. If you share your project on social media, remember to tag your posts with #mlagents.
A.I. learns to play a game with an Xbox controller
From the virtual world to the real world and back into the virtual world. Created by LittleFrenchKev.
AI Learns Parallel Parking — Deep Reinforcement Learning
If only we could create an Agent to parallel park our cars in real life. Created by SamuelArzt.
Unity ML-Agents robot simulation transferred to a real-life robot
Illustration of an ML-Agents trained model being transferred into a real-life robot. Created by jsalli.
Competitive Self-Play | Unity ML-Agents
Recap since ML-Agents Release 1
Release 1, which came out in April 2020, centered on API stability, ease of installation, and shipping a verified Unity package. Since then, we have prioritized shipping incremental improvements and bug fixes monthly to improve the stability of existing features. Notes and documentation for these improvements and bug fixes can be found in the release notes here.
In addition to these improvements and bug fixes, we have also shipped several new features to support training intelligent Agents in Unity projects.
- Observable Attributes — Enables developers to mark up Agent fields and properties that are turned into observations via reflection.
- IActuator interface and ActuatorComponent — Enables developers to compose behaviors onto Agents and abstracts an Agent's actions.
- Stacking for compressed observations — Allows stacking of visual observations and other multi-dimensional observations.
- Grid Sensor — Combines the generality of data extraction from raycasts with the computational efficiency of CNNs, allowing the collection of arbitrary data from any number of GameObjects while enabling much faster simulation and training.
- Random Network Distillation (RND) — An intrinsic reward signal, added to the PyTorch trainers, that promotes exploration by rewarding Agents for discovering new observations.
- Support for discrete and continuous actions — Individual Agents can now take both continuous and discrete actions, which better represents game development scenarios like gamepads.
- Unity Environment Registry — A database of pre-built Unity environments that can be used without installing the Unity Editor.
- PyTorch Trainers — All existing reinforcement learning and imitation learning training algorithms have been migrated from TensorFlow to PyTorch. Moving forward, all algorithm development will leverage PyTorch, which will help accelerate development.
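As an illustration of how RND plugs into the PyTorch trainers, the sketch below shows a trainer configuration that adds an `rnd` reward signal alongside the standard extrinsic reward. The behavior name `MyBehavior` and all hyperparameter values are placeholders for illustration, not recommendations:

```yaml
behaviors:
  MyBehavior:                # placeholder: must match your Agent's Behavior Name
    trainer_type: ppo
    reward_signals:
      extrinsic:             # the reward your environment already provides
        gamma: 0.99
        strength: 1.0
      rnd:                   # intrinsic bonus for novel observations
        gamma: 0.99
        strength: 0.01       # keep small relative to the extrinsic reward
        learning_rate: 0.0001
```

Training is then launched as usual with `mlagents-learn config.yaml --run-id=<your-run>`; because RND rewards prediction error against a fixed random network, the bonus naturally shrinks as observations become familiar.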
Preview for 2021
As part of our own internal research and development, we are constantly experimenting with new algorithms and approaches across different kinds of games, especially those we see requested via GitHub issues or in the forums. To this end, we focus on specific tasks an Agent might need to perform in real game situations. For example, we recently shipped a prototype Match-3 environment to illustrate how to represent the board as an observation so that an Agent can play through different levels. We strongly believe that illustrating ML-Agents implementations in specific types of game situations helps developers overcome the hurdles of getting the toolkit working in their own games, and accelerates the development of new algorithms and approaches that can benefit the entire community.
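For board games like Match-3, one simple way to expose the board as an observation (a generic sketch, not the prototype environment's actual sensor) is to one-hot encode each cell into the Agent's vector observation. `Rows`, `Cols`, `NumPieceTypes`, and `m_Board` are hypothetical fields for this example:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class Match3Agent : Agent
{
    const int Rows = 8, Cols = 8, NumPieceTypes = 6;  // hypothetical board dimensions
    int[,] m_Board = new int[Rows, Cols];             // each cell holds a piece-type index

    // One-hot encode every cell, yielding Rows * Cols * NumPieceTypes floats.
    public override void CollectObservations(VectorSensor sensor)
    {
        for (var r = 0; r < Rows; r++)
            for (var c = 0; c < Cols; c++)
                for (var k = 0; k < NumPieceTypes; k++)
                    sensor.AddObservation(m_Board[r, c] == k ? 1f : 0f);
    }
}
```

The vector observation size in the Behavior Parameters must match the total count (here 8 × 8 × 6 = 384). More compact, spatially-aware representations are also possible, for instance grid-style observations that a CNN can process.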
In the first half of 2021, we will be focusing on some specific algorithmic improvements:
- Cooperative multi-agent behavior — Enable easier and more effective cooperative behaviors between Agents so they can work toward a common goal.
- A single model capable of solving various tasks — Today, when an Agent needs to display different behaviors depending on the context, the typical approach is to train a different model for each context. We will work toward enabling a single model to perform different tasks based on a context input.
- The ability for an Agent to observe a varying number of entities — Enable Agents to observe a variable number of objects in the scene, for example, when the number of objects an Agent needs to collect changes over time.
Throughout 2021, we will be releasing these algorithmic improvements on GitHub and illustrating them through a cooperative shooter demo game. If you are interested in these algorithmic improvements, please post a request on GitHub.
Unity ML-Agents Cloud Training
Training Agents using the ML-Agents Toolkit requires experimentation to ensure that you’ve set up the environment and training configurations correctly. This experimentation can be time-consuming and computationally expensive to complete locally on a typical laptop or desktop machine. To help alleviate this, as we announced in our v1.0 blog post, we’ve been working on a cloud service for ML-Agents training that lets you kick off multiple training sessions in parallel on our cloud infrastructure, completing your experimentation in a significantly shorter period of time. More specifically, ML-Agents Cloud has three key benefits:
- The ability to spin up multiple ML-Agents experiments without installing Python or our Python packages.
- The ability to run multiple ML-Agents training sessions in parallel.
- The ability to request compute resources for each of your experiments that exceed your local hardware’s, enabling each training session to finish faster.
Today, we have an alpha program for a handful of select Unity ML-Agents users. The core functionality of the Unity ML-Agents cloud training alpha includes:
- Uploading your game builds with ML-Agents implemented (C#)
- Starting and managing training experiments
- Downloading results from multiple training experiments
In 2021, we plan to further accelerate the development of cloud training for Unity ML-Agents and ultimately make the service available to all users. If you would like to be considered for the alpha program, please sign up here.
Thank you and happy holidays!
If there are additional features or game genres that you are interested in, please post a feature request on GitHub.
If you use any of the features provided in this release, we’d love to hear from you. For any feedback, general issues, or questions regarding ML-Agents, please get in touch with us on the ML-Agents forums or email us directly. If you encounter any bugs, please reach out to us on the ML-Agents GitHub issues page.
If you’d like to work on this exciting intersection of machine learning and games, check out our current openings.