Accelerating ML Research: Meet us at NeurIPS 2019
In a few short weeks, Unity will be heading to NeurIPS in Vancouver (December 8–14). We’re sponsoring the main conference and the Women in Machine Learning (WiML) Workshop, as well as co-organizing the NeurIPS 2019 Workshop on Learning Transferable Skills. Learning transferable skills enables intelligent systems to generalize easily to new domains and tasks. This blog post explains why we’re eager to foster research in this area and provides an overview of the workshop we’re co-organizing.
If you’re attending NeurIPS, consider joining our workshop on December 14. It will be packed with expert speakers presenting papers on generalization and learning transferable skills. If you’re interested in exploring opportunities at Unity, drop by our booth (#324) in the Expo (December 8–11). You can also find us at the WiML Workshop (East Exhibition Hall C) on December 9.
On the importance of transfer learning
After spending several decades on the margins of AI, reinforcement learning has recently emerged as a powerful framework for developing intelligent systems that can solve complex tasks in real-world environments – from playing games such as Dota and StarCraft to teaching a robot hand to manipulate a Rubik’s Cube. However, one attribute of intelligence that still eludes modern learning systems is generalizability. Until very recently, the majority of reinforcement learning research has involved training and testing algorithms in the same, often deterministic, environment. This has produced policies that typically perform poorly when deployed in environments that differ even slightly from those in which they were trained. Even more importantly, the paradigm of task-specific training results in learning systems that scale poorly to a large number of tasks, even when the tasks are interrelated.
For instance, consider our work on learning to play Snoopy Pop from visual inputs using the Unity ML-Agents Toolkit. A game-playing agent that’s been trained on a specific number of levels may not perform well on a new, previously unseen level. Its performance might also begin to suffer if game mechanics are modified. This is problematic since games have become live services with ever-evolving content (e.g., new or changing levels, challenges, and missions). A game-playing agent would continuously need to be retrained, which could be time-consuming or prohibitively expensive. To overcome this limitation, we are committed to developing learning systems that generalize and adapt easily to new tasks or changing game mechanics. With the Unity ML-Agents Toolkit, we took the first step toward addressing this challenge by providing the capability to train agents on distributions of tasks.
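The idea of training on a distribution of tasks can be sketched in a few lines. The following is a minimal, framework-agnostic illustration (not the ML-Agents Toolkit API): a toy environment whose dynamics depend on a parameter, resampled every episode so the learned behavior must cope with the whole range rather than a single fixed setting. The environment, parameter ranges, and the crude feedback rule standing in for a real policy update are all hypothetical.

```python
import random

class BounceEnv:
    """Toy episodic task whose dynamics depend on a sampled parameter.

    Stands in for a game level whose mechanics (here, `gravity`) vary
    between episodes.
    """
    def __init__(self, gravity):
        self.gravity = gravity
        self.height = 10.0

    def step(self, thrust):
        # Height rises with the action and falls with gravity each step.
        self.height += thrust - self.gravity
        return self.height

def sample_env():
    # Instead of one fixed environment, draw the dynamics from a
    # distribution, so the agent never overfits to a single setting.
    return BounceEnv(gravity=random.uniform(0.5, 2.0))

def train(episodes=100):
    # Placeholder "policy": a single thrust value nudged toward hovering.
    # A real system would update a neural policy here; this only shows
    # the train-on-a-distribution loop structure.
    thrust = 0.0
    for _ in range(episodes):
        env = sample_env()              # new task variant every episode
        final = env.step(thrust)
        thrust += 0.1 * (10.0 - final)  # crude feedback toward hovering
    return thrust

policy = train()
print(f"Learned thrust: {policy:.2f}")  # settles near the mean gravity
```

Because each episode draws a fresh variant, the single learned value ends up near the average of the sampled dynamics instead of memorizing one environment; the same principle, with distributions over level layouts or physics, underlies training agents that transfer.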
Fortunately, the machine learning research community has recently shown a reinvigorated interest in developing systems that can learn transferable skills. This could mean developing robustness to changing environment dynamics, the ability to quickly adapt to task variations, or a capacity to learn to perform multiple tasks at once (or any combination thereof). This interest has resulted in a number of new data sets and challenges, such as our own Obstacle Tower Environment and the Animal-AI Olympics (made with Unity and leveraging the Unity ML-Agents Toolkit). Both of these challenges demonstrate Unity’s strength as a powerful simulation platform for AI research. The NeurIPS 2019 Workshop on Learning Transferable Skills was organized to provide a forum to further accelerate research in this domain.
NeurIPS 2019 workshop overview
Interestingly, the first-ever workshop on the topic of transfer learning also took place at NeurIPS (then called NIPS) in 1995. Back then, transfer learning was called Learning to Learn, an acknowledgment that a system’s ability to generalize to new tasks is a core tenet of learning. Since then, this research topic has been studied under many different names, such as lifelong learning, knowledge transfer, multi-task learning, knowledge consolidation, meta-learning, and incremental/cumulative learning.
Twenty-four years after that first workshop, we’re excited to co-organize the Workshop on Learning Transferable Skills with Matthew Crosby and Benjamin Beyret (from Imperial College London and the organizers of the Animal-AI Olympics). The workshop will include a full day of presentations by invited speakers and authors of peer-reviewed papers. Our invited speakers include David Ha (Google Brain), Raia Hadsell (DeepMind), Vladlen Koltun (Intel), Katja Hofmann (Microsoft Research), Wojciech Zaremba (OpenAI), Karl Cobbe (OpenAI), and Gianni De Fabritiis (Universitat Pompeu Fabra), whose presentations will cover aspects of transfer learning for computer vision, robotics, and games.
If you are attending NeurIPS, join us to learn more about how to develop learning systems able to generalize to new tasks and domains.