Robotics simulation in Unity is as easy as 1, 2, 3


Robot development workflows rely on simulation for testing and training, and we want to show you how roboticists can use Unity for robotics simulation. In this first blog post of a new series, we describe a common robotics development workflow. Plus, we introduce a new set of tools that make robotics simulation in Unity faster, more effective, and easier than ever.

How to leverage Unity for robotics

Because it is costly and time-consuming to develop and test applications using a real robot, simulation is becoming an increasingly important part of robotic application development. Validating the application in simulation before deploying to the robot can shorten iteration time by revealing potential issues early. Simulation also makes it easier to test edge cases or scenarios that may be too dangerous to test in the real world.

Key elements of effective robotics simulation include the robot’s physical attributes, the scene or environment where the robot operates, and the software that runs on the robot in the real world. Ensuring that these three elements in the simulation are as close as possible to the real world is vital for valid testing and training. 

One of the most common frameworks for robot software development is the Robot Operating System (ROS). It provides standard formats for robot descriptions, messages, and data types used by thousands of roboticists worldwide, for use cases as varied as industrial assembly, autonomous vehicles, and even entertainment. A vibrant user community contributes many open source packages for common functionalities that can bootstrap the development of new systems.

Roboticists often architect a robot application as a modular set of ROS nodes that can be deployed both to real robots and to computers that interface with simulators. In a simulation, developers build a virtual world that mirrors the real robot’s target use case. By testing in this simulated ecosystem, users can iterate on designs quickly before testing in the real world and ultimately deploying to production.

A common robotics development workflow, where testing in simulation happens before real-world testing

 

This blog post uses the example of a simple pick-and-place manipulation task to illustrate how users can leverage Unity for this simulation workflow.

1: Defining the robot’s task

Following the above workflow, let’s say that our robot’s task is to pick up an object and place it in a given location. The six-axis Niryo One educational robot serves as the robot arm. The environment is minimal: an empty room, a table on which the robot sits, and a cube (i.e., the target object). To accomplish the motion-planning portion of the task, we use a popular set of motion-planning ROS packages collectively called MoveIt. When we are ready to start the task, we send a planning request from the simulator to MoveIt. The request contains the poses of all the robot’s joints, the cube’s pose, and the target position of the cube. MoveIt then computes a motion plan and sends this plan back to the simulator for execution.

 

Now that we’ve set up the problem, let’s walk through how to use Unity in this simulation workflow.

2: Bringing your robot into simulation

A robotics simulation consists of setting up a virtual environment — a basic room, as in this example, or something more complex, like a factory floor with conveyor belts, bins, tools, and parts — and adding to this environment a virtual representation of the robot to be trained or tested. The Unity Editor can be used to create endless permutations of virtual environments. But how can we bring our robots into these environments?

 

When modeling a robot in simulation, we need to represent its visual meshes, collision meshes, and physical properties. The visual meshes are required to render the robot realistically. Collision meshes are required to compute collisions between the robot’s “links” (the rigid members connected by its joints) and other objects in the environment, as well as self-collisions between links. These meshes are typically less complex than visual meshes to allow faster collision-checking, which can be compute-intensive. Finally, the physical properties, like inertia, contact coefficients, and joint dynamics, are required for accurate physics simulation — that is, for computing how forces on the links result in changes to the robot state, e.g., pose, velocity, or acceleration.

 

Lucky for us, when using the ROS development workflow, there is a standardized way of describing all these properties: the Unified Robot Description Format (URDF). URDF files are XML files that allow us to specify these visual, collision, and physical properties in a human-readable markup language. URDF files can also reference mesh files for specifying complex geometries. The example below shows an excerpt from the URDF file for the Niryo One robot.

URDF of Niryo One robot
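For readers unfamiliar with the format, the snippet below sketches the typical structure of such a file: one link with its visual mesh, simplified collision mesh, and inertial properties, plus one revolute joint connecting two links. The mesh paths and numeric values are illustrative placeholders, not the actual Niryo One values.

<robot name="niryo_one">
  <!-- A single rigid link: visual mesh, simplified collision mesh, inertial properties -->
  <link name="shoulder_link">
    <visual>
      <geometry>
        <mesh filename="package://niryo_one_description/meshes/shoulder_link.stl"/>
      </geometry>
    </visual>
    <collision>
      <geometry>
        <mesh filename="package://niryo_one_description/meshes/collision/shoulder_link.stl"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="0.7"/>
      <inertia ixx="0.003" ixy="0.0" ixz="0.0" iyy="0.003" iyz="0.0" izz="0.002"/>
    </inertial>
  </link>

  <!-- A revolute joint connecting two links, with axis, limits, and dynamics -->
  <joint name="joint_1" type="revolute">
    <parent link="base_link"/>
    <child link="shoulder_link"/>
    <origin xyz="0 0 0.103" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <limit lower="-3.05" upper="3.05" effort="1.0" velocity="1.0"/>
    <dynamics damping="0.1" friction="0.0"/>
  </joint>
</robot>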

 

To make it easier for roboticists to import their robots into Unity, we’re releasing URDF Importer, an open-source Unity package for importing a robot into a Unity scene using its URDF file. This package takes advantage of our new support for “articulations” in Unity, made possible by improvements in PhysX 4.1. This update allows us to accurately model the physical characteristics of a robot to achieve more realistic kinematic simulations.

 

When installed in the Unity Editor, this package allows the user to select a URDF file to import. It parses the XML file behind the scenes and stores the links and joints in the appropriate C# classes. It then creates a hierarchy of GameObjects, where each GameObject carries an ArticulationBody component representing a particular link in the robot. It assigns properties from the URDF to the corresponding fields in ArticulationBody. When users add a robot to Unity, the URDF Importer automatically creates a rudimentary keyboard joint controller. Users can replace this controller with a custom controller using the ArticulationBody APIs.
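As a rough sketch of what such a custom controller could look like, the script below drives one joint toward a target angle through its ArticulationBody’s X drive. The gains and target value are placeholders, and this is a minimal illustration rather than the controller the URDF Importer actually generates.

using UnityEngine;

// Minimal sketch of a custom joint controller: each physics step, nudge one
// ArticulationBody joint toward a target angle via its X drive.
public class SimpleJointController : MonoBehaviour
{
    public ArticulationBody joint;     // the link's ArticulationBody, assigned in the Inspector
    public float targetDegrees = 30f;  // desired joint angle (placeholder value)
    public float stiffness = 10000f;   // drive spring gain (placeholder value)
    public float damping = 100f;       // drive damper gain (placeholder value)

    void FixedUpdate()
    {
        // ArticulationDrive is a struct: copy it, modify it, and assign it back.
        ArticulationDrive drive = joint.xDrive;
        drive.stiffness = stiffness;
        drive.damping = damping;
        drive.target = targetDegrees;  // revolute joint targets are in degrees
        joint.xDrive = drive;
    }
}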

 

For example, here is the Niryo One Unity asset, created after importing the URDF file above.

A virtual Niryo One robot in Unity, imported via URDF Importer

3: Connecting your simulation to ROS

Now that the robot is in the Unity Editor, the next step is to test our motion-planning algorithm, which runs in a set of ROS nodes. To support this, we need to set up a communication interface between Unity and ROS. Unity needs to pass messages to ROS that contain state information — namely, the poses of the robot, target object, and target location — along with a planning request to the mover service. In turn, ROS needs to return a trajectory message to Unity corresponding to the motion plan (i.e., the sequence of joint positions required to complete the pick-and-place task).

 

Two new ROS–Unity Integration packages now make it easy to connect Unity and ROS. These packages allow ROS messages to be passed between ROS nodes and Unity with low latency; when tested on a single machine, a simple text-based message made the trip from Unity to a ROS subscriber in milliseconds and a 1036 x 1698 image in a few hundred milliseconds. 

 

Since communication in ROS uses a pub/sub model, the first requirement for ROS–Unity communication is a set of classes in Unity corresponding to ROS message types. When users add the ROS-TCP-Connector Unity package to the Unity Editor, they can use its MessageGeneration plugin to generate C# classes, including serialization and deserialization functions, from ROS .msg and .srv files. The ROS-TCP-Connector package also includes scripts that the user can extend to publish messages from Unity to a ROS topic, subscribe in Unity to messages on a ROS topic, and create ROS service requests and responses. On the ROS side, a ROS package called ROS-TCP-Endpoint creates an endpoint that enables communication between ROS nodes and a Unity Scene via these ROS-TCP-Connector scripts.
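To make the subscriber side concrete, here is a minimal sketch that listens for pose messages in Unity. The class and method names (ROSConnection.GetOrCreateInstance, Subscribe, PoseMsg) and the topic name reflect one version of the ROS-TCP-Connector package and are assumptions that may differ in other versions.

using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Geometry;

// Sketch: subscribe to a ROS topic from Unity. The topic name "target_pose"
// is illustrative; PoseMsg is a C# class generated from geometry_msgs/Pose.
public class PoseSubscriberExample : MonoBehaviour
{
    void Start()
    {
        ROSConnection.GetOrCreateInstance()
            .Subscribe<PoseMsg>("target_pose", OnPoseReceived);
    }

    void OnPoseReceived(PoseMsg msg)
    {
        Debug.Log($"Received pose: ({msg.position.x}, {msg.position.y}, {msg.position.z})");
    }
}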

 

Let’s now take a look at how to use these ROS–Unity Integration packages for the task at hand. First, we create a publisher in Unity that sends the pose data to ROS over TCP. On the ROS side, we set up a ROS-TCP-Endpoint to subscribe to these pose messages.
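A minimal publisher along those lines might look like the sketch below; the ROSConnection calls (RegisterPublisher, Publish), the message types, and the topic name are again assumptions based on one version of the ROS-TCP-Connector package.

using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Geometry;

// Sketch: publish the target object's pose from Unity to a ROS topic each frame.
// The topic name "cube_pose" and the tracked Transform are illustrative.
public class PosePublisherExample : MonoBehaviour
{
    public Transform cube;  // the target object in the Scene
    const string TopicName = "cube_pose";
    ROSConnection ros;

    void Start()
    {
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<PoseMsg>(TopicName);
    }

    void Update()
    {
        // Convert the Unity transform into a geometry_msgs/Pose.
        // (A real project would also convert between Unity's left-handed and
        // ROS's right-handed coordinate conventions.)
        var msg = new PoseMsg
        {
            position = new PointMsg(cube.position.x, cube.position.y, cube.position.z),
            orientation = new QuaternionMsg(cube.rotation.x, cube.rotation.y,
                                            cube.rotation.z, cube.rotation.w)
        };
        ros.Publish(TopicName, msg);
    }
}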

 

Next, we will create a “Publish” button in the Unity Scene along with an OnClick callback. This callback function makes a service request to the MoveIt motion planner. The service request includes the current pose of the robot, the pose of the target object, and the target location. When MoveIt receives the planning request, it attempts to compute a motion plan. If successful, the service returns the plan, i.e., a sequence of joint positions, and a Unity script executes the trajectory using the ArticulationBody APIs. Otherwise, it returns a failure message.
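To illustrate the execution step, here is a rough sketch of how a Unity script might play back a returned plan on the robot’s joints. For simplicity, the plan is represented as a list of joint-angle arrays rather than the actual ROS trajectory message, and the joint ordering, units, and timing are illustrative assumptions.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Sketch: step through a motion plan, writing each trajectory point to the
// arm's joint drives. The plan is simplified to a list of joint-angle arrays
// (degrees), one array per trajectory point.
public class TrajectoryExecutorExample : MonoBehaviour
{
    public ArticulationBody[] joints;          // the arm's joints, in order
    public float secondsBetweenPoints = 0.1f;  // playback rate (illustrative)

    public void Execute(List<float[]> trajectoryPoints)
    {
        StartCoroutine(PlayTrajectory(trajectoryPoints));
    }

    IEnumerator PlayTrajectory(List<float[]> trajectoryPoints)
    {
        foreach (float[] point in trajectoryPoints)
        {
            for (int i = 0; i < joints.Length; i++)
            {
                // ArticulationDrive is a struct: copy, set the target, reassign.
                ArticulationDrive drive = joints[i].xDrive;
                drive.target = point[i];  // target angle in degrees
                joints[i].xDrive = drive;
            }
            yield return new WaitForSeconds(secondsBetweenPoints);
        }
    }
}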

 

The gif below shows a Unity simulation of the Niryo One arm successfully performing the pick-and-place task.

Simulation of a pick-and-place task on a Niryo One robot in Unity using ROS and MoveIt for motion planning

 

This example is only the beginning. Developers can use this demo as a foundation on which to create more complex Unity Scenes, to add different robots, and to integrate other ROS packages. Stay tuned for future posts that cover integrating computer vision and machine-learning-oriented tasks into a robotics simulation framework.

Conclusion

These tools lay the groundwork for a new generation of testing and training in simulation and make it easier than ever to use Unity for robotics simulation. Our team is hard at work enabling these next-generation use cases, including machine-learning training for robotics, sensor modeling, testing at scale, and more. Stay tuned for our next blog post in this series, which will show you how to train a vision-based machine-learning model to estimate the target object’s pose in the pick-and-place task.

Next steps

Get started with our robotics simulation tools for free. Check out our pick-and-place tutorial on GitHub.

For more robotics projects, visit the Unity Robotics Hub on GitHub. To see how our team is making it easier to train computer vision systems using Unity, read our computer vision blog series. 

For more information on how Unity can be used to meet your robotics simulation needs, visit our official robotics page.

If you’d like to contact our team directly with questions, feedback, or suggestions, email us at unity-robotics@unity3d.com.
