Project MARS, Mixed and Augmented Reality Studio, is a new Unity toolset specifically designed to help our creators make better spatial experiences and games that can run anywhere in the world. It has two key parts: a Unity extension and companion apps for phones and AR head-mounted displays (HMDs).

We are at a fascinating inflection point for computers. The rise of ubiquitous sensors and fast processors means that we can finally move towards the spatial computing vision that has been described and tested in various forms since the 1960s. At last, we have a variety of small, flexible computers that can take in information about the world – and do something interesting with it.

Unity has long been used to make digital worlds for games and simulations, but we could only experience them through a window, using peripherals to have our avatars run around vast worlds while we sit on our couches. Virtual reality (VR) took us closer by allowing people to step into the window. Mixed reality (MR) lets digital objects step through to the other side of that window, out into the real world.

A whole class of creators is learning how useful Unity can be for creating mixed reality experiences. We can use all of the same systems we built for digital worlds – animation, physics, and navigation, for example – to test and build apps that run in and respond to the real world.

But as the Unity Labs’ Authoring Tools Group dug into the use cases for AR/MR, we realized that not only do apps need to work in the real world, but we also need to get more information about the real world back into the Editor. We need easy ways to tell Unity what’s real and what’s not and to let us design, develop, and test our real-world applications more easily.

Enter Project MARS, a new Unity toolset specifically designed to help our creators make better spatial experiences and games that can run anywhere in the world. MARS stands for Mixed and Augmented Reality Studio. It has two key parts: a Unity extension and companion apps for phones and AR head-mounted displays (HMDs).

We announced Project MARS at Unite Berlin in 2018. As we near beta release, we wanted to share an overview of the toolset and new features we’ve been building since the initial announcement.

Key MARS features and the problems they solve

The simulation view

 

The simulation view is one of the most significant new features of MARS. One curious property of MR/AR apps is that there are two world spaces that need to be defined: the Unity world space and the real-world space. The simulation view provides a place to lay out objects and test events in a simulated real-world space. It’s a new dedicated window in the Editor that lets you input real or simulated world data, like recorded video, live video, 3D models, and scans, and start laying out your app directly against this data. This window includes tools and UI to see, prototype, test, and visualize robust AR apps as they will run in the real world.

The simulation view is straightforward to explain, but developing it required us to create a complex system to address what we’ve dubbed “the simulation gap.” The simulation gap is the difference between the perfect information computers have about digital objects, and the reality of current devices and sensors that can only detect partial, imperfect data. We solve this in a variety of ways, from simulated discovery of surface data to our robust query system. We’ll delve deeper into these in an upcoming blog post.

The simulation device view

 

The simulation device view is the flip side of the simulation view. As well as simulating the world in the Unity Editor, you also simulate a device moving around that world. This perspective lets you quickly experience your app the same way most of your users will on a mobile AR device. Not only does this help you see whether your AR app works well across different spaces without requiring you to physically test in each one, it also significantly reduces iteration time as you build your AR apps. You can control the camera as if it were a device, using your keyboard and mouse, or use a device running the companion app to stream real data into the Editor.

New ways to describe real-world concepts

MARS has a series of new constructs to let us describe, reason about, and visualize real-world objects in our workflows. Conditions define objects, and multiple objects define scenarios.

We start with Conditions, which describe individual characteristics we’re looking for: an object’s size, its GPS location, its shape, and so on.

Then, we define a Real World Context as a set of Conditions. For example, to describe a table, we could use Conditions for Surface Size (e.g., this surface is at least 1×1 meter), Alignment (this surface is horizontal), and Elevation (this object is at least a meter off the ground).

It’s important that these Conditions be fuzzy and tolerant enough to handle the variation in the spaces users will be in, so many spatial Conditions are defined by a minimum and maximum range (for example, this surface is between 1×1 and 3×3 meters). These spatial Conditions draw scene gizmos, which let you visualize and tweak the range in the Editor.

While many of these examples involve size, height, geolocation, and other spatial properties of objects, Conditions don’t have to be spatial. For example, you can define a Condition for the time of day or the weather (this content should only appear at noon on sunny days).
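To make this concrete, here is a minimal C# sketch of what a “table” description boils down to. Every type in it (ICondition, DetectedSurface, SurfaceSizeCondition, AlignmentCondition, ElevationCondition) is an illustrative stand-in rather than the MARS API; in MARS itself you author Conditions in the Editor and tweak their ranges with the scene gizmos mentioned above.

// Illustrative sketch only - these types are stand-ins, not the MARS API.
using UnityEngine;

// A detected real-world surface, as an AR tech stack might report it.
public class DetectedSurface
{
    public Vector2 Size;        // width x depth, in meters
    public bool IsHorizontal;   // alignment reported by the AR stack
    public float Elevation;     // vertical position, in meters (relative to the floor
                                // when one is known, or to the session origin otherwise)
}

// A single fuzzy characteristic we are looking for.
public interface ICondition
{
    bool Matches(DetectedSurface surface);
}

// "This surface is between 1x1 and 3x3 meters."
public class SurfaceSizeCondition : ICondition
{
    public Vector2 MinSize = new Vector2(1f, 1f);
    public Vector2 MaxSize = new Vector2(3f, 3f);

    public bool Matches(DetectedSurface s) =>
        s.Size.x >= MinSize.x && s.Size.y >= MinSize.y &&
        s.Size.x <= MaxSize.x && s.Size.y <= MaxSize.y;
}

// "This surface is horizontal."
public class AlignmentCondition : ICondition
{
    public bool Matches(DetectedSurface s) => s.IsHorizontal;
}

// "This object is at least a meter off the ground."
public class ElevationCondition : ICondition
{
    public float MinElevation = 1f;

    public bool Matches(DetectedSurface s) => s.Elevation >= MinElevation;
}

Expressed this way, a table is simply “a horizontal surface of roughly tabletop size, at least a meter off the ground,” which is what keeps the match tolerant of different rooms.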

To create more complex and specialized behavior, we can string these Real World Contexts together into groups that describe larger scenarios. Say you want to create an AR video streaming app that puts playback controls on your coffee table, the video library on your bookshelf, and the virtual screen on the biggest wall in the room. You start by defining each of those Real World Contexts (table, bookshelf, wall), then group them into a single description of a room containing multiple real objects.

Of course, at any stage in these descriptions of reality, you have to consider that the objects you’re describing may not exist in the user’s environment. For example, if a user doesn’t have the bookshelf from the previous example, you don’t want your app to simply fail; you want it to gracefully adjust to a simpler set of requirements. For this, we provide Fallback events, where you can define a second-best scenario, and then a worst-case scenario (for example, the user has only found a single surface). This layering of Ideal → Acceptable → Minimal lets you balance deeply contextual behavior in the best case, where the user has carefully mapped an environment that generally resembles what you designed the app for, with functional behavior in the worst case, where the user is in a very unexpected environment and/or hasn’t scanned much.

In summary, Conditions describe individual properties; a set of Conditions describes a Real World Context; a set of Real World Contexts describes the whole environment you expect, or pieces within it.  With these elements, you can describe an “Ideal → Acceptable → Minimal” layering of states for your app.
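As a rough sketch of how that grouping and the Ideal → Acceptable → Minimal layering fit together, the code below reuses the hypothetical ICondition and DetectedSurface stand-ins from the previous sketch. RealWorldContext and LayoutPlanner are likewise illustrative names, not MARS types.

// Illustrative sketch only - reuses the stand-in types from the previous example.
using System.Collections.Generic;

// A Real World Context: a named set of Conditions (e.g. "table", "bookshelf", "wall").
public class RealWorldContext
{
    public string Name;
    public List<ICondition> Conditions = new List<ICondition>();

    public bool Matches(DetectedSurface surface)
    {
        foreach (var condition in Conditions)
            if (!condition.Matches(surface))
                return false;   // every Condition must hold
        return true;
    }
}

public static class LayoutPlanner
{
    // Try the full scenario first, then fall back to progressively simpler layouts.
    public static void PlaceContent(List<DetectedSurface> detected,
                                    List<RealWorldContext> ideal,       // table + bookshelf + wall
                                    List<RealWorldContext> acceptable,  // e.g. table + wall only
                                    List<RealWorldContext> minimal)     // worst case: any single surface
    {
        if (TryLayout(detected, ideal)) return;
        if (TryLayout(detected, acceptable)) return;
        TryLayout(detected, minimal);
    }

    // Naive matching: every context must claim at least one distinct detected surface.
    static bool TryLayout(List<DetectedSurface> detected, List<RealWorldContext> contexts)
    {
        var remaining = new List<DetectedSurface>(detected);
        foreach (var context in contexts)
        {
            var match = remaining.Find(context.Matches);
            if (match == null)
                return false;           // this scenario can't be satisfied here
            remaining.Remove(match);    // attach this context's content to `match` here
        }
        return true;
    }
}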

You can also define characteristics that map to your underlying tech stack with Trait Conditions, or named properties. Depending on the device or software you’re using, you can name anything from semantic room objects to 3D markers to positioning anchors. We’ve kept this as flexible as possible so that it can work well with any upcoming world-data technology, as well as AR Foundation’s supported property types. Today, we use Traits like “floor,” “wall,” or “ceiling,” but looking ahead, these Traits open up the vast possibility of recognized objects (“cat,” “dog”) and properties (“wood,” “grass”). Each month brings exciting new developments in this field, and we need to make sure we can support all of them.
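As a sketch, a Trait Condition can be thought of as a check against a semantic label attached to the detected data. The TraitCondition and TaggedSurface types below are hypothetical extensions of the earlier stand-ins, not the MARS Traits API; the actual labels come from whatever device or software stack supplies your world data.

// Illustrative sketch only - hypothetical types, not the MARS Traits API.
using System.Collections.Generic;

// A detected surface enriched with semantic labels from the underlying stack.
public class TaggedSurface : DetectedSurface
{
    public HashSet<string> Traits = new HashSet<string>();  // e.g. "floor", "wall", "cat"
}

// "This data is tagged with the named trait."
public class TraitCondition : ICondition
{
    public string RequiredTrait = "floor";

    public bool Matches(DetectedSurface s) =>
        s is TaggedSurface tagged && tagged.Traits.Contains(RequiredTrait);
}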

Once you’ve defined what you’re looking for and where your objects should go, you might want to get more granular about object placement. We’ve created a system of Landmarks, which let you be more precise about where objects should be placed and oriented on a matched Real World Context.

Advanced data manipulation

Reasoning APIs are an advanced feature: scripts users write that can interface with the entirety of MARS’ world understanding at once, rather than one piece of data at a time. This allows you to make advanced inferences and combine, create, and mutate data.

A classic example is inferring that the floor is the lowest, largest plane found after scanning the space. Some devices, like the HoloLens, give you a floor by default, but other sensors and tech stacks do not. The Reasoning API lets you mix and match input to come up with even better real-world understanding and more interesting events. Importantly, this code stays out of your application logic.
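Here is that floor example as a plain C# sketch, building on the hypothetical TaggedSurface stand-in above. It only shows the shape of the inference; the actual Reasoning APIs run against MARS’ world data as described above.

// Illustrative sketch only - the floor-inference idea, not the Reasoning API itself.
using System.Collections.Generic;
using System.Linq;

public static class FloorReasoning
{
    // Infer the floor as the lowest, largest horizontal plane found so far,
    // for tech stacks that don't report a floor directly.
    public static TaggedSurface InferFloor(IEnumerable<TaggedSurface> planes)
    {
        var floor = planes
            .Where(p => p.IsHorizontal)
            .OrderBy(p => p.Elevation)                   // lowest first...
            .ThenByDescending(p => p.Size.x * p.Size.y)  // ...largest area as tie-breaker
            .FirstOrDefault();

        if (floor != null)
            floor.Traits.Add("floor");  // expose the inference as a trait, outside app logic
        return floor;
    }
}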

The companion apps

As much work as we’ve done putting the real world into the Unity Editor, we’d be remiss if we didn’t take advantage of the portable devices that work well with real-world data in space. That’s why we’ve created MARS companion apps for AR devices. The first iteration is for mobile phones: you can connect the app to your project in the Unity Cloud, then lay out assets as easily as placing a 3D sticker. You can create conditions, record video and world data, and export it all back straight into the Editor. It’s another step in closing the loop between the real world and the digital.

Next steps

Alpha access

MARS is currently in closed alpha, but we’re looking for dedicated teams to partner with who are trying to push the bounds of spatial applications. We want people to battle-test MARS and help us prioritize our own roadmaps by giving us feedback on the tools and features that would help them the most. We’ve put down the foundations, but we want to make sure we’re building the right thing so that you can create amazing experiences.

Acknowledgments

The MARS project has been inspired and informed by our hardware and software partners at Microsoft, Magic Leap, Google, Facebook, and many other companies working at the frontiers of what’s possible: from location-based virtual experiences to automotive visualizations, space simulators, architectural previz, innovative mobile AR games, and more. To all of the companies we’ve talked to and partnered with, many thanks, and a special thanks to Mapbox for co-building the first geospatial integration.

We’re also building on prior work from our own Mixed Reality Research Group, and collaborating closely with the XR Platforms team. Their AR Foundation and the XR Interaction Toolkit have provided a solid tech base on which to build MARS.

If you’re interested in learning about MARS and staying up to date as we move towards a wider release, please sign up and check out our new Project MARS webpage.

Comments

  1. Great job guys with the MARS project! Really promising! Looking forward to test the beta version when it comes out!
    I do have couple of questions though.
    1- Would I be able to connect real-time data from a kinect sensor (cloud data) to automatically generate the floor/walls and 3D objects like a chair or a box at the right place within the 3D world to match the real life 3D cloud? All in real-time?
    2- Do we have any estimate for when the beta will be released?

    Amazing work!

    Cheers,

    1. 1. Not exactly. You can take in 3D meshes, point clouds, or models and put them in the simulation view, and tag them with semantic data. You can also bring in a live camera feed. But we don’t have Kinect support directly streaming to the sim view at this time.
      2. No public date for the beta, but contact us directly at mars@unity3d.com if you’d like access to the private alpha.

  2. I get “Invalid length for field value” in the form for both “Have you made an AR app before” and “What would be the #1 feature”. Can’t sign up (or have to put the text somewhere else).

    1. Additionally, I’m not allowed to have URLs in the field where you ask about projects. You make it really hard to fill in this form :)

    2. So sorry about that! Please email us directly at mars@unity3d.com. :)

  3. Hi Unity team!
    MARS sounds cool, but there is a fundamental question about its concept that is not clear to me after reading this post. Will MARS focus on
    a) AR apps that work in any user environment, so e.g. in your living room as well as in mine;
    or
    b) Persistent AR experiences that work exclusively at one specific real world location (including solutions for re-localisation / area description files / 3D scanning and mapping the environment);
    or
    c) both?

    If persistent AR is on your agenda, we would love to help you with testing and improving MARS. Please contact us via https://www.ar-action.com/contact/

    1. Hi Matt! Yes, MARS is focused on _both_ adapting your app to lay itself out in any environment, and also persistent location-specific authoring — we’ll dig more into that in future posts. They aren’t separate workflows, because even if you do have an accurate scan of a location that you intend your app to relocalize in, there will still be unknown dynamic elements (people, movable chairs, cars, weather, time…) that your app can respond to. So we try not to think of it as either procedural or static, but often some combination of both.
      I’ll ping you on that contact form :)

  4. project MARS is cool! anyway, where is DOTS sample project?

    1. This is not a feature request forum, please use the forums for feedback and feature requests rather than using the blog posts and effectively spamming + attempting to derail discussion.