
In this three-part series, we analyze the areas we believe must be addressed for mixed reality to become mainstream: not only improving existing solutions, but reinventing and influencing the future and mass adoption of immersive technologies, including augmented reality.

The Mixed Reality Research Group is a recently created team I lead within Unity Labs. The team was brought together to study the impact and future of mixed and augmented reality technologies in the years to come, and to formulate a strategy that puts Unity developers at the forefront of how games and applications will be created, deployed, and consumed in what is expected to be a new era of computing. The group conducts advanced research, forms hypotheses about the long-term future, and predicts likely scenarios. Our mission is to help R&D properly support our developers and get them ready for that long-term future.

Why did we create this group? The reasons are many, but at its core, the group exists to get ready for nascent technologies that will become mainstream in the long term. Another part of our mission is to explore new tech in order to inspire devs and studios by showing them what will be possible in the future, and to go beyond “what is proven” by experimenting with new scenarios.

Why focus on Mixed Reality (MR)? Let us clarify AR, VR, MR, and other terminology first. Milgram’s 1994 paper, “Augmented Reality: A class of displays on the reality-virtuality continuum,” regards experiences in the purely physical world and experiences in the purely synthetic world of virtual reality as two opposite ends of a spectrum named the Reality-Virtuality Continuum. Within this spectrum, anything that combines real and virtual elements (to varying degrees) on the same display is said to be a Mixed Reality (MR) experience. Mixed Reality is itself a continuum, and it includes Augmented Reality (AR).

The reality-virtuality continuum concept was introduced in Milgram’s 1994 paper “Augmented Reality: A class of displays on the reality-virtuality continuum.”  Image credit: Matteo Valoriani, Etna dev 2016 – Introduction to Mixed Reality with HoloLens

The original definition of the virtuality continuum only took visual stimuli into account. In 2009, Jeon and Choi extended it to include the sense of touch. In this two-axis continuum, mixed reality encompasses any experience that combines real and virtual visual or haptic elements. As an example, our research prototype Carte Blanche (CB) is an Oculus Rift application in which the virtual creation desk is aligned with a physical one to make longer authoring sessions more comfortable. We also use the Touch controllers API to simulate haptic feedback through vibration. These elements make CB a mixed reality experience, specifically one falling in the visual virtuality – haptic mixed reality (vV-hMR) category.

Researchers Jeon and Choi added the touch dimension to the reality-virtuality continuum, resulting in a 2-axis spectrum where MR consists of experiences that combine real and virtual visual or haptic elements (gray areas in the figure).
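To make the taxonomy concrete, here is a minimal C# sketch of the two-axis classification. The type names are our own invention for illustration, not terminology from Jeon and Choi’s paper, and the mixed reality test simply encodes the reading of the figure described above.

// Illustrative sketch (our own type names, not from the paper): an experience's
// position on Jeon and Choi's two-axis visuo-haptic continuum.
public enum Axis { Reality, MixedReality, Virtuality }

public struct ContinuumPosition
{
    public Axis Visual;
    public Axis Haptic;

    public ContinuumPosition(Axis visual, Axis haptic)
    {
        Visual = visual;
        Haptic = haptic;
    }

    // Reading of the figure: everything except the pure-reality and
    // pure-virtuality corners counts as mixed reality.
    public bool IsMixedReality()
    {
        return Visual == Axis.MixedReality
            || Haptic == Axis.MixedReality
            || Visual != Haptic;
    }
}

// Carte Blanche: fully virtual visuals (Oculus Rift) combined with haptics that
// mix a real desk and simulated vibration, i.e. the vV-hMR category.
public static class ContinuumExamples
{
    public static readonly ContinuumPosition CarteBlanche =
        new ContinuumPosition(Axis.Virtuality, Axis.MixedReality);
}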

Our Carte Blanche research prototype is a mixed reality experience that combines real and simulated haptic feedback.  The virtual creation desk is aligned with a physical one, and we also simulate haptic feedback through the Touch controllers API.
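As a small illustration of the simulated haptics mentioned above, here is a minimal Unity C# sketch of triggering a short vibration pulse on a Touch controller. It assumes the Oculus Utilities package (OVRInput); the frequency, amplitude, and duration values are placeholders, not Carte Blanche’s actual settings.

using System.Collections;
using UnityEngine;

// Minimal sketch: buzz the right Touch controller briefly, e.g. when the user
// "touches" a virtual object. Assumes the Oculus Utilities package (OVRInput).
public class TouchHapticsExample : MonoBehaviour
{
    [SerializeField] float frequency = 0.5f;   // 0..1, illustrative value
    [SerializeField] float amplitude = 0.8f;   // 0..1, illustrative value
    [SerializeField] float duration = 0.1f;    // seconds

    public void PlayHapticPulse()
    {
        StartCoroutine(Pulse());
    }

    IEnumerator Pulse()
    {
        // Vibration runs until explicitly stopped, so turn it on, wait, turn it off.
        OVRInput.SetControllerVibration(frequency, amplitude, OVRInput.Controller.RTouch);
        yield return new WaitForSeconds(duration);
        OVRInput.SetControllerVibration(0f, 0f, OVRInput.Controller.RTouch);
    }
}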

Jeon and Choi’s “composite visuo-haptic reality-virtuality continuum” covers just two of the traditionally recognized human senses. What about the others? Work on multisensory immersive experiences has been ongoing for quite some time: a taste simulator that includes haptic feedback was created as far back as 2003 at the University of Tsukuba, and a more recent taste simulator came from the National University of Singapore. Work on systems for generating smells during virtual sessions has been advancing as well; just this year, a Japanese startup showed off one of the latest odor-emitting attachments for VR headsets. So, should we add more dimensions to the figure until all five classic human senses are included? Or should we take another approach and define a multidimensional space of criteria for the senses? Another important question about the definition of the reality-virtuality continuum is: why are some senses, vision in particular, considered necessary while others are not?

Regardless of how many dimensions we add to the definition, our group uses the concept of a mixed reality continuum so that we don’t box ourselves into a limited vision or a single type of experience (as augmented reality is). Virtual reality is one extreme of the virtuality continuum, where experiences happen in a controlled, purely synthetic environment. In that way, it is like TV, radio, or even a book: you experience it for a while and then return to reality. It is a way to escape reality and experience non-existent or remote worlds, which makes it rich for entertainment and education and essential for designing new scenarios. But we believe hybrid virtual-plus-physical immersive technologies will dominate day-to-day, continuous use. Mixed reality will be used widely because it will be experienced as a consistent overlay on everyday life; Carte Blanche is one example. A second example is a persistent layer on top of reality that will ultimately allow AR to replace your smartphone, TV, and other day-to-day devices. The current wave of immersive technologies started with consumer VR headsets; devices that can switch between VR and AR are the ultimate step in this evolution. MR will become the dominant technology for everyday life, and we are here to help ensure Unity empowers creators to transfer their imaginations into experiences.

Forecasting

We envision that within the next five years, a big leap in display resolution will push mixed reality towards more constant usage, making it possible for people to start working directly in MR devices for long periods of time. The readability of text will be a major driving factor for the adoption of this usage. Another factor enabling constant usage will be the ability to easily switch between VR and AR.

MR is poised to become more social, and since sensor occlusion will still be an issue when multiple people are in a room, we can expect more hybrid positional tracking setups; for example, we could see static LiDAR-on-a-chip sensors generating point clouds of the room, combined with inside-out tracking from the MR devices themselves.
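As a rough illustration of that kind of hybrid tracking, the sketch below blends an external, room-mounted sensor’s pose estimate with the headset’s inside-out estimate using a simple confidence-weighted interpolation. This is a toy scheme for illustration only; a production system would use a proper filter, and neither the function nor the weighting comes from any shipping device.

using UnityEngine;

// Toy sensor-fusion sketch: blend an external (room-scale) pose estimate with the
// headset's inside-out estimate, weighting by each source's confidence (0..1).
// A real system would use a proper filter (e.g. Kalman), not a lerp.
public static class PoseFusion
{
    public static void Fuse(
        Vector3 insideOutPos, Quaternion insideOutRot, float insideOutConfidence,
        Vector3 externalPos, Quaternion externalRot, float externalConfidence,
        out Vector3 fusedPos, out Quaternion fusedRot)
    {
        float total = insideOutConfidence + externalConfidence;
        // If neither source is confident, fall back to the inside-out estimate.
        float w = total > 0f ? externalConfidence / total : 0f;

        fusedPos = Vector3.Lerp(insideOutPos, externalPos, w);
        fusedRot = Quaternion.Slerp(insideOutRot, externalRot, w);
    }
}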

Wireless systems and eye tracking will be the norm, as well as articulated hand tracking through untethered sensors, leading people to become unglued from their PCs. Instead, they will sit around a living room table playing virtual board games with friends.  And with a proliferation of virtual screens, they could sit on a couch watching e-sports, or lie on a bed playing video games. That means we have to change our mindset regarding MR app design to fit these new norms.

Autonomous digital actors (holobots, avatars) will populate mixed reality layers. For example, voice assistants will probably have an emotive voice indistinguishable from a real human voice and a virtual body that lets them interact with the user more directly (perhaps sitting next to them on a chair, or walking ahead to show directions).

MR actors such as holobots will become ubiquitous in our daily lives.

What will our group be doing in 2017-2018? We’ll be posting on a regular basis to let you know, but as a bit of a teaser, here are some of the interesting areas of MR that we believe should be improved, and that we briefly touched on in our presentation at SIGGRAPH 2017:

I. Applications are invisible, invisible things are forgettable

As we alluded to above, MR devices could eventually replace smartphones. And with Apple’s ARKit available on iOS devices and Google’s ARCore on Android devices, the transition from smartphone to full-fledged MR device has already begun. Along with devices, the mobile application market has reached a maturity that precipitates a saturation problem. Typically, people use applications only once after download, and over time they slowly forget about them.

“Gaming, the app category formerly known as ‘the darling of the mobile industry,’ saw time-spent decline by 4% year-over-year. […]

After 10 Years, Mobile Reaches Moment of Truth. As the iPhone celebrates its first decade, the mobile industry has grown into a dog-eat-dog world. The decelerating rate of growth could signal market maturity, saturation or simply the end of the app gold rush.” (On Their Tenth Anniversary, Mobile Apps Start Eating Their Own)

The “1 content / service = 1 mobile application” business model is no longer viable in a world that has reached saturation, and simply replacing a flat application with an AR application built on the same principle will not be the solution. Brands, companies, and startups will all try to find other ways to interact with consumers. We’re looking into ways of making MR apps more engaging and less of a one-off occurrence. As our colleague Mauricio Vergara points out, “With the adoption of immersive computing at scale, real time 3D graphics will become the way we interact with the world.”

II. Expanding mixed reality beyond current perceptions

Until now, MR applications have primarily been designed as solitary, single-use experiments. Because of that, many of today’s user scenarios are somewhat limited and lean towards being novelties rather than engaging solutions.

“The most thrilling mixed reality experience involves real-time, 3D mapping of the environment, which enables virtual objects to interact with surfaces and objects in the real world. For example, a computer-generated creature that can stand on a table — or hide behind it.” (Why Google and Apple will rule mixed reality By Mike Elgan)

To create the uses of tomorrow, we must change perspective and ask ourselves interesting questions: does reality disappear when we close our eyes? In the future, devices may no longer be used to generate a sandboxed reality; instead, they will be tools that connect users to a persistent layer of reality. For example, museums could create interactive captions around masterpieces, sports stadiums could display real-time virtual FX above players in motion on the field, and restaurants could display interactive menus on tables. The possibilities are endless, and in this future, users just need to be in the right place to take advantage of the augmented layer. Platforms like Google’s Visual Positioning Service (VPS), which brings quick and accurate indoor location mapping and understanding, will help provide seamless augmented experiences anywhere. The MR Research Group will look into creating tools that help users and devs align their thinking with this new perspective.
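To sketch what that persistent layer could look like from an application’s point of view, here is a hypothetical C# registry of location-anchored content. The types and the registry itself are invented for illustration; a localization service such as VPS is assumed to supply the device’s position in the shared coordinate frame.

using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical sketch of a persistent content layer: content is anchored to
// world positions, and a device that has localized itself (e.g. via a service
// like VPS) simply queries for whatever is anchored nearby.
public class AugmentedLayerRegistry
{
    public class AnchoredContent
    {
        public string Id;             // e.g. "museum.caption.room3"
        public Vector3 WorldPosition; // position in the shared coordinate frame
        public GameObject Prefab;     // the virtual caption, menu, FX, etc.
    }

    readonly List<AnchoredContent> anchors = new List<AnchoredContent>();

    public void Register(AnchoredContent content)
    {
        anchors.Add(content);
    }

    // Return everything anchored within `radius` meters of the device's
    // localized position, ready to be instantiated by the app.
    public IEnumerable<AnchoredContent> QueryNearby(Vector3 localizedPosition, float radius)
    {
        return anchors.Where(a =>
            Vector3.Distance(a.WorldPosition, localizedPosition) <= radius);
    }
}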

III. Future MR experiences should be more social

Many current MR experiences do not have a strong enough social component and end up being a lonely affair for both the user and the observer; we want to help change that paradigm. Some notable social VR apps have sprung up this past year, but social augmented reality experiences are more difficult to create and are still in their infancy. Often, even if an observer is in the same room as the participant, it is difficult for that observer to imagine the user’s experience because they typically can’t see any of the content.

Over the past few months, we have observed many augmented reality users during events; our conclusion is that the people around them get bored, stop interacting with the wearer, and ignore them as if they were no longer part of their reality (here are some example videos). In a similar way, it is important to keep in mind that the popularity of VR does not come from the people who try a VR HMD themselves, but rather from the people who demo it on YouTube with a “Mixed Reality View”. Seeing both the content and the user’s reaction is what generates interest for the observer.
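One practical way to keep observers engaged is to give them their own view of the content, for example by rendering a third-person spectator camera to a second display. Below is a minimal Unity C# sketch of that idea; the component and field names are ours, and a full mixed reality capture setup would also composite a live camera feed of the user.

using UnityEngine;

// Minimal sketch: render a third-person "spectator" camera to a second monitor
// so people in the room can follow what the headset user is seeing and doing.
public class SpectatorView : MonoBehaviour
{
    [SerializeField] Camera spectatorCamera;  // a non-HMD camera placed in the scene

    void Start()
    {
        // Activate the second display if one is connected, and send the
        // spectator camera's output there instead of the headset.
        if (Display.displays.Length > 1)
        {
            Display.displays[1].Activate();
            spectatorCamera.targetDisplay = 1;
            spectatorCamera.stereoTargetEye = StereoTargetEyeMask.None; // render mono
        }
    }
}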

Mark Billinghurst tells us augmented reality can bring empathic computing: “Systems that allow us to share what we are seeing, hearing and feeling with others.” It is an amazing idea, like gaining new superpowers; however, it comes at a cost. Increasing the social aspect naturally raises questions about privacy and freedom, which will be big issues for the future of immersive technologies regardless of the specifics of the implementation. Privacy and freedom must be considered by everyone involved: engineers, artists, designers, and policymakers alike. Since MR tech could be ubiquitous within a decade, these discussions need to start now, and we will be contributing.

Social aspects of MR will be important for both personal and business uses.

IV. Game Authoring Evolution

The future of gaming and game authoring has never been so bright. MR brings with it a whole new era filled with new types of games and new ways to play them. As Unity, we are in a unique position to affect that future. Our group will study how games can and will evolve in this new era of mixed reality. How will games be played? When will they be played? How persistent will they be with regard to location and participation? How will they be created? These are important questions, and mixed reality provides a sea of possibilities for answering them. Part of our work will help guide Unity into that future. That being said, Unity’s core strength is its developer community, and our group’s work will help shape a concrete plan for our developers to be successful in this new world. We will work with other groups within Unity on the creation of new and improved MR-focused development tools that will help drive our users’ success.

In part 2 and 3 of this series we will explore the implications of the above issues and will dive into the design challenges we face with this new medium. Part 2 has been published and can be seen here. Part 3 can be found here.

Article contributors: Greg Madison, Lead UX/IxD Designer and futurist; Colin Alleyne, Senior Technical Writer; and Sylvio Drouin, VP – Unity Labs.  



  1. This is very cool, you are smart.

  2. Douglass Turner

    October 11, 2017 2:52 pm

    There is a big, hairy, fundamental “bug” underlying this MR/VR (any R really) vision that cannot be ignored and will put a significant damper on the importance of this tech:

    The requirement of presence. In both time and space (location).

    What do I mean? The big idea of the Web was: time and location no longer matter. This enables undreamed of reach and leverage. Vast resources are effortlessly marshalled via a WIMP interface on either desktop or smartphone. Everything is decoupled: sender/receiver need not be present in time or space.

    iPhone took all of this to an entirely other level by providing a deeply intimate UX with an almost unconscious user interaction and very low cognitive load.

    AR/MR/VR/*R discards this leverage. Time and place come roaring back with the insistence of now! now! now! I must be in Beijing. I must be in Paris. I must be in Brooklyn. Steve Jobs’s lovely computing metaphor of a “bicycle for the mind” is completely absent. There is no mechanical advantage. Everything is 1 to 1. Every input must map directly to a specific output.

    Step back from the XR hype parade and it all looks very retrograde and rather backwards and primitive.

    Discuss?

    1. Dioselin Gonzalez

      October 21, 2017 2:02 pm

      Hi Douglass! I answered your comment here. Thanks

  3. How can I get one of those Unity t-shirts like the peeps in the photo??

    1. Dioselin Gonzalez

      September 7, 2017 7:16 pm

      These are employee shirts :) So (shameless plug) check https://careers.unity.com/ and join us!
      Kyle is wearing our pride celebration shirt, and Jono has the one from GDC.

  4. Philip J. Maschke

    September 6, 2017 2:15 pm

    Interesting. A group just like one I would like to join one day myself; however, at this point I have to work on peer-reviewed research, which is rather difficult in this new area of technological development. My research interest is at the other end of the spectrum of what you call MR, as I want to understand what the dangers and pitfalls of this new immersive technology will be. Just as the world did not really get smarter with the invention of the internet, I am wondering how living in your own (virtual) world is going to affect one’s beliefs and thoughts.
    As a PhD student at KU, I would love to find a way to collaborate with you as I am working towards my dissertation. I will get in touch with you shortly though :)

    1. Dioselin Gonzalez

      September 7, 2017 7:11 pm

      Hi Philip, your dissertation topic is fascinating. In case it’s not already on your reading list, Rainbows End is good inspiration and thought-provoking. One of the topics it touches on is how individuals may lose touch with the physical world when everything is digital and illusory.

  5. Insightful article!
    It might be helpful to investigate what an MR operating system would look like and what requirements it has, before diving into gaming itself.

    Maybe future apps/games will have to be web-based?
    Maybe apps will have to respond to how much physical space I allow them to occupy?
    Maybe apps won’t be apps but filters for the real world?
    What would all that imply for my game design?

    So many questions! Can’t wait to hear more from your research in parts two and three.

    1. Dioselin Gonzalez

      September 6, 2017 7:41 pm

      Lots of interesting questions! And yes, we should all look at the big picture of what is needed for mixed reality to become part of our everyday lives, rather than focusing on specific uses or occasions. Stay tuned, and thanks!