
In the Unity Labs Authoring Tools Group, we explore the future of content creation, focusing on how we can use XR to make 3D content creation faster, easier, and more approachable.

We’ve shipped EditorVR, which brings Unity into the headset and minimizes VR development iteration time, and we’re developing Carte Blanche, opening virtual world and experience design to a new audience of non-professional creators. We take an experiment-driven approach to our projects, since XR software is still far from having concrete standards, and we believe there’s still far more to discover than is already known.

Since we first started working on these XR authoring tools, one topic we’ve been eager to tackle is animation tools: how could we use virtual objects to quickly sketch out a sequence, and what could XR do to make 3D animation more accessible to everyone?

Goal: Keep it small & focused; build off what others have done

An animation tool could easily constitute a year-long project, but we explicitly set out to make this one quick: one developer, one month. The goal was to test out UX paradigms that can work their way back into our larger immersive projects.

We’re big fans of the VR animation app Tvori. In Tvori, the user builds out a scene, and then records animation per-object with real-time motion capture from the controllers. We’ve loved playing with it, and with many of us having experience in flatscreen animation tools (Maya, After Effects, Flash, etc), we were hungry to also have editable splines and a full-blown keyframe/track-based timeline. So those specific features were our focus in building this project.

Our hybrid solution

In our project, the user starts in an empty grid scene, with a simple palette of objects and a blank timeline in front of them. They can assemble the scene from the objects, and then ‘arm’ the recording, so that motion capture will begin once they start moving an object. When they release the object, recording stops, the new motion curve is visible, and the timeline shows keyframes for the beginning and end of the motion. The user can reach into the motion curve and adjust its points with a smooth falloff, and adjust the keyframes on the timeline to speed up or slow down the entire motion.
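The arm-then-capture flow described above can be sketched as a small state machine: recording is armed, capture begins only when the user actually moves an object, and releasing the object stops it. This is an illustrative reconstruction, not the shipped implementation; all names here are ours.

```csharp
using UnityEngine;

// Hypothetical sketch of the 'armed recording' flow: Idle -> Armed -> Recording.
public class ArmedRecorder : MonoBehaviour
{
    public enum State { Idle, Armed, Recording }
    public State CurrentState { get; private set; } = State.Idle;

    float recordStartTime;

    // Called when the user presses the record button: capture is armed,
    // but nothing is recorded yet.
    public void Arm() => CurrentState = State.Armed;

    // Called by the interaction system each frame the user moves a grabbed object.
    public void OnObjectMoved(Transform obj)
    {
        if (CurrentState == State.Armed)
        {
            // First motion after arming starts the actual capture.
            recordStartTime = Time.time;
            CurrentState = State.Recording;
        }
        if (CurrentState == State.Recording)
            SampleKeyframe(obj, Time.time - recordStartTime);
    }

    // Called when the user releases the object: capture stops, leaving
    // keyframes at the start and end of the motion on the timeline.
    public void OnObjectReleased(Transform obj)
    {
        if (CurrentState == State.Recording)
            CurrentState = State.Idle;
    }

    void SampleKeyframe(Transform obj, float t)
    {
        // Append the object's current pose to the captured motion curve here.
    }
}
```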

What we learned

User feedback and visual polish are everything.

A little bit goes a long way

It’s tempting when building a new UI to just build it out of white cubes, or to think of user feedback (visual changes, sounds, haptics) as “just” polish. But that feedback and visual polish are hugely important, and even a little bit goes a long way in making a UI discoverable, meaningful, and testable. If we have to explain to a new tester what each button does, then we’re not testing the usability of the system, and moreover we’re forcing the user to keep a complicated mental model in their head, taking up bandwidth that they should be spending on actually using the tool.

In this project, any time we introduced a new UI element, we’d make sure to take a minute to actually model out a basic icon, making sure that testers found UI elements self-explanatory. We don’t think of it as “polishing the art” (it was still programmer art, after all!), but just making something that early testers can actually use and give meaningful feedback on.

Give as much feedback as possible: haptic, aural, visual

Ultimately, when giving the user feedback, we find we should use every channel we have. If the user is hitting a button, it should light up, make a noise, and vibrate the controller. This doesn’t just apply to the moment of impact, but at every stage of the interaction: we have hover start/stay/stop, and attach start/stay/stop, so could potentially have at least six pieces of interaction feedback per element. We try to at least provide feedback for hover, attach, and end/confirm. In 2D UI, you often get these feedback patterns for free, but in XR, you have to build them from scratch.
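One way to structure those six hooks is a small per-element interface, with each element deciding how to respond on each channel. This is a sketch under our own naming, not an official API; the haptic call is SDK-specific and left as a comment.

```csharp
using UnityEngine;

// Six interaction-feedback hooks: hover and attach, each with start/stay/stop.
public interface IInteractionFeedback
{
    void OnHoverStart();
    void OnHoverStay();
    void OnHoverStop();
    void OnAttachStart();
    void OnAttachStay();
    void OnAttachStop();
}

// A button might respond on every channel at once: visual, aural, haptic.
public class ButtonFeedback : MonoBehaviour, IInteractionFeedback
{
    public Renderer visual;
    public AudioSource audioSource;
    public AudioClip hoverClip, pressClip;

    public void OnHoverStart()
    {
        visual.material.color = Color.cyan;   // visual: highlight
        audioSource.PlayOneShot(hoverClip);   // aural: soft tick
        // Fire a short controller vibration here (SDK-specific). // haptic
    }
    public void OnHoverStay()  { }
    public void OnHoverStop()  { visual.material.color = Color.white; }
    public void OnAttachStart(){ audioSource.PlayOneShot(pressClip); }
    public void OnAttachStay() { }
    public void OnAttachStop() { }
}
```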

To help think through what feedback to give, we drew out a spreadsheet of each state (default, hover, selected, confirmation) and each element (animatable object, motion curve, keyframe, each button), so we could identify which elements were or were not reflecting different interactions.

Grab vs select

We’ve tried a few different approaches for selection versus manipulation of objects in our authoring projects, and this time made the most explicit distinction yet: the primary trigger (Touch index trigger / Vive trigger) will select an object, and the secondary trigger (Touch hand trigger / Vive grip) will manipulate it. This turned out to work really well in this project, since everything you can select can also be moved, and we wanted to avoid accidentally moving anything.
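The split between the two triggers can be sketched roughly as below. The button names and grab-by-parenting approach are placeholders for illustration; real bindings and attachment logic depend on your input SDK.

```csharp
using UnityEngine;

// Sketch: the primary trigger only selects the hovered object;
// the secondary trigger (grip) actually moves it.
public class SelectVsGrab : MonoBehaviour
{
    public Transform hovered;   // set elsewhere by your hover/raycast logic
    Transform grabbed;

    void Update()
    {
        // Primary trigger: selection only, never moves the object.
        if (Input.GetButtonDown("PrimaryTrigger") && hovered != null)
            Select(hovered);

        // Secondary trigger: manipulation, by parenting to the controller.
        if (Input.GetButtonDown("SecondaryTrigger") && hovered != null)
        {
            grabbed = hovered;
            grabbed.SetParent(transform, worldPositionStays: true);
        }
        if (Input.GetButtonUp("SecondaryTrigger") && grabbed != null)
        {
            grabbed.SetParent(null, worldPositionStays: true);
            grabbed = null;
        }
    }

    void Select(Transform t)
    {
        // Highlight the object and mark it as the current selection.
    }
}
```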

EditorVR has a similar concept, where you can move Workspaces using the secondary trigger and interact with them with the primary trigger, and select objects at a distance vs. manipulate them directly (both using the primary trigger).

Keep UI close to the user, and let them summon it

When designing 2D interfaces, we can simply stick a UI control in the upper-left corner of the window, and be done with it. Not so in VR. Especially on a room-scale setup, the user could start the app from anywhere in the room. Some apps will simply plant their UI in the center of the tracking volume, which often means the user will start out on the wrong side of the interface, or worse, inside it. The solution that we’ve found works well in each of our authoring tools is to start any UI within arm’s reach of the user, and, if they walk away, let them “summon” the panel back to an interactable range.
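A minimal version of that summon behavior places the panel in front of the user's head, a comfortable distance away and facing them. The distances below are our guesses, not values from the shipped tool.

```csharp
using UnityEngine;

// Sketch: spawn the panel within arm's reach, and let a button press
// pull it back in front of the user if they walk away.
public class SummonablePanel : MonoBehaviour
{
    public Transform head;               // the user's camera / HMD
    public float summonDistance = 0.5f;  // roughly arm's reach, in meters

    void Start() => Summon();

    // Hook this up to a controller button or menu action.
    public void Summon()
    {
        // Project the head's forward onto the horizontal plane so the panel
        // doesn't spawn at floor or ceiling level when the user looks up/down.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        transform.position = head.position + forward * summonDistance + Vector3.down * 0.2f;
        // Face the panel toward the user.
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```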

Give your UI some physicality

Flatscreen interfaces generally don’t have inertia, and it can be surprising and even unpleasant when they do. A mouse is a superhuman input device for manipulating points and data, and it is hardly ever thought of as a literal representation of your physical body.

In VR, the exact opposite is true: since we do very much embody tracked input devices, objects must have inertia and physicality. If we grab an object in VR and give it a hard push, it’s very jarring for the object to suddenly stop in its tracks when we let go. This is obvious when we’re talking about throwing a virtual rock, but less clear in the case of interface panels.

But in our experiments, and using other VR apps that do or don’t apply physicality to their UI, we find that it’s just as essential. Of course there’s a balance to strike, because you probably don’t want your UI to clatter to the ground after you throw it. The solution we’re using in the Animator is a simple non-kinematic, non-gravity-affected Rigidbody with some drag; you can give it a good push and it’ll float away, but also slow down quickly and stay close enough that you won’t have to go hunt down where all your UI has floated off to. To be exact, we use Drag = 8, Angular Drag = 16 (because accidental rotation when you release a panel is very annoying), which makes for a pretty subtle, but nice, effect.
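In Unity terms, that setup is a few lines. The drag values below (Drag = 8, Angular Drag = 16) are the ones given above; the component around them is a minimal reconstruction, not the project's actual code.

```csharp
using UnityEngine;

// A floaty UI panel: responds to a push, but slows down quickly and
// never falls to the floor.
[RequireComponent(typeof(Rigidbody))]
public class FloatyPanel : MonoBehaviour
{
    void Start()
    {
        var body = GetComponent<Rigidbody>();
        body.useGravity = false;   // panels shouldn't clatter to the ground
        body.isKinematic = false;  // but they should respond to physics
        body.drag = 8f;            // damps linear motion quickly
        body.angularDrag = 16f;    // accidental spin on release is very annoying
    }
}
```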

Wrapping it up

There’s always more to do and explore, especially on a project intentionally kept small in scope; this one’s no exception. We’d love to experiment with meaningful depth in the timeline interface, both for element interactions and animation-specific uses. We’re curious to try moving away from the central timeline workspace mentality and instead have smaller individual timelines attached to each object. We have more questions about how to smoothly combine both motion capture and strict keyframe timing.

But, even more than all of that, we’re eager to apply what we’ve learned so far to our other projects, and to continue experimenting with new ideas. Most of these remaining curiosities and questions will very likely make a comeback in the next project.

We think animation tools in XR are a genuinely useful topic, and we’re eager to see what comes out of the community. In the meantime, check out our build. We hope you enjoy playing with it, and are able to take and expand upon some of these designs in your own projects.

We may open-source the project in the future, depending on community interest. In the meantime, if you’re interested in building on this tool, collaborating, or have some feedback for us, get in touch!

14 Comments



  1. Quite into this! I worked on some animation tools like this a while ago, too:
    and yeah, there are lots more interesting things to explore there. Keep it up.

    1. Whoa, great work Ugur! Looks like a robust tool, and like you grappled with a lot of the same questions & concepts we did. I see that in yours, adjusting the length of the curve will also adjust the total time of the animation, so you’re keeping a constant speed across the distance. Because we exposed the timeline in our tool, we decided not to adjust timing when adjusting the curve, so editing the curve will make the velocity faster or slower in that region. Pros and cons to both approaches, and still lots of open questions. Feel free to get in touch if you’d like to share notes :)

  2. Mattheiu Brooks

    August 1, 2017 9:44 PM

    Great stuff!!! Can’t wait till the XR Foundation Toolkit is released!! Would it be possible for me to get the beta of it? Here is a video I recorded messing around with Animator XR:

    1. Mattheiu, awesome!! Thanks for sharing the video, great to see it in use! We’d love to hear any feedback you have on using it; feel free to drop it here! XRFT is coming soon, we can’t wait to share it :)

  3. Cool stuff! One thing I wonder about though: You state you want to build off what others have done to keep it really focused but you still implement lots of stuff like buttons yourself. Why not use one of the amazing free frameworks like VRTK for these aspects?

    1. Thanks Robert! This project uses Unity’s upcoming XR Foundation Toolkit, which handles all the core interaction logic, so none of that was re-implemented for this project. The specific feedback (making the record button do its highlight / grow / pulse, for example) is all we implemented anew here.

  4. Source, please!

    1. Hey Erik! This project is built using our upcoming XR Foundation Toolkit, which is not yet released — once it is, we should be set to release the source.

  5. Mattheiu Brooks

    July 24, 2017 9:40 PM

    Been using Tvori, as you can see here: but I’ve been eager to use something like it with the ability to bring in my own rigged models. THIS IS IT!

    1. Thanks Mattheiu! Great Tvori work! :D We’re excited about being able to import & work with rigged models too, but for now that’s out of the scope of what we’ve done with this project.

  6. Robert Cummings

    July 24, 2017 6:01 PM

    Fun toys, but I don’t see anything replacing traditional workflows – even for VR at present. Please keep up the research though, very inspiring :)

    1. Thanks Robert! We agree, at least about devs who currently create animation with flatscreen tools — it’ll be a while before the workflows they’re familiar with are better in VR. Our main focus with projects like this, other than experimenting with UI concepts, is to build something that can bring new creators in. But I think we’ll soon be at a point where existing devs can also start an animation in a tool like this and then refine it with Timeline / existing flatscreen tools.

  7. Dave Pentecost

    July 24, 2017 5:07 PM

    I know it’s just starting, but XR on MacOS is now a thing (thanks for the builds that support it!) and will be important.

    Like EditorVR, we need MacOS versions of these tools. If you have preview versions for MacOS I would be happy to test them.

    1. Thanks Dave! XR on macOS is definitely on our minds and we’re looking forward to bringing all our authoring tools there. Can’t comment on timing now, but it’ll happen :)