
Animating inside VR: Mixing motion capture and keyframes

July 24, 2017 in Engine & platform | 7 min. read

In the Unity Labs Authoring Tools Group, we explore the future of content creation, specifically how we can use XR to make 3D content creation faster, easier, and more approachable.

We’ve shipped EditorVR, which brings Unity into the headset and minimizes VR development iteration time, and we’re developing Carte Blanche, opening virtual world and experience design to a new audience of non-professional creators. We take an experiment-driven approach to our projects, since XR software is still far from having concrete standards, and we believe there’s far more left to discover than is already known.

Since we first started working on these XR authoring tools, one topic we've been very interested in tackling is animation tools: how could we use virtual objects to quickly sketch out a sequence, and what could XR do to make 3D animation more accessible to everyone?

Goal: Keep it small & focused; build off what others have done

An animation tool could easily constitute a year-long project, but we explicitly set out to make this one quick: one developer, one month. The goal was to test out UX paradigms that can work their way back into our larger immersive projects.

We're big fans of the VR animation app Tvori. In Tvori, the user builds out a scene, and then records animation per-object with real-time motion capture from the controllers. We've loved playing with it, and with many of us having experience in flatscreen animation tools (Maya, After Effects, Flash, etc.), we were hungry to also have editable splines and a full-blown keyframe/track-based timeline. Those specific features were our focus in building this project.

Our hybrid solution

In our project, the user starts in an empty grid scene, with a simple palette of objects and a blank timeline in front of them. They can assemble the scene from the objects, and then ‘arm’ the recording, so that motion capture will begin once they start moving an object. When they release the object, recording stops, the new motion curve is visible, and the timeline shows keyframes for the beginning and end of the motion. The user can reach into the motion curve and adjust its points with a smooth falloff, and adjust the keyframes on the timeline to speed up or slow down the entire motion.
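To make the recording flow concrete, here's a minimal sketch of the "armed" motion-capture idea, assuming some external grab system calls BeginGrab/EndGrab when the user picks up and releases an object. The class and method names are hypothetical, not the project's actual API.

```csharp
using UnityEngine;

// Sketch of armed motion-capture recording: while an armed object is held,
// its position is sampled into AnimationCurves every frame.
public class ArmedMotionRecorder : MonoBehaviour
{
    public bool armed;                         // set by the record button
    AnimationCurve xCurve, yCurve, zCurve;     // captured motion
    float startTime;
    bool recording;

    public void BeginGrab()                    // called by the grab system (assumed)
    {
        if (!armed) return;
        xCurve = new AnimationCurve();
        yCurve = new AnimationCurve();
        zCurve = new AnimationCurve();
        startTime = Time.time;
        recording = true;
    }

    void Update()
    {
        if (!recording) return;
        float t = Time.time - startTime;
        Vector3 p = transform.position;
        xCurve.AddKey(t, p.x);
        yCurve.AddKey(t, p.y);
        zCurve.AddKey(t, p.z);
    }

    public void EndGrab()                      // called when the object is released
    {
        if (!recording) return;
        recording = false;
        // At this point a timeline UI could show keyframes at time 0 and the
        // final key time, and render a spline from the captured curves.
    }
}
```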

What we learned

User feedback and visual polish are everything.

A little bit goes a long way

It's tempting when building a new UI to just build it out of white cubes, or to think of user feedback (visual changes, sounds, haptics) as "just" polish. But that feedback and visual polish is hugely important, and even a little bit goes a long way in making a UI discoverable, meaningful, and testable. If we have to explain to a new tester what each button does, then we're not testing the usability of the system, and moreover we're forcing the user to keep a complicated mental model in their head, taking up bandwidth that they should be spending on actually using the tool.

In this project, any time we introduced a new UI element, we'd take a minute to actually model out a basic icon, so that testers found the element self-explanatory. We didn't think of it as "polishing the art" (it was still programmer art, after all!), but as making something that early testers could actually use and give meaningful feedback on.

Give as much feedback as possible: haptic, aural, visual

Ultimately, we find that when giving the user feedback, we should use every outlet we have. If the user is hitting a button, it should light up, make a noise, and vibrate the controller. This doesn’t just apply to the moment of impact, but to every stage of the interaction: we have hover start/stay/stop and attach start/stay/stop, so we could potentially have at least six pieces of interaction feedback per element. We try to at least provide feedback for hover, attach, and end/confirm. In 2D UI, you often get these feedback patterns for free, but in XR, you have to build them from scratch.

To help think through what feedback to give, we drew out a spreadsheet of each state (default, hover, selected, confirmation) and each element (animatable object, motion curve, keyframe, each button), so we could identify which elements were or were not reflecting different interactions.
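A rough sketch of what one cell of that matrix can look like in practice: a component that pairs each interaction stage with a visual, aural, and haptic response. The event names and the haptics call are placeholders for whatever interaction system and XR SDK you're targeting.

```csharp
using UnityEngine;

// Sketch: one feedback component per interactable element. Hook the
// On* methods up to your interaction system's hover/attach/confirm events.
public class InteractionFeedback : MonoBehaviour
{
    public Renderer visual;
    public AudioSource audioSource;
    public AudioClip hoverClip, attachClip, confirmClip;
    public Color defaultColor = Color.white;
    public Color hoverColor = Color.cyan;
    public Color attachColor = Color.yellow;

    public void OnHoverStart()
    {
        visual.material.color = hoverColor;
        audioSource.PlayOneShot(hoverClip);
        TriggerHaptics(0.1f);                 // light pulse
    }

    public void OnHoverEnd()
    {
        visual.material.color = defaultColor;
    }

    public void OnAttachStart()
    {
        visual.material.color = attachColor;
        audioSource.PlayOneShot(attachClip);
        TriggerHaptics(0.4f);                 // stronger pulse
    }

    public void OnConfirm()
    {
        audioSource.PlayOneShot(confirmClip);
        TriggerHaptics(0.6f);
    }

    void TriggerHaptics(float strength)
    {
        // Placeholder: route to OVRInput, SteamVR, or whichever
        // controller-haptics API your project uses.
    }
}
```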

Grab vs select

We've tried a few different approaches for selection versus manipulation of objects in our authoring projects, and this time made the most explicit distinction yet: the primary trigger (Touch index trigger / Vive trigger) will select an object, and the secondary trigger (Touch hand trigger / Vive grip) will manipulate it. This turned out to work really well in this project, since everything you can select can also be moved, and we wanted to avoid accidentally moving anything.

EditorVR has a similar concept, where you can move Workspaces using the secondary trigger and interact with them with the primary trigger, and select objects at a distance vs. manipulate them directly (both using the primary trigger).
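A simplified sketch of that select-vs-grab split is below: the primary trigger only ever selects, and the secondary trigger actually moves the object. The "PrimaryTrigger"/"SecondaryTrigger" axis names are assumptions (axes you'd map in the Input Manager, or replace with your XR input SDK's calls), and the hover logic is left to whatever raycast or proximity system you already have.

```csharp
using UnityEngine;

// Sketch: route primary trigger to selection, secondary trigger to grabbing.
public class SelectOrGrab : MonoBehaviour
{
    public Transform hand;          // tracked controller transform
    public Transform selected;      // the currently selected object
    Transform hovered;              // set elsewhere by hover/raycast logic (assumed)
    Transform grabbed;

    void Update()
    {
        bool primary = Input.GetAxis("PrimaryTrigger") > 0.5f;    // assumed axis name
        bool secondary = Input.GetAxis("SecondaryTrigger") > 0.5f; // assumed axis name

        // Primary trigger: selection only; never moves anything.
        if (primary && hovered != null)
            selected = hovered;

        // Secondary trigger: grab and manipulate directly.
        if (secondary && grabbed == null && hovered != null)
        {
            grabbed = hovered;
            grabbed.SetParent(hand, worldPositionStays: true);
        }
        else if (!secondary && grabbed != null)
        {
            grabbed.SetParent(null, worldPositionStays: true);
            grabbed = null;
        }
    }
}
```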

Keep UI close to the user, and let them summon it

When designing 2D interfaces, we can simply stick a UI control in the upper-left corner of the window and be done with it. Not so in VR. Especially on a room-scale setup, the user could start the app from anywhere in the room. Some apps will simply plant their UI in the center of the tracking volume, which often means the user will start out on the wrong side of the interface, or worse, inside it. The solution we've found works well in each of our authoring tools is to start any UI within arm's reach of the user and, if they walk away, let them "summon" the panel back to an interactable range.
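A minimal sketch of that "summon" behavior, assuming the main camera is the user's head; the distance and height offsets are just example values.

```csharp
using UnityEngine;

// Sketch: bring a floating panel back to within arm's reach of the user,
// in front of them and facing them.
public class SummonablePanel : MonoBehaviour
{
    public float summonDistance = 0.5f;   // roughly within arm's reach
    public float heightOffset = -0.2f;    // slightly below eye level

    public void Summon()
    {
        Transform head = Camera.main.transform;

        // Project the head's forward vector onto the horizontal plane so the
        // panel doesn't end up on the floor or ceiling when the user looks down/up.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;

        transform.position = head.position
                             + forward * summonDistance
                             + Vector3.up * heightOffset;

        // Turn the panel to face the user.
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```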

Give your UI some physicality

Flatscreen interfaces generally don't have inertia, and it can be surprising and even unpleasant when they do. A mouse is a superhuman input device for manipulating points and data, and is hardly ever thought of as a literal representation of your physical body.

In VR, the exact opposite is true: since we do very much embody tracked input devices, objects must have inertia and physicality. If we grab an object in VR and give it a hard push, it's very jarring for the object to suddenly stop in its tracks when we let go. This is obvious when we're talking about throwing a virtual rock, but less clear in the case of interface panels.

But in our experiments, and in using other VR apps that do or don't apply physicality to their UI, we've found that it's just as essential for interface panels. Of course there's a balance to strike, because you probably don't want your UI to clatter to the ground after you throw it. The solution we're using in the Animator is a simple non-kinematic, non-gravity-affected Rigidbody with some drag; you can give it a good push and it'll float away, but it will also slow down quickly and stay close enough that you won't have to go hunt down where all your UI has floated off to. To be exact, we use Drag = 8, Angular Drag = 16 (because accidental rotation when you release a panel is very annoying), which makes for a pretty subtle, but nice, effect.
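For reference, those are the same settings you'd apply in code rather than in the Inspector. The drag values come straight from the description above; the component wrapper around them is just a sketch.

```csharp
using UnityEngine;

// Sketch: configure a floating UI panel's Rigidbody with the values above.
[RequireComponent(typeof(Rigidbody))]
public class FloatyPanel : MonoBehaviour
{
    void Awake()
    {
        var body = GetComponent<Rigidbody>();
        body.isKinematic = false;    // so a push actually carries momentum
        body.useGravity = false;     // panels shouldn't fall to the floor
        body.drag = 8f;              // slows the panel down quickly after a push
        body.angularDrag = 16f;      // accidental spin on release is extra annoying
    }
}
```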

Wrapping it up

There’s always more to do and explore, especially on a project intentionally kept small in scope; this one’s no exception. We’d love to experiment with meaningful depth in the timeline interface, both for element interactions and animation-specific uses. We’re curious to try moving away from the central timeline workspace mentality and instead have smaller individual timelines attached to each object. We have more questions about how to smoothly combine both motion capture and strict keyframe timing.

But, even more than all of that, we’re eager to apply what we’ve learned so far to our other projects, and to continue experimenting with new ideas. Most of these remaining curiosities and questions will very likely make a comeback in the next project.

We think animation tools in XR are a genuinely useful topic, and we’re eager to see what comes out of the community. In the meantime, check out our build. We hope you enjoy playing with it, and are able to take and expand upon some of these designs in your own projects.

We may open-source the project in the future, depending on community interest. In the meantime, if you’re interested in building on this tool, collaborating, or have some feedback for us, get in touch at labs@unity3d.com!
