
Announcing Kinematica: Animation meets machine learning

June 20, 2018

For years, I’ve been working on a challenging question: what if you could create your game without worrying about building complex animation graphs? What if all the time spent setting up nested state machines, manually creating transition points, and matching poses could be spent instead on the art itself?

We set out to create a radically different animation system — one that provides motion synthesis that wouldn’t need to rely on any of these superimposed structures like graphs or blend trees. This technology could remove manual labor and free up animators to focus on what they love: creating beautiful artwork.

At the Unite Berlin 2018 keynote this week, we announced Kinematica — a brand-new experimental package coming to Unity later this year, developed by Unity Labs.

Unlike traditional systems, Kinematica retains everything in a single library, and it decides — automatically and in real-time — how to combine tiny fragments from that library into a sequence that matches the controller input, the environmental context, and any gameplay requests.
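The idea of matching tiny library fragments to the current input can be illustrated with a minimal sketch. This is not Kinematica's actual implementation or API; it is a generic motion-matching toy in which each library frame stores a feature vector (a few pose features plus a short future-trajectory sample), and each tick we pick the frame whose features best match a query built from the character's current pose and the controller's desired trajectory:

```python
import numpy as np

def build_query(current_pose, desired_trajectory):
    """Concatenate the character's current pose features with the
    trajectory requested by the controller input."""
    return np.concatenate([current_pose, desired_trajectory])

def best_fragment(library, query, pose_weight=1.0, traj_weight=1.5, pose_dim=6):
    """Return the index of the library frame minimizing a weighted
    squared-distance cost between the query and stored features."""
    pose_cost = np.sum((library[:, :pose_dim] - query[:pose_dim]) ** 2, axis=1)
    traj_cost = np.sum((library[:, pose_dim:] - query[pose_dim:]) ** 2, axis=1)
    cost = pose_weight * pose_cost + traj_weight * traj_cost
    return int(np.argmin(cost))

# Toy library: 4 frames, each with 6 pose features + 4 trajectory features.
rng = np.random.default_rng(0)
library = rng.normal(size=(4, 10))

# A query built exactly from frame 2's features should select frame 2.
query = build_query(library[2, :6], library[2, 6:])
print(best_fragment(library, query))  # -> 2
```

A real system would run a search like this continuously, playing a short fragment from the winning frame and blending at each switch; the weights (here `pose_weight` and `traj_weight`, both invented for this sketch) are how gameplay requests and responsiveness get traded off against pose continuity.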

Animation quality was a key pillar when designing this system, so we never alter the original animations or quality of the input data that you as an animator provide. And remember, you still get to decide (based on your own game design) what the animations of your characters are. You even have the power to tweak how the animations interact with the environment.

It’s crucial for systems like this to be able to scale. So, to push and ‘stress test’ the technology, we decided to try it with mocap data — usually the largest and most unstructured animation data you could use. We rented out a mocap studio, hired a stuntman specializing in parkour, and let him run around the gym for a few hours. This resulted in 70,000 poses and over 45 minutes of animation data — and Kinematica successfully created smooth, dynamic animations using this data.

The benefits of this system include a higher-quality, polished look; versatility because numerous variations can be determined from the same data set; and of course, not having to manually map out any animation graphs — meaning you can iterate faster and focus on your art.

Stay tuned this summer for more updates on when you can give Kinematica a try yourself!

See more from Unity Labs.

40 replies on “Announcing Kinematica: Animation meets machine learning”

So by saying this isn’t the brute-force way of motion matching, that leads me to assume it is more along the lines of this example?

If it isn’t that kind of workflow, I would love to hear or see how it is implemented, what kind of database it uses to pull all these animations from, and how it knows which animation to place. I have many more questions about how you would use this in states such as combat, locking on, etc., but I will wait for more information when it is released.

I think Kinematica’s most important keyword is unstructured motion.
I saw the dog player demo. If I just play with my dog, maybe I can get quadruped dog animations.

Looks fantastic! Wondering about the dynamic cloth you guys are using: what is that? Is it some kind of real-time dynamic bone with collision, or what techniques are you using? I’ve been using the old Dynamic Bone asset for years but am looking for something more advanced. Anyone?

I’m making some high quality creatures for the asset store and I was wondering…

How does this affect which animations I decide to make? With root motion you need to create zillions of variants for movement to get the right look for turning, strafing, walking backward, curving backward, rotating in place… then do it all again for running, and again for an aggro mode, etc. I’m wondering just what Kinematica changes in this process. I wonder if I’m wasting time producing art for a system (Mecanim) that’s about to be replaced.

And I have yet to see an answer: is this just for the humanoid rig, or is it adaptable to other generic rigs, perhaps with many more legs?

To tell you the truth, I don’t really know what Kinematica is doing. Smarter blends that let me skip transition animations? Or perhaps I can just give it a couple of in-between poses and let Kinematica do the rest of the animation, saving huge amounts of time. Perhaps it helps with impact/pain/hit animations in some way that’s better than an additive flinch. Perhaps it helps the creature/player look around with head movement. I have no idea. But my imagination says that if it can help save animation time, it’s the best invention for Unity yet. Is there any way to learn more sooner rather than later, or to participate?

You should replace the demo video. I went to watch the Unite video to understand what this really was; the video you posted seems like just a regular motion-capture animation.

That’s really some interesting news. I have two concerns, though. First of all, I don’t know if the system will be modifiable: will we have access to the source code? And second, what if we want to mix Kinematica with another system? For example, something not handled by Kinematica and only by the Animator Controller: will I be able to stop Kinematica, switch to the Animator Controller, and then switch back? If Kinematica won’t be open source, it would be helpful if it had a Kinematica playable, like the Animator Controller playable, so we can switch to other systems. Thanks.

Now THIS is what I’m talking about! Such an incredible potential feature, and something that’s really helped the AAA industry. I can’t wait to use it!


I guess it won’t, but Kinematica isn’t going to replace Mecanim, right? The two systems can coexist and even talk to each other (adding additive animations on top of Kinematica, etc.).

Motion matching is mainly for locomotion, so every other type of animation would still need to be animated through conventional means; Mecanim won’t be affected. Regardless, it would really be a bummer if it’s not open source.

Our prototype has Kinematica embedded in a Playable node. This means we can use it in conjunction with our state machine, with Timeline, etc.

Awesome! So Kinematica is using the Playable API? Great news! So you could, for instance, use Kinematica as a node and use the Playable Graph to play an Animator, blending the weight between them, right?

How is that different from simple “motion matching”? Isn’t artificial intelligence literally overkill here?

Simple motion matching works by finding the closest posture to a given pose in a specific animation; you can brute-force it in real time no problem, at modest cost, even with look-ahead and blending comparison.
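The brute-force search with look-ahead described above can be sketched in a few lines. This is a hedged illustration, not any shipping system's code: instead of comparing single poses, it compares a short window of future frames, so the chosen start point also matches where the motion is heading. The function names and window size are invented for this example:

```python
import numpy as np

def window_cost(clip, start, query_window):
    """Sum of squared distances between the query window and the clip
    frames beginning at `start` (a simple look-ahead comparison)."""
    w = len(query_window)
    return float(np.sum((clip[start:start + w] - query_window) ** 2))

def best_start(clip, query_window):
    """Brute-force scan over every possible start frame in the clip."""
    w = len(query_window)
    costs = [window_cost(clip, s, query_window) for s in range(len(clip) - w + 1)]
    return int(np.argmin(costs))

# Toy clip: 8 frames x 3 features, with strictly increasing values so
# every window is distinct. The query is frames 4..5 of the clip itself.
clip = np.arange(24, dtype=float).reshape(8, 3)
query = clip[4:6].copy()
print(best_start(clip, query))  # -> 4
```

Even this naive linear scan is cheap for a single clip; the scaling question the post raises is what happens when the "clip" is 45 minutes of unstructured mocap, which is where acceleration structures and smarter cost functions come in.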

This is really fantastic news; I can’t wait to hear more about it. I have one question, though: what will the system be like? Open source, or a closed system like the Animator Controller?
