For years, I’ve been working on a challenging question: what if you could create your game without worrying about building complex animation graphs? What if all the time spent setting up nested state machines, manually creating transition points, and matching poses could instead be spent on the art itself?

We set out to create a radically different animation system — one that synthesizes motion without relying on superimposed structures like graphs or blend trees. This technology could remove that manual labor and free up animators to focus on what they love: creating beautiful artwork.

At the Unite Berlin 2018 keynote this week, we announced Kinematica — a brand-new experimental package coming to Unity later this year, developed by Unity Labs.

Unlike traditional systems, Kinematica retains everything in a single library, and it decides — automatically and in real-time — how to combine tiny fragments from that library into a sequence that matches the controller input, the environmental context, and any gameplay requests.
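
To make that idea a bit more concrete, here is a deliberately simplified sketch of what selecting fragments from a library might look like. This is illustrative C# only, with made-up types and a simplified cost function, and is not the actual Kinematica API.

```csharp
// Illustrative only: each frame, pick the library fragment whose pose and
// short-term trajectory best match the character's current pose and the
// movement requested by controller input or gameplay code.
using System.Collections.Generic;

struct Fragment
{
    public float[] Pose;        // flattened joint transforms for this frame
    public float[] Trajectory;  // sampled future root positions in the source clip
}

static class FragmentMatcher
{
    // Squared distance between two equal-length feature vectors.
    static float Distance(float[] a, float[] b)
    {
        float sum = 0f;
        for (int i = 0; i < a.Length; i++)
        {
            float d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Returns the index of the fragment with the lowest combined cost.
    public static int SelectBestFragment(List<Fragment> library,
                                         float[] currentPose,
                                         float[] desiredTrajectory,
                                         float trajectoryWeight)
    {
        int best = -1;
        float bestCost = float.MaxValue;
        for (int i = 0; i < library.Count; i++)
        {
            float cost = Distance(library[i].Pose, currentPose)
                       + trajectoryWeight * Distance(library[i].Trajectory, desiredTrajectory);
            if (cost < bestCost)
            {
                bestCost = cost;
                best = i;
            }
        }
        return best;
    }
}
```

In a scheme like this, the selected fragment drives the character for a short while before the search runs again, which is how tiny pieces taken from unrelated takes can be stitched into one continuous sequence.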

Animation quality was a key pillar when designing this system, so we never alter the original animations or degrade the quality of the input data that you as an animator provide. And remember, you still get to decide (based on your own game design) which animations your characters use. You even have the power to tweak how the animations interact with the environment.


It’s crucial for systems like this to be able to scale. So, to push and ‘stress test’ the technology, we decided to try it with mocap data — usually the largest and most unstructured animation data you could use. We rented out a mocap studio, hired a stuntman specializing in parkour, and let him run around the gym for a few hours. This resulted in 70,000 poses and over 45 minutes of animation data — and Kinematica successfully created smooth, dynamic animations using this data.

The benefits of this system include a higher-quality, polished look; versatility, because numerous variations can be generated from the same data set; and of course, not having to manually map out any animation graphs — meaning you can iterate faster and focus on your art.

Stay tuned this summer for more updates on when you can give Kinematica a try yourself!

See more from Unity Labs.

Comments

  1. I think Kinematica's most important keyword is unstructured motions.
    I saw the dog player. If I just play with my dog, maybe I can get dog quadruped animations.

  2. Any news? I’m dying to try this! (Should make AI animation a lot easier, I hope.)

  3. Looks so fantastic! Wondering about the dynamic cloth you guys are using, what is that? Is it some kind of real-time dynamic bone with collision, or what techniques are you using? I've been using the old Dynamic Bone asset for years but I'm looking for something more advanced. Anyone!?

  4. I’m making some high quality creatures for the asset store and I was wondering…

    How does this affect which animations I decide to make? With root motion you need to create zillions of variants for movement to get the right look for turning and strafing and walking backward, curving backward, rotating in place… then do it again for running, and then again for an aggro mode, etc. I'm wondering just what Kinematica changes in this process… I wonder if I'm wasting time producing art for a system (Mecanim) that's about to be replaced.

    And I have yet to see an answer: is this just for the humanoid rig, or is it adaptable to other generic rigs with perhaps many more legs?

    To tell you the truth, I don't really know what Kinematica is doing. Smarter blends… and this allows me to not use transition animations… or perhaps I can just give it a couple of in-between poses and let Kinematica do the rest of the animation to save huge amounts of time. Perhaps it helps with impact/pain/hit animations in some way that's better than an additive flinch. Perhaps it helps the creature/player look around with head movement… I have no idea. But my imagination says that if it can help save animation time, it's the best invention for Unity yet. Is there any way to learn more sooner rather than later, or to participate?

    1. Hi Herb,

      you don't need to provide more animations than you'd need to create for Mecanim. But instead of arranging animations in a parametric blend node, you just provide the animations without any structure. Transition animations will automatically be picked up without you creating an intermediate state. Just imagine you create the same content you'd create for a Mecanim-based system, but you no longer need to create state machines, transitions or blends.

      Kinematica doesn’t use the humanoid rig, it works with any skeleton. You can animate any kind of creature with it.

      Kinematica does not create blends and it does not “create” animations, it uses the animations you feed into the system and produces poses frame-by-frame based on a description of how you want your character to move. Imagine you draw a line on the ground and as a result you get an animated character using the animations you provided.

      Unfortunately there’s not much more information available right now, we’re working hard to have an update as soon as possible.
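
      To illustrate the "draw a line on the ground" idea above: a small, purely hypothetical sketch (plain Unity C#, not the Kinematica interface) could resample the drawn path into a short future-trajectory window every frame, which is the kind of movement description a pose matcher would consume.

      ```csharp
      // Hypothetical: turn a drawn ground path into a per-frame trajectory query.
      using System.Collections.Generic;
      using UnityEngine;

      static class TrajectoryQuery
      {
          // Returns 'horizon' points ahead of the character along the drawn path,
          // spaced 'stride' samples apart; this window describes the desired motion.
          public static List<Vector3> DesiredTrajectory(IList<Vector3> drawnPath,
                                                        int currentIndex,
                                                        int horizon = 4,
                                                        int stride = 10)
          {
              var window = new List<Vector3>(horizon);
              for (int k = 1; k <= horizon; k++)
              {
                  int i = Mathf.Min(currentIndex + k * stride, drawnPath.Count - 1);
                  window.Add(drawnPath[i]);
              }
              return window;
          }
      }
      ```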

  5. When is this demo coming? It seems incredible.

  7. Sounds like a very useful tool. Can’t wait to try it out!

  8. You should replace the demo video. I went to watch the Unite video to understand what this really was; the video you posted seems like just a regular motion capture animation.

    1. Hi Chris,

      you’re absolutely right, thanks for your feedback.

  9. That's really some interesting news. I have two concerns though. First of all, I don't know if the system will be modifiable; will we have access to the source code? And second, what if we want to mix Kinematica and another system? For example, for something not handled by Kinematica and only by the animator controller, will I be able to stop Kinematica, switch to the animator controller, and then switch back? If Kinematica won't be open source, it would be helpful if it had a Kinematica playable like the animator controller playable, so we can switch to other systems. Thanks

    1. Hi John,

      Kinematica has been implemented as a playable in order for it to integrate well with the rest of the animation system, so it is up to you to switch between Kinematica and other operations, although the goal is to not have to switch between different systems. Kinematica allows you to inject specific animations if you wish to do so. Also, it is not a black-box implementation; you'll have fine-grained control over the movement and animation poses.

  10. It’s not motion matching. It’s far better (if anyone is reading).

  11. Good post. Thank you.

  12. How long do we have to wait to use this? 3 to 4 years from now?

    1. We're working hard to get Kinematica into your hands by the end of this year.

  13. Is this what is happening with Kinematica? Is it similar to what is shown in this GDC talk? https://www.youtube.com/watch?v=KLjTU0yKS00

    1. Kinematica will indeed feature a full body inverse kinematics system that can be used to procedurally modify the generated animation frames, similar to the talk that you linked in your comment.

      1. Woohoo! You guys are just too good!!! Another question, just to make sure I understand: Will you be able to train the system from key frame animation not just motion capture? If the answer is yes, that would allow for some stylized animation to be procedurally generated.

      2. Will this work for non-humanoid setups like in the video? My artists have some… interesting creature designs.

  14. Will this support authored animation data for stylized / NPR games?

    1. Kinematica supports motion captured animation data as well as handcrafted animation clips. But we haven’t yet tested the system on highly stylized animation data and presumably there will be certain cases where Mecanim will be the better tool for the job.

  15. Now THIS is what I'm talking about! Such an incredible potential feature, and something that has really helped the AAA industry. I can't wait to use it!


  17. I guess it won't, but Kinematica isn't going to replace Mecanim, right? These two systems can coexist and even talk to each other (adding additive animations on top of Kinematica, etc.).

    1. Motion Matching is mainly for locomotion, so every other type of animation would still need to be animated through conventional means; Mecanim certainly won't be affected. Regardless, it would really be a bummer if it's not open source.

      1. Kinematica is not Motion Matching and has been built from the ground up to be able to support any kind of movement – not just locomotion. We already demonstrated Parkour with stable and precise environment contacts. Stay tuned for more demos coming later this summer where we will be showing a full climbing system, melee combat and physically simulated characters.

        1. So… Is Kinematica going to replace Mecanim?

      2. Pierre-Paul Giroux

        Our prototype has Kinematica embedded into a Playable node. This means we can use it in conjunction with our StateMachine, with Timeline, etc.

        1. Awesome! So Kinematica is using the Playable API? Great news! So you can, for instance, use Kinematica as a node and use the Playable Graph to play an Animator, blending the weight between them, right?
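
        For readers wondering what such a setup could look like, here is a minimal sketch using Unity's existing Playables API. Kinematica's own playable type is not public yet, so an AnimationClipPlayable stands in for it, and the component and field names here are placeholders rather than anything official.

        ```csharp
        using UnityEngine;
        using UnityEngine.Animations;
        using UnityEngine.Playables;

        [RequireComponent(typeof(Animator))]
        public class BlendWithAnimatorExample : MonoBehaviour
        {
            public RuntimeAnimatorController controller;   // regular Animator Controller
            public AnimationClip kinematicaStandIn;        // stand-in for a Kinematica playable
            [Range(0f, 1f)] public float kinematicaWeight = 0.5f;

            PlayableGraph graph;
            AnimationMixerPlayable mixer;

            void OnEnable()
            {
                graph = PlayableGraph.Create("KinematicaBlend");
                var output = AnimationPlayableOutput.Create(graph, "Animation", GetComponent<Animator>());

                // Input 0: a standard Animator Controller playable.
                var animatorPlayable = AnimatorControllerPlayable.Create(graph, controller);

                // Input 1: whatever playable Kinematica ends up exposing; a clip stands in here.
                var kinematicaPlayable = AnimationClipPlayable.Create(graph, kinematicaStandIn);

                mixer = AnimationMixerPlayable.Create(graph, 2);
                graph.Connect(animatorPlayable, 0, mixer, 0);
                graph.Connect(kinematicaPlayable, 0, mixer, 1);

                output.SetSourcePlayable(mixer);
                graph.Play();
            }

            void Update()
            {
                // Cross-fade between the two sources by adjusting the mixer weights.
                mixer.SetInputWeight(0, 1f - kinematicaWeight);
                mixer.SetInputWeight(1, kinematicaWeight);
            }

            void OnDisable()
            {
                if (graph.IsValid()) graph.Destroy();
            }
        }
        ```

        Setting either weight to zero turns the mixer into a hard switch, so the same graph can either blend the two sources or swap between them entirely.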

  18. How is that different from a simple “motion matching”? Isn’t artificial intelligence literally overkill here?

    Simple motion matching works by finding the closest posture to a given pose in a specific animation; you can brute-force it in real time no problem at a modest cost, even with look-ahead and blending comparison.

    1. Kinematica is different from “Motion Matching” in that it supports any kind of movement – not just simple locomotion. As an example we demonstrated during our keynote demo various Parkour moves with stable and precise environment contacts.

      Also, just searching for matching poses in a brute-force fashion at runtime does not scale and would be prohibitively expensive. Kinematica executes in a fraction of a millisecond regardless of the amount of animation data.
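
      For illustration, the brute-force search being discussed looks roughly like the generic nearest-pose loop below (not Kinematica's implementation). Its per-frame cost grows linearly with the number of poses, which is why a naive runtime search over a set like the 70,000-pose parkour capture mentioned in the post is expensive, and presumably why some form of precomputed acceleration is needed to stay within a fraction of a millisecond.

      ```csharp
      static class PoseSearch
      {
          // Brute-force nearest-pose search, shown only to illustrate the cost:
          // O(poseCount * featureCount) every query, so it scales with library size.
          public static int FindClosestPose(float[][] libraryPoses, float[] queryPose)
          {
              int bestIndex = -1;
              float bestDistance = float.MaxValue;
              for (int i = 0; i < libraryPoses.Length; i++)
              {
                  float distance = 0f;
                  for (int j = 0; j < queryPose.Length; j++)
                  {
                      float d = libraryPoses[i][j] - queryPose[j];
                      distance += d * d;
                  }
                  if (distance < bestDistance)
                  {
                      bestDistance = distance;
                      bestIndex = i;
                  }
              }
              return bestIndex;
          }
      }
      ```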

  19. This is really fantastic news, can't wait to hear more of it. I have one question though: what will the system be like? Open source? Or a closed system like the animator controller?
