
UPDATED Dec 12, 2017: We have made significant changes to our plans for the input system. Please read our forum post for details.

On the Input Team we've been working on designing and implementing a new input system. We've made good progress, and though there's still a long way to go, we want to get you involved now.

We've built a new foundation for working with input that we're excited to show you, and we want to continue development with your feedback on the design as it exists now and as it evolves.

Development process

The new input system will consist of two parts. The low-level part is integrated into the C++ core of Unity. The high-level part is implemented in managed (C#) code that will be open source in the same way as, for example, the UI system.

Our development process for the new input system is to design and implement large parts of the high-level system first. Initially this is built on top of the current input system that already exists in Unity. For now, we call this the input system prototype. Later, once the new low-level core is more mature, we'll change the high-level part to be based on the new low-level system.

This means that the current high-level system (the prototype) lacks specific features that depend on the new low-level core, such as robust registration of input devices connecting and disconnecting while the game is running. However, many features of the design can already be used and tested, and those features in particular are what we want early feedback on.

Working with you

Here’s how we’d like you to get involved:

  1. Learn about the design of the new system
  2. Try out the new input system prototype for yourself
  3. Tell us about your experience

Learn about the design of the new system

Input that works well for a plethora of different use cases is a surprisingly tricky matter. We’ve prepared some resources for you to learn about how the new design attempts to address this.

First of all, we’ve created this video covering the design of action maps and player management. It’s a good introduction to the new design.

We’ve also prepared a site with more information about the design, including a Quick Start Guide.

Head to the Experimental New Input System site to learn more.

Try out the new input system prototype for yourself

We have a project folder that contains the input system prototype as well as a demo project which uses it. It can be used with regular Unity 5.3 without needing a special build.

Download it from the Experimental New Input System site.

The input system prototype can be tested with other projects by copying the folder Assets/input-prototype into the Assets folder of another project. (Please create a backup of your project first.)

Tell us about your experience

What do you think about the design? How does it work (or not) for your project? Anything that’s confusing or unclear? For now we’re interested in discussion around the design of the system. We don’t need bug reports quite yet at this stage in development.

Head to the New Input System Forum to discuss the new input system.

We’re looking forward to working with you!

111 Comments



  1. Oliver Jeskulke

    June 7, 2016 at 3:26 pm

    It would be super cool if the new system supported two mice for local multiplayer, like in The Settlers. So far I've only found some hacks that I still have to try in Unity, but I could sleep much better with official support :) Thanks!

  2. Just a note about the current version of the prototype: it doesn't work nicely with hot reload.
    I hope this will be fixed. Thanks.

  3. Haris Ali Baig

    May 26, 2016 at 1:27 pm

    Hi,
    I have a problem: I'm integrating gamepad controls in my game, but "Button A" doesn't necessarily map to joystick button 0 on every gamepad, so I have to do the integration separately for different gamepads.
    Are you working on that problem too?

  4. I'd like to echo the comments here asking for some focus on touch input. Unity should have built-in gesture recognizers, preferably implemented in the same way as on iOS.

  5. It would be awesome if this covered not only input but also the special generic features controllers have, like rumble. For example:

    if (Player.Playerinput.hasController)
    {
        Player.Playerinput.Rumble();
        Player.Playerinput.Playstation4ControllerColor = Color.red;

        // or invert a controller axis at runtime
        Player.Playerinput.InvertAxis(Left.Stick); // <- this may already be possible in the prototype, I don't know ^^
    }

  6. It’s great that you guys are working on a better Input system, I’m loving this.

    I just wanted to bring some awareness to a feature that I incorporate frequently in my Unity projects and would love to see it supported natively by your Input system. I’m very interested in allowing the user to map different Modifier states to different actions. Here is an example of some action mappings:
    – CastSpell1 = Q
    – CastPetSpell1 = Shift-Q
    – QuestLog = Control-Q
    – AutoRun = Shift

    As you can see, there is an overlap of keys and modifiers, but the resulting actions are definitely distinct. Not all games on the market right now allow the user to map their keys in this manner, but some do (one example is Tom Clancy's The Division).

    This might be beneficial to controllers too, for example: if a user would like to use the Xbox One Elite Controller Paddles as a shift state for other buttons (A vs Paddle+A).
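
    For illustration: with the current Unity API this kind of modifier handling has to be written by hand every time. A minimal sketch (the action methods called here are made up):

        // Manual modifier handling with today's UnityEngine.Input API.
        bool shift = Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift);
        bool ctrl = Input.GetKey(KeyCode.LeftControl) || Input.GetKey(KeyCode.RightControl);

        if (Input.GetKeyDown(KeyCode.Q))
        {
            if (ctrl) QuestLog();            // Control-Q
            else if (shift) CastPetSpell1(); // Shift-Q
            else CastSpell1();               // plain Q
        }
        // AutoRun on bare Shift would additionally need "Shift released without Q" handling.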

  7. We need support for more joystick buttons.
    Currently there is a built-in limit in the Input Manager: joystick buttons only go up to button 19.

  8. Howdy guys, congrats on improving Unity and your continued progress. A few questions: first, will the older input system still be usable? And will projects that are older or were created using the old input system have to be updated to keep working?

    1. While the details are still to be figured out, what is certain is that there has to be support for the old API, or we'll break just about every Unity project out there (plus invalidate tons of tutorials and articles). Whether this support is provided by just keeping the old API alive, by making it a shim on top of the new system, by having the script updater rewrite API usages automatically, or by other means… that part isn't clear yet. There are, however, good arguments to be made in favor of just keeping the existing API going, at least for some time.

  9. Hello, thanks for the good news!
    Besides all the talk about great new architecture, multiplayer abilities, etc. – will it please *finally* support simple, ordinary touch input on Windows, the same way it does on Android etc.?

    In fact, I didn't believe it was missing when I first tried to use it, read the forums, and found the various third-party solutions tackling this deficiency…

    But of course thanks for the development effort anyway… :-)

    1. Rune Skovbo Johansen

      April 20, 2016 at 1:33 pm

      Touch input on Windows is in our plans, yes! It’s not in the prototype since it requires the new back-end.

      1. Great, thank you for the fast response and the good news! :-)

  10. This might be an unpopular opinion, but I'm not very fond of built-in complex high-level systems. As something you can add to a project, though, it would be awesome. As long as I'm able to build my own from the available information I'm satisfied, and I do miss having the information of "this input came from this device" so I can do custom tutorial messages, for example; so this new system already has me happier just for that. But having this entire system out of the box makes me a bit nervous. It will surely help a lot of projects, but the most unusual projects will always see these built-in high-level systems as another obstacle, another thing to remove before finding the blank canvas. I'm sure that if I want to make a game that has the player holding a gamepad with one hand and a mouse with the other it will still be possible, but I might need to jump through more hoops to do it than with the current system.

    I guess what I'm trying to say is that what attracted me to Unity in the first place was not the number of features it had. If that were the case I would still be using UDK or Crytek. No, it was the blank-canvas thing: how easy and flexible it is to build ANY game on it – in fact, even non-games. And any new feature that brings even a tiny amount of constraint and less flexibility is no good.

    But I know I shouldn’t be nervous! Unity clearly still retains its initial philosophy and I’ll always have the option of creating my own crazy input systems however the hell I want from basic exposed information. Right?

    1. Rune Skovbo Johansen

      April 20, 2016 at 12:16 pm

      It seems like you're comparing with the current input system in Unity, but that is high-level too and has a form of actions as well – like "Horizontal", "Vertical", "Fire", etc. from the default setup. It's just that its design is really bad for multiplayer, for getting info about which button an action came from, for rebinding at runtime, and so on. We are solving those problems with the new design, but not really making something much more high-level than the old system.

      But in any case, use of the high-level system is optional. You will be able to access raw input events if you prefer that. Not only that, but the design consists of many layers, and you can choose how many of the high-level layers you want to use.

      Want to use ActionMaps but not the PlayerHandles and PlayerInput objects? You can do that, you’ll just need to keep track of all device assignment and event propagation yourself.

      Want to make use of the device standardization feature (so that different brand and models of gamepads have consistent mapping) but don’t want to use ActionMaps? Sure, you can query state directly from standardized device classes.

      Since the system will be open source like the UI system you can also customize it completely to your liking. People prefer to work in different ways, and we are doing our best to make sure there will be something for everyone.
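
      As a rough sketch of that lowest layer, querying a standardized device directly might look something like this (the names here are approximations for illustration, not the final prototype API):

          // Hypothetical: poll a standardized gamepad directly, bypassing
          // ActionMaps, PlayerHandles and PlayerInput entirely.
          var gamepad = InputSystem.GetMostRecentlyUsedDevice<Gamepad>();
          if (gamepad != null && gamepad.action1.isHeld) // action1 = A / Cross under standardization
          {
              Jump();
          }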

  11. After seven years of waiting and more than 2,000 feedback votes, it's good that something is happening, but this one-man, one-month effort you're showing is barely enough – especially when the public roadmap lists such an important feature as "In progress, timelines long or uncertain".
    Although you say Unity will change this, I feel a lot of reluctance about the changes that need to be made on the C++ side, especially exposing the low level to the C# API; that forces input system developers like myself to marshal OS-level C++ into C# ourselves.
    For example, capturing device connect and disconnect is, at least at the high level, already covered in the proposed API.
    (By the way, you can simulate it with Input.GetJoystickNames() as long as you don't have two devices with the same name.)
    Two main points:
    1) As a customer, I need to remap and/or set InputManager.asset settings like sensitivity or gravity, and SAVE them! As far as I can see, the proposed API can remap at runtime but can't save it. What do you have in mind? PlayerPrefs?
    2) As a developer, I want to connect a device, create a profile, and map it to actions by MOVING or CLICKING (long, double…) or a combination. Does the current system have plans to support COMBOS, and will EDITOR MODE offer device mapping, instead of a huge ten-screen popup to map keyboard keys?

    I like the ActionMapInput generator, because you get IntelliSense in code, but it's boilerplate if you expect people to write
        if (actionMapInput.isHeld) // do something
    I'd rather have something like actionMapInput.isHeldUnityEvent so I can subscribe a handler in the editor to do something. UNITY EVENTS!
    Device profiles definitely shouldn't be HARDCODED but ASSETS containing just data, not code creating device instances (new GamePad()).

    It would also be good to have something like public class AnimationCurve : InputControl which, no matter whether the actual input is discrete or analog, will calculate a value according to a curve.

    You also need to track the connected device "PORT", so if the user changes the device port there's no need to reassign.
    I hope the code below won't be HARDCODED, so that, for example, I won't have to include a mouse component in a mobile build:
    go.AddComponent();
    go.AddComponent();
    go.AddComponent();
    go.AddComponent();
    go.AddComponent();

    I should be able to support any device your C++ side doesn't support by just adding go.AddComponent(); and the rest would work.
    Put the code on GITHUB so we can track changes and progress.
    P.S. Contact other input system developers, not just "Patrick" – like Guavaman…

  12. First of all: it’s great to see that the Unity team is finally doing some work in this area – it was a major pain for years.

    I watched the YouTube video. Basically, all the planned features seem nice and dandy. However, in certain technical aspects the implementation (or at least the explanation in the video) doesn't go far enough. There are two major areas where I would like to see more initiative:

    1) Control re-mapping. Especially in PC gaming, there’s always the obligatory options menu where the player can (usually freely) map keys to actions as he sees fit. For example, I might want to re-map movement from WASD to the arrow keys, in the game, at runtime. That was almost impossible to do with the old input system without considerable effort. This, of course, also involves checking for potential collisions, and saving the settings for the next game session (i.e. this is not just an in-memory thing, it has to be persisted). How does the new system handle this?
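
    For what it's worth, the persistence half is straightforward once bindings are serializable; a minimal sketch of the PlayerPrefs route (the serialization format is left open, only the PlayerPrefs calls are standard Unity API):

        using UnityEngine;

        // Hypothetical storage for rebound controls via PlayerPrefs.
        public static class BindingStorage
        {
            const string Key = "controlBindings";

            public static void Save(string serializedBindings)
            {
                PlayerPrefs.SetString(Key, serializedBindings);
                PlayerPrefs.Save(); // flush to disk for the next session
            }

            public static string Load(string defaultBindings)
            {
                return PlayerPrefs.GetString(Key, defaultBindings);
            }
        }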

    2) On the coding side of things, I've found "if-then-else" cascades in the update function of a script that do nothing other than check the current input state and call functions to be tedious, hard to maintain, and error-prone. Instead, I would like to see an annotation-based system like this:

    [BindToInput(actionName = "jump")] // call this method when the game object is active and jump input occurs
    public void Jump()
    {
        // ... player jump code goes here
    }

    This completely eliminates the need to check for input in the update function, as it is a "push"-based system, whereas currently input is handled in a "pull" fashion.
    It can also suit more complex scenarios:

    [BindToInput(actionName = "fire")]
    [KeyModifier(actionName = "alternative")] // do not call this method unless the "alternative" fire key was held when the fire key was pressed
    [HoldForMillis(2000)] // do not call this method unless the fire button was held for 2 seconds
    public void FireMissile()
    {
        // ... code for firing the missile goes here
    }

    1. Rune Skovbo Johansen

      April 18, 2016 at 10:31 am

      Rest assured we will have runtime input remapping! (And we do have it to some extent in the prototype already – see the FAQ.)

      We do plan to have a push-based way of getting input. Your attribute-based idea is interesting; something for us to consider. One disadvantage of that approach compared to, for example, delegates is that it isn't easy to change things dynamically. The simple syntax with no explicit callback registration in a Start method or similar is nice, though.

      1. Hello again,

        thanks for the response. It’s good to see you being so close to the community!

        Regarding the attribute-style "push notifications": quite a long time ago I implemented a prototype of such a system in Unity 4.x (or was it even 3.x? I can't recall). I had a base class that extended MonoBehaviour, and in Awake it did a method scan on this.GetType(), checking for annotated methods and automatically creating and registering delegates. So this CAN be done in a generic way. I would even go one step further and state that the system could be vastly more powerful if the C++ Unity core were in control of it.

        At least on desktop, this C#-based system was highly efficient (with respect to performance) and allowed for great flexibility. You could even assign certain time-based input patterns to methods via attributes. For example, you could annotate one method as "single click on button A" and another as "double click on button A", and for any input sequence only one of them would fire (i.e. a double click was not misinterpreted as two single clicks). There were also "on button down" triggers that only fired during the frame when the button indeed switched state from up to down, "on button held" for a given time, and "on button up". These things are EXTREMELY hard to get right in, say, an update function, and are very hard to encapsulate in helper classes, because a) they require time-related information (thus having internal state), and b) it must be ensured that they are called every frame, or the trigger won't fire. Neither of these things do you want to handle in every game object separately.
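
        Condensed, the scanning part looked something like this (the attribute and the dispatcher are illustrative reconstructions, not my original code; the reflection calls are standard .NET):

            using System;
            using System.Reflection;
            using UnityEngine;

            [AttributeUsage(AttributeTargets.Method)]
            public class BindToInputAttribute : Attribute
            {
                public string actionName;
            }

            public abstract class InputBoundBehaviour : MonoBehaviour
            {
                protected virtual void Awake()
                {
                    // Scan this instance for annotated methods and register delegates for them.
                    BindingFlags flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
                    foreach (MethodInfo method in GetType().GetMethods(flags))
                    {
                        foreach (BindToInputAttribute attr in
                                 method.GetCustomAttributes(typeof(BindToInputAttribute), true))
                        {
                            Action callback = (Action)Delegate.CreateDelegate(typeof(Action), this, method);
                            InputDispatcher.Register(attr.actionName, callback); // hypothetical dispatcher
                        }
                    }
                }
            }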

        If you need even more flexibility, you could assign an "InputProfile" attribute to an input-annotated method, so that the method is only called by the input system while a given input profile is active. Given that the input is already based on abstract actions ("fire", "jump"…) rather than concrete keys (A, B, X, Y…), I honestly doubt there is a need for even more flexibility; I can't think of any use case for it right now.

        All that being said, I absolutely do NOT want to see the pull-based input removed from the API. Much like immediate mode GUI vs. retained mode GUI, it has its merits, and it should definitely stay. However, a push-based alternative seems desirable in a lot of scenarios, especially when it comes to time-based pattern detection.

      2. Benjamin Langerak

        June 8, 2016 at 11:41 pm

        Would love to see delegates being used for input actions to allow for more fine-tuned, dynamic control of input handlers. It would also be really nice to see input decoupled completely and injected at start. Then we could more easily write tests that handle input types we may not have the hardware for (i.e. if I'm writing Vive input code but on my laptop, away from my Vive).

        I've written a very simple abstraction layer that needs lots of work, but I would love to see this integrated into Unity. Currently I do things like #if UNITY_EDITOR _inputModel = new KeyboardInput(); #endif and have a bunch of that logic for various platforms. The models also all work through an interface so I can decouple them from the actual game logic. My system is not ideal but at least allows for faster testing in the editor. Would love to see a similar, more refined approach from Unity!
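
        Stripped down, the idea looks something like this (all names illustrative):

            using UnityEngine;

            // Game logic depends on this interface, never on UnityEngine.Input directly.
            public interface IInputModel
            {
                Vector2 Move { get; }
                bool JumpPressed { get; }
            }

            public class KeyboardInput : IInputModel
            {
                public Vector2 Move
                {
                    get { return new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical")); }
                }

                public bool JumpPressed
                {
                    get { return Input.GetKeyDown(KeyCode.Space); }
                }
            }

            public class PlayerMotor : MonoBehaviour
            {
                IInputModel _inputModel;

                // Inject a real or fake input model; tests can pass a scripted one.
                public void Inject(IInputModel inputModel) { _inputModel = inputModel; }

                void Update()
                {
                    if (_inputModel != null && _inputModel.JumpPressed)
                        Debug.Log("jump");
                }
            }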

  13. Linus Ahlemeyer

    April 15, 2016 at 4:23 pm

    I hope you will not ditch the current input system, though?

    1. Nope, won't happen. Backwards compatibility is a must-have. The details of how that will work are still unclear, but what's certain is that even at the point where the new system is the go-to solution, you'll be able to load a project using the old API into Unity and it'll work just fine.

  14. Be sure that I’ll do hard test.
    https://www.youtube.com/watch?v=Pir49v16aOQ

  15. I’m just in the beginning phase of learning Unity and the first thing I noticed was the weird way of handling input. I was having a hard time trying to accept it, but then as the weeks went by, this blog post made me smile again.

    Keep up the good work.

  16. I was wondering… Is it possible that the design philosophy behind this new input system is part of the bigger plan to change the way scripting works in Unity (which was very briefly mentioned at Unite Boston)? I'm referring to the fact that input is now handled by an actual input component instead of just being an "if()" check in MonoBehaviours.

    Instead of having every single thing jammed into MonoBehaviour, will we maybe see collisions being handled by events generated by Collider components, and rendering events (OnPostRender(), etc.) being handled by events generated by Renderer components, and so on? I think this approach would be wonderful.

    1. While not directly related to the work on the input system, something along the lines of what you describe is being investigated as part of looking at components in a wider perspective.

      1. Awesome! Looking forward to hearing more about this

  17. First off, thank you for the hard work. Please ignore the trolls that don't understand all the work that's going on.

    You commented to Robert Cummings that you have a form of device mapping. Looking at the Design Overview site, it looks like you're doing something similar to Gallant Games' InControl. If you haven't taken a look at what Patrick is doing, I'd highly recommend that you at least take a look. It's been a great plugin.

    http://www.gallantgames.com/pages/incontrol-introduction

    1. Rune Skovbo Johansen

      April 15, 2016 at 3:08 pm

      Thanks Joel! Yeah, we know about InControl and have been getting input from Patrick on the new input system as well.

  18. Bug report: when I enable Auto Graphics API in Player Settings, Unity's main fog (Window > Lighting > Fog) works and Global Fog (Optimized) doesn't – and vice versa!
    Both work in the Unity editor, but not on Android devices!?

    1. The best way to report bugs to Unity is to use the "Help > Report a Bug" option in Unity.

      Make sure to follow the bug-report guidelines to increase the likelihood that your bug report is looked at as soon as possible; see https://unity3d.com/unity/qa/bug-reporting for details.

  19. Are you deaf or what, Unity?! Even though mobile devices are the primary sector of your business, you always put touch and other mobile features in the back seat.
    SICK OF THAT!!

  20. Any plans for tvOS (Siri remote) support?

    1. Rune Skovbo Johansen

      April 13, 2016 at 3:35 pm

      The new input system will be the basis for input in Unity. Once it’s out, if Unity supports a platform, then the input of that platform will also be supported through the input system. And for things not supported out of the box, you or platform owners can extend it yourself. You can already extend it now in the prototype and implement tvOS support that way for now.

  21. Please stop and hire the guy who did Rewired to help; otherwise I feel like users will end up writing plugins on top of the new input system just like they did with the old one.

  22. Completely unneeded!
    Please don't change anything; we already have to study for a few hours daily to catch up with all the changes you make, from fixes to new features.
    Please work on easier physics and realistic fluids and cloth.

    1. Agreed.

  23. Can we play with two keyboards and two mice (one keyboard and mouse for one player and a second set for the second player)?

    1. Rune Skovbo Johansen

      April 13, 2016 at 1:54 pm

      See one of the other comments below (and its reply) where this was already covered.

    2. Ok thanks. Sorry for my lazy question.

  24. Remember that in games, players might want to customize the keys they use (e.g. a player doesn't want to fire using the spacebar, so he goes to the config screen and changes it to the "A" key). I hope this new system supports these changes smoothly at runtime.

    1. Rune Skovbo Johansen

      April 13, 2016 at 1:17 pm

      Yep, have a look at the FAQ.

  25. My primary platform is mobile devices, and cross-platform input is working well for me. Should I continue using that, or will this give better cross-platform control when it comes to touch controls?

  26. Once you get to fleshing out the new low-level core, can we please make sure to include raw mouse input? For most games this isn't a big deal, but as part of a company running an FPS game, we get requests for this quite often and haven't gotten around to addressing it on our own yet.
    I'm referring to the "mouse smoothing/acceleration" that Windows and other OSes may apply, and to being able to bypass it without having to ask our players to manually disable that setting in the OS.

    Glad to see this system finally being updated. Especially the part about runtime re-binding ;)

  27. Seems promising.

    With the current system, gamepad input is not received if the game view is not focused. This is fine for a release version of the game, but it's not convenient at edit time when you need to debug: triggering a breakpoint removes focus from the game view and potentially changes the state of what you want to debug. The same goes for just editing properties in the inspector view while playing with the gamepad.

    When using XInputDotNet instead of the current system, gamepad input is received no matter which window is focused. By wrapping it, I can simply choose whether I want to update the inputs when the application is focused or not. It would be nice to be able to choose that as a setting (like SetCooperativeLevel in DirectInput).

    Thanks!

    1. Rune Skovbo Johansen

      April 13, 2016 at 11:58 am

      There is probably no way to have this work sensibly for keyboard and mouse, but we’ll look into having it work for other devices like gamepads.

  28. I do have one question: will this mean that Unity might be switching away from using an array for connected controllers/gamepads? The reason I ask is that we found that to be a colossal issue with our student project last year (handling disconnection of controllers/gamepads), as they were stored as an array by the engine.

    1. Rune Skovbo Johansen

      April 13, 2016 at 11:55 am

      In the new design, input devices are objects and you can query the state through these device objects (already in the current prototype), as well as get notifications about connected and disconnected devices (not in the prototype but planned for the input system).

  29. Do I still need to use Update, or can I just subscribe to input events?

    1. Rune Skovbo Johansen

      April 13, 2016 at 11:51 am

      Subscribing to input events can mean many things. There are the raw events relating to different input devices. You can already subscribe to those in the prototype, but they don't tell you anything about which player or action they correspond to.

      Other things it could mean:
      – Events that relate to a specific player (but still in the form of actions relating to a device, e.g. a key press or axis move, etc.)
      – Events for actual changes in action state. These are abstracted away from input devices and instead tell you about a change in an action in a specific ActionMap for a specific player. We're planning to add something like this.

      What in particular do you have in mind?

      1. Subscribing to events like:
        GetComponent().OnTouched += delegate;

  30. Can "PlayerHandle" be something a bit clearer? Is there an actual "Player" class? If not, why not just call it "Player"? Or, to make it clearer still, something like "PlayerInputBinding"?

    I’m also a bit confused by (~16:28) in the video where it seems to show that a single GameObject would have the Player Input Script component as well as the Movement script for *both* a Vehicle and Player (Biped I assume). Is the idea that this example is supposed to be something that can be both a Biped and Vehicle or is it just an invisible “Player” GO that references another entity based on what it is trying to control at that time? i.e. a biped that enters and controls a vehicle.

    1. Rune Skovbo Johansen

      April 13, 2016 at 11:43 am

      The way things are explained in the video is a bit abstract. The two input-handling scripts could be on the same GameObject or on different GameObjects. What matters is that they both reference the same PlayerInput component.

  31. As long as it works like the Rewired plugin, it'll be a great start. I wonder how often (if ever) Unity collaborates with plugin makers to turn their plugins into built-in features.

    I feel kinda silly for saving up all month for an input plugin just for this to pop up a day later.

  32. More emphasis on touch and gesture input please, it’s a pain right now.

  33. What about Windows touch support? It's a frequently requested feature. Input.GetTouch only works on mobile platforms…

    1. I agree that this should be built into the Unity input system, particularly with the Win8/10 focus on touch screens. In the meantime, check out the plugin GenTouch; it solved the problem for us.

  34. When will Unity start spell/grammar checking these blog posts before posting?

    1. Rune Skovbo Johansen

      April 12, 2016 at 10:51 pm

      If you find anything that slipped past the people who already read through it, feel free to let us know. :)

    2. Only when they foolishly begin to think such pedantry holds great value. Stay warm and human Team Unity!

    3. Michael E Chugg

      April 13, 2016 at 1:04 am

      He may be trying to point out that the "already now" in "we want to get you involved already now" doesn't flow grammar-wise. But hey, my grammar is really bad and this is a tech blog; my expectations for grammar are low in these parts. It's not like Unity is a book publishing company. XD

  35. What if you want to mix gamepad AND mouse?
    Using a Nunchuk or a PlayStation navigation controller together with a mouse is a pretty good control scheme.

    1. Rune Skovbo Johansen

      April 12, 2016 at 10:47 pm

      You can create control schemes with whatever combination of input devices you want.

  36. Feels similar to a lightweight version of Rewired or InControl. Very glad to see you guys responding to that very large need, as the current system is a bit of a mess.

  37. matthew radford

    April 12, 2016 at 9:08 pm

    Great job guys! I coded something similar for a past project; really wish this had existed then. It seems so much more thought out than mine. Thanks for responding to your users!! We love Unity!

  38. Looking promising!

    Question: Is there a way to handle multiple mice/trackballs? This has been a beast to tackle in Unity up to this point, since Windows treats all plugged-in pointing devices as the same device. I had to utilize external DLLs to distinguish between them in my Unity application.

    Thanks for the information and the update!

    1. Yes and no :)

      The system itself has no restriction on the types and number of devices hooked into it. 5 keyboards or 10 mice, it doesn’t care.

      However, from what I understand, you care most about the platform actually detecting that there's more than one pointing device and properly registering those as multiple instances. Unfortunately, as you say, Windows pointer messages can come from different devices yet will look like just "the" pointer to the application. And we pick up pointer messages in a way where the origin isn't even evident.

      So, all I can promise at this point is that we’ll take a look and see. Even then it would probably be something that is supported on a per-platform basis only.

      1. Thanks for the reply!

        I can point you to a place where someone seems to have nailed this implementation (from what I can gather, and from my non-low-level perspective): https://alastaira.wordpress.com/2015/08/04/multiple-mice-input-in-unity/#comment-5021 . It works really well; that comment has the source code for the DLL implementation, and the solution has Mac, Windows, and Linux projects in it.

        This is all rather timely, as I just made a client application for a large aquarium here in the States, and we're having an issue with one of the trackballs getting disconnected for some reason. I was using a different implementation, but I switched to this one today, and it seems able to re-init itself and re-acquire the lost device. I did a quick test where I pulled a mouse out of my computer, replugged it, and manually triggered the re-init, and the input kept working as if I had never unplugged it.

        Keep up the good work!

        1. Anomalous Underdog

          April 13, 2016 at 3:27 am

          Funny you mention that – that's what I used for my Global Game Jam 2016 game, and it worked well enough. http://globalgamejam.org/2016/games/morning-ritual-simulator-2016

      2. Gareth Lockett

        April 17, 2016 at 5:45 am

        It would be really useful to be able to get some form of unique hardware id for each input device. That way we could allow players to create profiles for specific devices within our games :)


  39. Is there a way to subscribe to an input event instead of checking for input in Update? Also, is this new system running on the main loop? Both practices are really bad: if your game's rendering slows down, your input will also be waiting for the rendering to finish.

    1. Is there a way to subscribe to input Event instead of checking for input on Update?

      In the prototype you can hook yourself into the event tree (basically a tree of subscribers) and then you'll get callbacks. The details of event distribution are still a bit up in the air, as we're trying to make something that works in a bit of a wider context than just input, but I don't think the fact that you can get notifications is going to change.

      Also, is this new system running on the main loop?

      Event processing, yes. Event gathering, not necessarily. We completely agree that frame-rate dependence in input is bad. And the old system was inherently tied to frame rate.

      What we want is to have event gathering (e.g. when we poll gamepads) happen off the main thread where possible and where it makes sense. Where we already have properly timestamped events we can pick up from the OS instead of having to poll, that doesn't make sense, but where we can't, it definitely does. Doing it off the main thread will allow it to run at a higher frequency than the frame rate and pick up events from polled devices with better granularity.

      1. Too bad you couldn’t answer my question from APRIL 12, 2016 AT 8:03 PM in this thread.

        But you mentioned later:
        What we want is to have event gathering (e.g. when we poll gamepads) happen off the main thread where possible and where it makes sense. Where we already have properly timestamped events we can pick up from the OS instead of having to poll, that doesn't make sense, but where we can't, it definitely does. Doing it off the main thread will allow it to run at a higher frequency than the frame rate and pick up events from polled devices with better granularity.

        DENNIS APRIL 12, 2016 AT 8:03 PM / REPLY
        Is it possible to run the input event system off the rendering thread on mobile? That way we can set the target frame rate to 1 FPS to preserve power and set it back to 30 once the screen is tapped or a button is pressed. And Unity will be even more awesome for developing mobile apps :)

        Does this mean it will work for touch input on mobile too?

  40. I hope this new input system will allow input to be emitted/simulated – for example, being able to programmatically generate key/button/action events. This is very useful for creating a playback system, or a soak-testing system that generates fake user input to test apps. Thanks,

    1. I hope this new input system will allow input to be emitted/simulated.

      Absolutely.


      InputSystem.QueueEvent(myEvent);

      You can blast them to disk on one machine and then feed them into the input system on another machine. Or you can locally make up completely artificial events however you like.
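
      A replay loop is then just a matter of re-queueing what was recorded; something like this (LoadEventsFromDisk is a made-up helper, only QueueEvent above is from the prototype):

          // Hypothetical replay: feed previously recorded events back into the system.
          foreach (var recordedEvent in LoadEventsFromDisk("session.inputlog"))
          {
              InputSystem.QueueEvent(recordedEvent);
          }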

  41. I’d just point out that your input system is probably not an input system. It’s probably an input and output system. Haptics such as rumble packs should work the same way. A VR headset should work the same way. The headset’s position and orientation tracking is input. Its display is output. So are its speakers. And all of that maps to a player handle for the exact same reasons you explain with regard to input devices. Consider the idea of two people using two VR headsets on the same computer, and you realize that it’s all unified. From gamepads to displays to gyroscopes.

    1. Rune Skovbo Johansen

      April 12, 2016 at 10:42 pm

      Yep, you’re completely right. Most of what we’re designing is related to input, but it goes broader than that.

  42. Robert Cummings

    April 12, 2016 at 8:29 pm

    Couple of quick comments – not tried yet.

    1. I'm a big fan of using a unified naming scheme, so in our game I just use Xbox controller names and then behind the scenes map to PS4 and other devices via InControl – I wondered if a similar simple approach is possible.

    2. What is the performance like? InControl saps a millisecond, and I only have 16 of them ;)

    Great work from what I can see so far, just wondered if it might be in danger of being over-engineered.

    1. Rune Skovbo Johansen

      April 12, 2016 at 10:22 pm

      I don’t know exactly what you mean by unified naming scheme, but we have a concept of device standardization that you can read about in the design overview. E.g. you can map to controls on a generalized “gamepad” and it just works against all brands and models of gamepads that there are device profiles for. We don’t have anything to say about performance yet.

  43. Is it possible to run the input event system off the rendering thread on mobile? That way we can set the target frame rate to 1 FPS to preserve power and set it back to 30 once the screen is tapped or a button is pressed. And Unity will be even more awesome for developing mobile apps :)

  44. Just checking: will controls be re-configurable at runtime this time around? I guess I'm looking for… the ability to define new action maps at runtime, save their properties using serialization, and assign them to character controllers. I feel this is important in case a player has a weird controller that the developer hasn't made a map for, but really wants it to work with the game. Having in-game configurable controls is much easier to set up with players than mysterious axes they define before starting up the game, though I understand that's a good option for some games. It'd be nice to have the choice – I really strive for accessibility here!

    Also, is it possible for gamepad trigger events to be more unified? Across Mac and PC, the same gamepad's triggers will return a range of 0 to 1 on PC, and -1 to 1 on Mac with the axis initializing at 0 instead of -1.

    This is really niche, but the ability to define a direction on an axis to respond like a button when pushed past a definable threshold would be very convenient too. Sometimes you want a hat switch to respond like four separate buttons instead of an axis. The reverse would also be nice: treating two buttons/keys as an axis. A good example of this is the controls for "Me & My Katamari" on PSP, where Katamari Damacy's traditional joystick controls are swapped for the PSP's d-pad and four buttons.

    Also, what would be very nice: axes implemented in the same manner as "GetKeyDown". If a developer wants to make their own input system, or a very quick jam game, I feel they should be able to without setting up a bunch of axes in the Input Manager. Why should I be able to use "GetKey(KeyCode.Joystick8Button15)", but not something like "GetAxis(AxisCode.Joystick1Axis0)"? You could also potentially add "GetAxis(KeyCode.LeftArrow, KeyCode.RightArrow)" to instantly create a "virtual" axis on the fly.

    I'm only making so many requests because I was able to make my own overlay script that did all these things back in November. It'd be nice to have them be part of Unity, even though I'd sell less on the Asset Store. ;)
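
    For reference, the conversions above can be approximated today with small helpers over the standard Input API (the class and method names are made up):

        using UnityEngine;

        // Illustrative helpers for the conversions described above.
        public static class AxisUtil
        {
            // Map a -1..1 trigger reading (e.g. on Mac) into the 0..1 range used on PC.
            public static float NormalizeTrigger(float raw)
            {
                return (raw + 1f) * 0.5f;
            }

            // Treat an axis direction as a button once it passes a threshold.
            public static bool AxisAsButton(string axisName, float threshold)
            {
                return Input.GetAxisRaw(axisName) >= threshold;
            }

            // Treat two keys as a virtual -1/0/+1 axis (e.g. left/right arrow keys).
            public static float KeysAsAxis(KeyCode negative, KeyCode positive)
            {
                float value = 0f;
                if (Input.GetKey(negative)) value -= 1f;
                if (Input.GetKey(positive)) value += 1f;
                return value;
            }
        }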

    1. Rune Skovbo Johansen

      April 12, 2016 at 8:00 pm

      Yep, you can reconfigure input at runtime! For answers to more of your questions, have a look at the design overview.

      1. Really sorry – I'm reading the design overview again and again, and I can't seem to find anything on these subjects other than reconfiguring input, so I thought I'd ask here. I can't even seem to find basic documentation, just a glossary of terms. For example, it looks like firstPersonControls.fire.isHeld is the new Input.GetButton("fire"), but there's nothing on what's replacing GetButtonUp, GetButtonDown, etc.

        I just feel that having an axis equivalent of "GetKeyDown" would be very important. If people just want to make a very simple game or handle input themselves, it'd be a very nifty tool compared to the old way of setting up countless axes by hand.

        1. Rune Skovbo Johansen

          April 12, 2016 at 10:13 pm

          The section on Device Standardization has one answer at least. :) Most of the other things you mention are also covered in the new system.

          It's true we have no API docs; it's still very early days. But if you download the project and use Visual Studio, MonoDevelop, or something else with IntelliSense, you can get a sense of the API that way. (You can also see the full source if you want.) We do have equivalents to GetButtonDown etc.

          In general though, we don't really need requests from people who haven't tried out the system. Gathering tons of requests is a step we did around a year ago, and we built the system we have so far based on that. By now we don't need more requests; we need people to actually try out the system and tell us about their experience with it. :)

  45. Why reinvent the wheel when there are already way better input systems than your current one? (Also open source.)

    1. Rune Skovbo Johansen

      April 12, 2016 at 7:55 pm

      Because it’s highly requested by many of our users! You’re of course welcome to use whichever system you like best. :)

  46. Finally! Thanks!

    I was about to create an input manager myself; this will save me a lot of time.

    But will the assigned keys/buttons be changeable at runtime? I don't need to create new actions, but it would be really helpful to be able to access the actions and change their keys/buttons at runtime. Right now, the only way to change the assigned trigger of an action (using Unity Input) is to quit the game so the start window is displayed again.

    1. Rune Skovbo Johansen

      April 12, 2016 at 7:44 pm

      Yes, you can change assigned keys and buttons at runtime. Also, please check the FAQ: https://sites.google.com/a/unity3d.com/unity-input-advisory-board/faq

      Note though that there’s still a long way to go before the system is released. Depending on your needs, it might still make sense to write your own system. Give the prototype a try and see if it covers your needs and then you can make an informed decision.

  47. Nice :)

    It could be very handy to add the possibility of attaching "metadata" to an action map – like, for instance, a sprite that shows the button to press (a key for keyboard, a button for gamepad), or a text string ("Press A to start", "Press Space to start").

    1. Rune Skovbo Johansen

      April 12, 2016 at 7:33 pm

      I don't think the ActionMap is the right place for this data. For one, if multiple ActionMaps use the same key/button/axis, it would be annoying and error-prone to have to assign the icons separately in every place. For another, in the ActionMap you can use generalized devices such as "Gamepad", which is an abstraction over specific devices such as "Xbox 360 controller" or "PlayStation 4 controller" – and the same generic gamepad button might be called "A" on one and "Triangle" on the other, and thus have different icons.

      What we do in the current design is that the names of controls come from something we call device profiles. You can easily extract these names in order to get the name of the button that actually needs to be pressed.

      We did not include icon support directly in this system, since we want to be able to ship a lot of device profiles with Unity, and letting the user supply customized icons for built-in device profiles became a bit of a mess. But we do want to supply a way where you can use the control name as a look-up key and get icons etc. back. The end result is what you're talking about (and what is also talked about in the video linked in the blog post).

      1. Fair point :)

  48. Would I be able to use this system in the Unity 5.4 beta as well? Or just in Unity 5.3?

    1. Rune Skovbo Johansen

      April 12, 2016 at 7:25 pm

      I haven’t tested in Unity 5.4 but it should work. Let us know if you encounter problems.

  49. In multiplayer… one player using WASD and the other the arrow keys on the same keyboard will be supported, right?

    Also, for our controller-based games we found it's never enough to have some declarative way of specifying input events. "Button X down" is nice and fun, but in real life you end up with something like "Button X held for 2 seconds and then A tapped for a short time". So please: add a way of "emitting (virtual) input events" or something like that, where we can actually write code that decides when and whether some events are generated. (IMHO, not providing this feature will result in the need to layer the whole input system for any bigger game – which is what we already have now.)

    1. Rune Skovbo Johansen

      April 12, 2016 at 7:22 pm

      Two players using different parts of the keyboard – this will be possible but may be a bit less straightforward than the more common use cases where each device is assigned to one player only.

      It’s fully possible to write code that emits events, but I’m not sure it covers your use case in an ideal way. Would you mind writing a bit more about the setup you have in mind in the New Input System forum? For example, if the game can be controlled with either keyboard+mouse or with a gamepad, I’m curious if you want to emit events for a specific device (keyboard, mouse, or gamepad) or at some higher level that would work regardless of which control scheme is currently used.


  50. So basically this is InControl+. I’m not even mad because InControl is incredibly well designed.

  51. I'm glad to see this being worked on! I've already opened up the currently available version, and I like how you've separated out the input mappings as assets, in the same fashion as Mecanim state machines are assets.

  52. I hope it will support Android/iOS Bluetooth gamepad vibration. XInputDotNet only supports the Xbox 360 controller on Android.

    Some Android Bluetooth gamepads have inconsistent JoystickButton mappings. We need to be able to customize keycodes dynamically (in-game).

  53. Wendelin REICH

    April 12, 2016 at 4:47 pm

    Cool! However, you don’t mention touch-based input here or on the site. I assume it is planned, but what I’d like to know is how you intend to support it:

    Unlike other input methods, the challenge of touch-based input isn't so much about abstracting hardware as about semantics. When is a touch actually a swipe? When is a two-finger swipe a twist and/or a pinch? What's the meaning of a two-finger swipe if the game doesn't make specific use of it?

    One asset that solves these questions relatively well is EasyTouch 4.x. Do you see the new input system as situated on the same level of abstraction as this asset (just like Unity UI was essentially an alternative to NGUI), or is it just a replacement for the touch-related stuff under Input.*?

    1. Rune Skovbo Johansen

      April 12, 2016 at 4:58 pm

      Hi Wendelin! We had a pass where we looked into touch and gestures. What we found was that gestures are very often contextual; e.g. they relate to specific objects or areas on the screen.

      Since the actions in the input system are non-contextual, we came to the conclusion that a gesture based system is something that can be built on top of the core input system rather than having to be built directly into it.

      I think a setup that would make sense is having a gesture controller as a virtual input device. The gesture controller recognizes gestures and sends higher level input events in response. In an action map you’d then be able to specify a touch control scheme where you can map gestures to actions.
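
      As a very rough sketch of that idea (the recognizer and the callback it emits are hypothetical; only the Touch polling is current Unity API):

          using UnityEngine;

          // Hypothetical full-screen swipe recognizer acting as a "virtual device".
          public class SwipeRecognizer : MonoBehaviour
          {
              public System.Action<Vector2> OnSwipe; // subscribers receive the normalized swipe direction
              Vector2 _start;

              void Update()
              {
                  if (Input.touchCount == 0) return;
                  Touch touch = Input.GetTouch(0);

                  if (touch.phase == TouchPhase.Began)
                  {
                      _start = touch.position;
                  }
                  else if (touch.phase == TouchPhase.Ended)
                  {
                      Vector2 delta = touch.position - _start;
                      if (delta.magnitude > 100f && OnSwipe != null) // 100 px threshold, arbitrary
                          OnSwipe(delta.normalized);
                  }
              }
          }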

      Does this make sense to you or did you have something else in mind?

      1. Hi,

        I too really hope that gestures will be handled one way or another in the new input system. Many gestures are commonly used by people in many different applications (swipe, touch, etc.) – they're no longer something exotic, but more like keyboard or mouse buttons. So it would be great to have some of those "invisible inputs" handled by default.

        1. Rune Skovbo Johansen

          April 12, 2016 at 7:49 pm

          It should be quite possible for us to support some generic full-screen gestures that do not rely on context, for example swipes and pinching.

      2. Please give us the opportunity to generate input events from script. Then we will be able to create our own virtual input devices that work transparently with the Unity input system. Thank you.

  54. Please give me a free download of the Unity Web Player.

  55. Aubrey Hesselgren

    April 12, 2016 at 4:34 pm

    In this setup, would I be able to derive an AI Player Handle, to have my game code take control over a Player Avatar? (Seems likely, just wanted to check.)

    1. Rune Skovbo Johansen

      April 12, 2016 at 4:49 pm

      You can create your own custom virtual input device that gets its input from your AI code and then assign that device to the player handle of your AI avatar.

      Is this what you’re after, or do you mean triggering actions in a more direct way, cutting out the need for simulated input devices? We don’t currently have a design for the latter, but it’s something to think about.

  56. Aubrey Hesselgren

    April 12, 2016 at 4:19 pm

  56. So here's a mad thing, and maybe a bridge too far for the input system. I have been using a DLL based on Ryan C. Gordon's ManyMouse code (http://hg.icculus.org/icculus/manymouse) in order to separate the feeds from multiple mice.

    As far as I know, it's working on Windows, Mac, and Linux, at which point I thought it might be of interest to Unity as something no other engine does out of the box.

    Not many people try multi mouse games, for various good reasons (shared desktop space sometimes has you spooning, and controlling with your off-hand is always a bit weird), but some weird and wonderful stuff can be made when you can separate out the mouse inputs. https://t.co/X9JevsEFNh

    Anyway, very happy to hear the new input system is coming along!

  57. Glad to see this is being worked on! I guess my biggest concern is whether Unity will support "hot plugging". At the moment the input system won't pick up controllers that are plugged in at runtime. Also, if there were some way to map buttons on all controllers (not just the mainstream ones), that would be awesome :)

    1. Rune Skovbo Johansen

      April 12, 2016 at 4:19 pm

      It doesn’t work in the prototype currently available, but it’s definitely planned! See the Experimental New Input System site for more info, including the FAQ.