
UPDATED Dec 12, 2017: We have made significant changes to our plans for the input system. Please read our forum post for details.

In Input Team we’ve been working on designing and implementing a new input system. We’ve made good progress, and though there’s still a long way to go, we want to get you involved already now.

We’ve built a new foundation for working with input that we’re excited to show you, and we want to continue development with your feedback on both the existing design and where it goes from here.

Development process

The new input system will consist of two parts. The low-level part is integrated into the C++ core of Unity. The high-level part is implemented in managed (C#) code that will be open-source in the same way as e.g. the UI system.

Our development process for the new input system is to design and implement large parts of the high level system first. Initially this is based on top of the current input system that already exists in Unity. For now, we call this the input system prototype. Later, once the new low-level core is more mature, we’ll change the high-level part to be based on the new low-level system.

This means that the current high-level system (the prototype) lacks specific features that depend on the new low-level core, such as robust registration of connected and disconnected input devices while the game is running. However, many features of the design can already be used and tested, and this is exactly what we want early feedback on.

Working with you

Here’s how we’d like you to get involved:

  1. Learn about the design of the new system
  2. Try out the new input system prototype for yourself
  3. Tell us about your experience

Learn about the design of the new system

Input that works well for a plethora of different use cases is a surprisingly tricky matter. We’ve prepared some resources for you to learn about how the new design attempts to address this.

First of all, we’ve created this video covering the design of action maps and player management. It’s a good introduction to the new design.

We’ve also prepared a site with more information about the design, including a Quick Start Guide.

Head to the Experimental New Input System site to learn more.

Try out the new input system prototype for yourself

We have a project folder which contains the input system prototype as well as a demo project which uses it. This can be used with regular Unity 5.3 without needing a special build.

Download it from the Experimental New Input System site.

The input system prototype can be tested with other projects by copying the folder Assets/input-prototype into the Assets folder of another project. (Please create a backup of your project first.)

Tell us about your experience

What do you think about the design? How does it work (or not) for your project? Anything that’s confusing or unclear? For now we’re interested in discussion around the design of the system. We don’t need bug reports quite yet at this stage in development.

Head to the New Input System Forum to discuss the new input system.

We’re looking forward to working with you!

111 replies on “Developing the new input system together with you”

It would be super cool if the new system supported two mice for local multiplayer, like in The Settlers. So far I’ve only found some hacks and still have to try them in Unity, but I could sleep much better with official support :) Thanks!

Just a note about the current version of the prototype. It doesn’t work nicely with hot reload.
Hope it will be fixed. Thanks.

Hi,
I have a problem: I’m integrating gamepad controls in my game, but “Button A” doesn’t necessarily map to joystick button 0 on every gamepad, so I have to integrate each gamepad separately.
Are you working on that problem too?

I’d like to echo the comments here for some focus on touch input. Unity should have built-in gesture recognizers, preferably implemented in the same way as iOS:


It would be awesome if this covered not only input, but also the special generic features controllers have, like rumble for example.

if (Player.Playerinput.hasController)
{
    Player.Playerinput.rumble();
    Player.Playerinput.Playstation4ControllerColor = Color.Red;

    // or invert a controller axis during runtime
    Player.Playerinput.InvertAxis(Left.Stick); // this may be possible already in the prototype, I don't know ^^
}

It’s great that you guys are working on a better Input system, I’m loving this.

I just wanted to bring some awareness to a feature that I incorporate frequently in my Unity projects and would love to see it supported natively by your Input system. I’m very interested in allowing the user to map different Modifier states to different actions. Here is an example of some action mappings:
– CastSpell1 = Q
– CastPetSpell1 = Shift-Q
– QuestLog = Control-Q
– AutoRun = Shift

As you can see, there is an overlap of keys and modifiers, but they are definitely discrete. Not all games on the market right now allow the user to map their keys in this manner, but some do (one example is Tom Clancy’s The Division).

This might be beneficial to controllers too, for example: if a user would like to use the Xbox One Elite Controller Paddles as a shift state for other buttons (A vs Paddle+A).
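
For reference, a minimal sketch of how this kind of modifier-state handling is typically hand-rolled today with the existing UnityEngine.Input class; the key choices and action methods are illustrative, not part of the proposed system:

using UnityEngine;

// Dispatches Q, Shift-Q and Ctrl-Q to different actions using the old Input API.
public class ModifierBindings : MonoBehaviour
{
    void Update()
    {
        bool shift = Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift);
        bool ctrl = Input.GetKey(KeyCode.LeftControl) || Input.GetKey(KeyCode.RightControl);

        if (Input.GetKeyDown(KeyCode.Q))
        {
            if (ctrl) OpenQuestLog();            // Control-Q
            else if (shift) CastPetSpell1();     // Shift-Q
            else CastSpell1();                   // plain Q
        }

        // A bare modifier can still be its own action (AutoRun = Shift),
        // here only when Q is not involved in the same frame.
        if (Input.GetKeyDown(KeyCode.LeftShift) && !Input.GetKey(KeyCode.Q))
            ToggleAutoRun();
    }

    void CastSpell1() { /* ... */ }
    void CastPetSpell1() { /* ... */ }
    void OpenQuestLog() { /* ... */ }
    void ToggleAutoRun() { /* ... */ }
}

Having the modifier expressed as part of the binding itself would remove exactly this kind of per-frame glue code.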

We need support for more joystick buttons.
Currently there is a built-in limit in the Input Manager of 19 buttons.


Howdy guys, congrats on improving Unity and your continued progress. A few questions: first, will the older input system still be usable? Also, will projects that are older or were created using the old input system have to be updated to keep working?

While the details are still to be figured out, what is certain is that there has to be support for the old API or we’ll break just about every Unity project out there (plus invalidate tons of tutorials and articles). Whether this support is to be provided by just keeping the old API alive, by making it be a shim on top of the new system, by having the script updater rewrite API usages automatically, or by other means… that part isn’t clear yet. There are, however, good arguments that can be made in favor of just keeping the existing API going. At least for some time.

Hello, thanks for the good news!
Besides all the talk about the great new architecture, multiplayer abilities, etc. – will it please *finally* support simple, ordinary touch input on Windows, the same way it does on Android etc.?

In fact I did not believe it was missing when I first tried to use it, then read the forums and found various third-party solutions tackling this deficiency…

But of course thanks for the development effort anyway… :-)

This might be an unpopular opinion, but I’m not very fond of built-in complex high level systems. As something you can add to a project it would be awesome though. But as long as I’m able to make them from available information I’m satisfied, and I do miss having the information of “this input came from this device” so I can do custom tutorial messages, for example, so this new system already has me happier just for that. But having this entire system out of the box makes me a bit nervous. It will surely help a lot of projects, but the most unusual projects will always see these built-in high level systems as another obstacle, another thing to remove before finding the blank canvas. I’m sure that if I want to make a game that has the player holding a gamepad with one hand and a mouse with the other it will still be possible to do, but I might need to jump through some more hoops to do that than with the current system.

I guess what I’m trying to say is, what attracted me to Unity in the first place was not the amount of features it had. If that were the case I would still be using UDK or Crytek. No, it was the blank canvas thing; how easy and flexible it is to build ANY game on it. In fact, even non games. And any new feature that brings a tiny amount of constraint and less flexibility is no good.

But I know I shouldn’t be nervous! Unity clearly still retains its initial philosophy and I’ll always have the option of creating my own crazy input systems however the hell I want from basic exposed information. Right?

After 7 years of waiting and more than 2,000 feedback votes, it’s good that something is happening, but the one-man, one-month effort shown here is barely enough; just check the public roadmap, where such an important item sits at “In progress, timelines long or uncertain”.
Although you say Unity will do it, I sense a lot of reluctance about the changes that need to be made on the C++ side,
especially exposing the low level to a C# API; it leaves input system developers like myself having to marshal OS-level C++ into C# ourselves.
For example, capturing device connect and disconnect is something you have already covered, at least at a high level, in the proposed API.
(By the way, you can simulate this with Input.GetJoystickNames() as long as you don’t have two devices with the same name; see the sketch after this comment.)
Two main points:
1) As a customer I need to remap and/or set InputManager.asset settings like sensitivity or gravity, and SAVE them!
As far as I can see, the proposed API can remap at runtime but can’t save it. What do you have in mind? PlayerPrefs?
2) As a developer I want to connect a device, create a profile, and map it to actions by MOVING or CLICKING (long, double…) or a combination.
Does the current system have plans to support COMBOS, and will EDITOR MODE offer device mapping, instead of a huge 10-screen popup to map keyboard keys?

I like the ActionMapInput generator, so you get IntelliSense in code, but it is boilerplate if you expect us to write
if (ActionMapInput.isHeld) // do something
Expose something like ActionMapInput.isHeldUnityEvent so I can subscribe a handler in the editor. UNITY EVENTS!
Device profiles definitely shouldn’t be HARDCODED but should be ASSETS containing just data, not code creating device instances (new GamePad()).

It would also be good to have a public class AnimationCurve : InputControl which, whether the actual input is discrete or analog, calculates a value according to a curve.

You also need to track the connected device’s “PORT”, so that if the user changes the device’s port nothing needs to be reassigned.
I hope the code below won’t be HARDCODED, so that, for example, I won’t have to include the mouse in a mobile dev target:
go.AddComponent();
go.AddComponent();
go.AddComponent();
go.AddComponent();
go.AddComponent();

I should be able to support any device your C++ layer doesn’t support, just by adding another go.AddComponent(); and the rest would work.
Put the code on GITHUB so we can track changes and progress.
P.S. Contact other input system developers, not just “Patrick” – like Guavaman…
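
As a reference for the Input.GetJoystickNames() workaround mentioned above, here is a minimal polling sketch that diffs the name list to approximate connect/disconnect notifications; the poll interval is illustrative, and as noted it breaks down when two devices report the same name:

using System.Linq;
using UnityEngine;

// Approximates device connect/disconnect events by polling Input.GetJoystickNames()
// on an interval and diffing the result against the previous poll.
public class JoystickWatcher : MonoBehaviour
{
    const float pollInterval = 1f;        // illustrative
    string[] previous = new string[0];
    float nextPoll;

    void Update()
    {
        if (Time.unscaledTime < nextPoll)
            return;
        nextPoll = Time.unscaledTime + pollInterval;

        string[] current = Input.GetJoystickNames();

        // Names that appeared since the last poll are treated as "connected".
        foreach (string name in current.Except(previous).Where(n => !string.IsNullOrEmpty(n)))
            Debug.Log("Joystick connected: " + name);

        // Names that disappeared are treated as "disconnected".
        foreach (string name in previous.Except(current).Where(n => !string.IsNullOrEmpty(n)))
            Debug.Log("Joystick disconnected: " + name);

        previous = current;
    }
}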

First of all: it’s great to see that the Unity team is finally doing some work in this area – it was a major pain for years.

I watched the youtube video. Basically, all planned features seem nice and dandy. However, in certain technical aspects, the implementation (or, at least the explanation in the video) doesn’t go far enough. There are two major areas where I would like to see more initiative:

1) Control re-mapping. Especially in PC gaming, there’s always the obligatory options menu where the player can (usually freely) map keys to actions as he sees fit. For example, I might want to re-map movement from WASD to the arrow keys, in the game, at runtime. That was almost impossible to do with the old input system without considerable effort. This, of course, also involves checking for potential collisions, and saving the settings for the next game session (i.e. this is not just an in-memory thing, it has to be persisted). How does the new system handle this?

2) On the coding side of things, I’ve found “if-then-else” cascades in the update function of a script that do nothing other than checking the current input state and calling functions to be tedious, hard to maintain and error-prone. Instead, I would like to see an annotation-based system like this:

[BindToInput(actionName="jump")] // call this method when the game object is active and jump input occurs
public void jump(){
    // ... player jump code goes here
}

This completely eliminates the need to check for input in the update function, as it is a “push”-based system, whereas currently input is handled in a “pull” fashion.
This can also suit more complex scenarios:

[BindToInput(actionName="fire")]
[KeyModifier(actionName="alternative")] // do not call this method unless the "alternative" fire key was held when the fire key was pressed
[HoldForMillis(2000)] // do not call this method unless the fire button was held for 2 seconds
public void fireMissile(){
    // ... code for firing the missile goes here
}
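
The [BindToInput] annotation above is the commenter’s proposal, not something in the prototype, but as a thought experiment here is roughly what a reflection-based dispatcher for it could look like on top of the existing polling API. The attribute, the dispatcher, and the action-to-key table are all hypothetical, and a real version would cache the reflection results instead of scanning every frame:

using System;
using System.Reflection;
using UnityEngine;

// Hypothetical attribute: supports the [BindToInput(actionName = "jump")] syntax above.
[AttributeUsage(AttributeTargets.Method)]
public class BindToInputAttribute : Attribute
{
    public string actionName;
}

// Hypothetical dispatcher: finds [BindToInput] methods on sibling components and
// invokes them (push style) when the bound action's key goes down this frame.
public class InputDispatcher : MonoBehaviour
{
    // Illustrative lookup; a real system would read this from an action map asset.
    static KeyCode Resolve(string action)
    {
        switch (action)
        {
            case "jump": return KeyCode.Space;
            case "fire": return KeyCode.Mouse0;
            default: return KeyCode.None;
        }
    }

    void Update()
    {
        foreach (MonoBehaviour behaviour in GetComponents<MonoBehaviour>())
        {
            MethodInfo[] methods = behaviour.GetType().GetMethods(
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
            foreach (MethodInfo method in methods)
            {
                object[] attrs = method.GetCustomAttributes(typeof(BindToInputAttribute), false);
                if (attrs.Length == 0)
                    continue;
                var binding = (BindToInputAttribute)attrs[0];
                if (Input.GetKeyDown(Resolve(binding.actionName)))
                    method.Invoke(behaviour, null);   // assumes a parameterless method
            }
        }
    }
}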

Nope, won’t happen. Backwards compatibility is a must-have. The details of how that will work are still unclear, but what’s certain is that even at the point where the new system is the go-to solution, you’ll be able to load your project using the old API into Unity and it’ll work just fine.

I’m just in the beginning phase of learning Unity and the first thing I noticed was the weird way of handling input. I was having a hard time trying to accept it, but then as the weeks went by, this blog post made me smile again.

Keep up the good work.

I was wondering… Is it possible that the design philosophy behind this new input system is part of the bigger plan to change the way scripting works in Unity (which was very briefly mentioned at Unite Boston)? I’m referring to the fact that input is now handled by an actual input component instead of just being an “if()” check in Monobehaviours.

Instead of having every single thing jammed into Monobehaviour, will we maybe see collisions being handled by events generated by Collider components, and rendering events (OnPostRender(), etc…) being handled by events generated by Renderer components, etc, etc…? I think this approach would be wonderful.

While not directly related to the work on the input system, something along the lines of what you describe is being investigated as part of looking at components in a wider perspective.

First off, thank you for the hard work. Please ignore the trolls that don’t understand all the work that’s going on.

You commented to Robert Cummings that you have a form of design mapping. Looking at the Design Overview site, it looks like you’re doing something similar to Gallant Games’ “InControl”. If you haven’t taken a look at what Patrick is doing, I’d highly recommend that you at least take a look. It’s been a great plugin.

http://www.gallantgames.com/pages/incontrol-introduction

Bug report: when I enable Auto Graphics API in Player Settings, Unity’s main fog (Window > Lighting > Fog) works and Global Fog (Optimized) doesn’t, and vice versa!
They work in the Unity editor but not on Android devices.

Are you deaf or what, Unity?! Even though mobile devices are the primary sector of your business, you always put touch and other mobile features in the back seat.
SICK OF THAT!!

Please stop and hire the guy who did “Rewired” to help; otherwise I feel like users will end up writing plugins on top of the new input system just like they did with the old one.

Completely unneeded!
Please don’t change anything; we already have to spend a few hours daily just to catch up with all the changes you make, from fixes to new features.
Please work on easier physics and realistic fluids and cloth instead.

Can we play with two keyboards and mice (one keyboard and mouse for the first player and a second set for the second player)?

Remember that in games players might want to customize the keys they use (e.g. a player doesn’t want to fire using the spacebar key, so he goes to the config screen and changes it to the ‘A’ key). I hope this new system supports these changes smoothly in realtime.

My primary platform is mobile devices, and cross-platform input is working well for me. Do I continue using that, or will this give better control over cross-platform development when it comes to touch controls?


Once you get to fleshing out the new low-level core, can we please make sure to include raw mouse input? For most games this isn’t a big deal, but as part of a company running an FPS game, we get requests for this quite often and haven’t gotten around to addressing it on our own yet.
I’m referring to the «mouse smoothing/acceleration» that Windows and other OSes may have, and to being able to bypass that without having to suggest to our players to manually disable that setting in the OS.

Glad to see this system finally being updated. Especially the part about runtime re-binding ;)

Seems promising.

With the current system, gamepad inputs are not received if the game view is not focused. This is fine for a release version of the game, but it’s not convenient at edit time when you need to debug. Triggering a breakpoint removes focus from the game view and potentially changes the state of what you want to debug. The same goes for just editing properties in the inspector view while playing with the gamepad.

When using XInput.NET instead of the current system, gamepad inputs are received no matter which window is focused. By wrapping it, I can simply choose whether I want to update the inputs when the application is focused or not. It would be nice to be able to choose that as a setting (like SetCooperativeLevel in DirectInput).

Thanks!

I do have one question: will this mean that Unity ‘might’ be switching from using an array for connected controllers/gamepads? The reason I ask is that we found that to be a colossal issue with our student project last year (handling disconnection of controllers/gamepads), as they were stored as an array by the engine.

Can ‘PlayerHandle’ be something a bit more clear? Is there an actual ‘Player’ class, if not why not just call it ‘Player’. Or to make it more clear, something like ‘PlayerInputBinding’?

I’m also a bit confused by (~16:28) in the video where it seems to show that a single GameObject would have the Player Input Script component as well as the Movement script for *both* a Vehicle and Player (Biped I assume). Is the idea that this example is supposed to be something that can be both a Biped and Vehicle or is it just an invisible «Player» GO that references another entity based on what it is trying to control at that time? i.e. a biped that enters and controls a vehicle.

As long as it works like the Rewired plugin, it’ll be a great start. I wonder how often (if ever) Unity collaborates with plugin makers to turn them into built-in features.

I feel kinda silly saving up all month for a plugin for input just for this to pop up a day later.

What about Windows touch support? It’s a frequently requested feature. Input.GetTouch only works on mobile platforms…

I agree that this should be built into the Unity input system, particularly with the Win8/10 focus on touch screens. In the meantime, check out the plugin GenTouch, it solved the problem for us.

Only when they foolishly begin to think such pedantry holds great value. Stay warm and human Team Unity!

He may be trying to point out that the “already now” in “we want to get you involved already now.” doesn’t flow grammar-wise. But hey, my grammar is really bad and this is a tech blog. My expectations on this are low for grammar in these parts. It’s not like Unity is a book publishing company. XD

What if you want to mix Gamepad AND Mouse?
Using a nunchuk or PlayStation Navigation Controller and a mouse together is a pretty good control scheme.

Feels similar to a lightweight version of Rewired or InControl. Very glad to see you guys are responding to that very large need as the current system is a bit of a mess.

Great job guys! I coded something similar for a past project; really wish this had existed then. Seems so much more thought out than mine. Thanks for responding to your users!! We love Unity!

Looking promising!

Question: Is there a way to handle multiple mice/trackballs? This has been a beast to tackle in Unity up to this point, since Windows treats all of these plugged-in devices as the same device. I had to utilize external DLLs to distinguish between them in my Unity application.

Thanks for the information and the update!

Yes and no :)

The system itself has no restriction on the types and number of devices hooked into it. 5 keyboards or 10 mice, it doesn’t care.

However, from what I understand, you care most about the platform actually detecting that there’s more than one pointing device and properly registering those as multiple instances. Unfortunately, as you say, Windows pointer messages can come from different devices yet will look like just “the” pointer to the application. And we pick up pointer messages in a way where the origin isn’t even evident.

So, all I can promise at this point is that we’ll take a look and see. Even then it would probably be something that is supported on a per-platform basis only.

Thanks for the reply!

I can point you to a place where someone seems to have nailed this implementation (from what I can gather, and from my non-low-level perspective): https://alastaira.wordpress.com/2015/08/04/multiple-mice-input-in-unity/#comment-5021 . It works really well; that comment has the source code for the DLL implementation, and the solution has Mac, Windows, and Linux projects in it.

This is all rather timely, as I just made a client application for a large aquarium here in the States and we’re having an issue with one of the trackballs getting disconnected for some reason. I was using a different implementation, but I switched to this one today, and this one seems to have the ability to re-init itself and re-acquire the lost device. I did a quick test where I pulled a mouse out of my computer, replugged it in, and manually triggered the re-init, and the input kept working as if I had never unplugged it.

Keep up the good work!

It would be really useful to be able to get some form of unique hardware id for each input device. That way we could allow players to create profiles for specific devices within our games :)

Is there a way to subscribe to an input event instead of checking for input on Update? Also, is this new system running on the main loop? Both practices are really bad: if your game’s rendering slows down, your input will also be waiting for the rendering to finish.

Is there a way to subscribe to an input event instead of checking for input on Update?

In the prototype you can hook yourself into the event tree (basically a tree of subscribers) and then you’ll get callbacks. The details of event distribution are still a bit up in the air as we’re trying to make something that works in a bit of a wider context than just input, but I don’t think the fact that you can get notifications is going to change.

Also, is this new system running on the main loop?

Event processing, yes. Event gathering, not necessarily. We completely agree that frame-rate dependence in input is bad. And the old system was inherently tied to frame rate.

What we want is to have event gathering (e.g. when we poll gamepads) to happen off the main thread where possible and where it makes sense. Where we already have properly timestamped events we can pick up from the OS instead of having to poll, that doesn’t make sense but where we can’t it definitely does. Doing it off the main thread will allow it to be run at higher frequency than frame rate and pick up events from polled devices with better granularity.

Too bad you couldn’t answer my question from APRIL 12, 2016 AT 8:03 PM in this thread.

But you mentioned later:
What we want is to have event gathering (e.g. when we poll gamepads) to happen off the main thread where possible and where it makes sense. Where we already have properly timestamped events we can pick up from the OS instead of having to poll, that doesn’t make sense but where we can’t it definitely does. Doing it off the main thread will allow it to be run at higher frequency than frame rate and pick up events from polled devices with better granularity.

DENNIS APRIL 12, 2016 AT 8:03 PM / REPLY
Is it possible to run the input event system outside the rendering thread on mobile? This way we can set the target framerate to 1 FPS to preserve power and set it back to 30 once the screen is tapped or a button is pressed. And Unity will be even more awesome for developing mobile apps :)

Does this mean it will work for touch input on mobile too?

I hope this new input system will allow input to be emitted/simulated. For example, we could programmatically generate key/button/action events. This is very useful for creating a playback system, or for creating a soak-testing system that generates fake user input to test apps. Thanks,

I hope this new input system will allow input to be emitted/simulated.

Absolutely.


InputSystem.QueueEvent(myEvent);

You can blast them to disk on one machine and then feed them into the input system on another machine. Or you can locally make up completely artificial events however you like.

I’d just point out that your input system is probably not an input system. It’s probably an input and output system. Haptics such as rumble packs should work the same way. A VR headset should work the same way. The headset’s position and orientation tracking is input. Its display is output. So are its speakers. And all of that maps to a player handle for the exact same reasons you explain with regard to input devices. Consider the idea of two people using two VR headsets on the same computer, and you realize that it’s all unified. From gamepads to displays to gyroscopes.

A couple of quick comments – I haven’t tried it yet.

1. I’m a big fan of using a unified naming scheme, so in our game I just use Xbox controller names and then behind the scenes map them to PS4 and other devices via InControl – I wondered if a similarly simple approach was possible.

2. What is the performance like? InControl saps a millisec, and I only have 16 of them ;)

Great work from what I can see so far, just wondered if it might be in danger of being over-engineered.

Is it possible to run the input event system outside the rendering thread on mobile? This way we can set the target framerate to 1 FPS to preserve power and set it back to 30 once the screen is tapped or a button is pressed. And Unity will be even more awesome for developing mobile apps :)
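
Independent of where event gathering ends up running, the power-saving half of this idea can already be approximated with today’s API by lowering Application.targetFrameRate while idle. A rough sketch, with illustrative rates and timeout; note that at 1 FPS the Update loop itself only samples input once per second, which is exactly why off-main-loop gathering would help:

using UnityEngine;

// Drops the target frame rate while the user is idle and restores it on input.
public class IdleThrottle : MonoBehaviour
{
    const float idleTimeout = 5f;    // seconds without input before throttling
    float lastInputTime;

    void Update()
    {
        if (Input.anyKeyDown || Input.touchCount > 0)
            lastInputTime = Time.unscaledTime;

        bool idle = Time.unscaledTime - lastInputTime > idleTimeout;
        Application.targetFrameRate = idle ? 1 : 30;
    }
}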

Just checking, will controls be re-configurable during runtime, this time around? So I guess I’m looking for… the ability to define new action maps during runtime, save their properties using serialization, and assign them to character controllers? I feel this is important, in case a player has a weird controller that the developer hasn’t made a map for, but really wants it to work with the game. Having in-game configurable controls is much easier to set up with players than mysterious axes they define before starting up a game, but I understand that it’s a good option for some games. It’d be nice to have the choice; I really strive for accessibility here!

Also, is it possible for gamepad trigger events to be more unified? On Mac & PC, the same gamepad’s triggers will return a range of 0 to 1 on PC, and -1 to 1 on Mac with the axis initializing at 0 instead of -1.

This is really niche, but the ability to define a direction on an axis to respond like a button when pushed past a definable threshold would be very convenient, too. Sometimes you want a hatswitch to respond like 4 separate buttons instead of an axis. The reverse would also be nice, treating two buttons/keys as an axis. A good example of this would be the controls for «Me & My Katamari» for PSP, where Katamari Damacy’s traditional joystick controls are swapped for the PSP’s d-pad & 4 buttons.

Also, what would be very nice: axes implemented in the same manner as “GetKeyDown”. If a developer wants to make their own input system, or make a very quick jam game, I feel they should be able to without setting up a bunch of axes in the Input system. Why should I be able to use “GetKey(KeyCode.Joystick8Button15)”, but not something like “GetAxis(AxisCode.Joystick1Axis0)”? You could also potentially add “GetAxis(KeyCode.LeftArrow, KeyCode.RightArrow)” to instantly create a “virtual” axis on the fly.

Only making so many requests because I was able to make my own overlay script that did all these things, back in November. It’d be nice to have these be a part of Unity, even though I’d get less on the Asset Store. ;)
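
A few of the ideas above (the trigger range mismatch, an axis-as-button threshold, and a two-keys-as-axis helper) can be approximated in a small user-side helper with the existing API; the axis names in the usage comments are assumptions about Input Manager entries, not built-ins:

using UnityEngine;

public static class InputHelpers
{
    // Builds a -1..1 "virtual" axis from two keys on the fly.
    public static float GetAxis(KeyCode negative, KeyCode positive)
    {
        return (Input.GetKey(positive) ? 1f : 0f) - (Input.GetKey(negative) ? 1f : 0f);
    }

    // Remaps a trigger reported as -1..1 (resting at -1) into 0..1 so the same
    // gameplay code works whether the platform reports 0..1 or -1..1.
    public static float NormalizeTrigger(float raw, bool reportsMinusOneToOne)
    {
        return reportsMinusOneToOne ? (raw + 1f) * 0.5f : raw;
    }

    // Treats the positive direction of an axis (e.g. a hat switch) as a button
    // once it passes a threshold; negate the value for the other direction.
    public static bool AxisAsButton(float axisValue, float threshold)
    {
        return axisValue >= threshold;
    }
}

// Example usage in Update():
//   float horizontal = InputHelpers.GetAxis(KeyCode.LeftArrow, KeyCode.RightArrow);
//   float rt = InputHelpers.NormalizeTrigger(Input.GetAxis("RightTrigger"), onMac);
//   bool hatUp = InputHelpers.AxisAsButton(Input.GetAxis("HatVertical"), 0.5f);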

Why reinvent the wheel when there are already way better input systems than your current one? (Also open source.)

Finally! Thanks!

I was about to create an input manager myself, so this will save a lot of testing time.

But will the assigned keys/buttons be able to be changed at runtime? I don’t need to create new actions, but it would be really helpful to be able to access the actions and change their keys/buttons during runtime; right now, the only way to change the assigned trigger of an action (using Unity Input) is to quit the game so the start window is displayed again.
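
Runtime rebinding with persistence is a core part of what the new design needs to cover; for comparison, here is the kind of workaround that ships on top of the old API today: listen for the next key press, store it, and persist it with PlayerPrefs. The action names are illustrative, and this only covers keys and buttons exposed through KeyCode, not axes defined in InputManager.asset, which is precisely the limitation being described:

using System;
using UnityEngine;

// Minimal runtime rebinding: call StartRebind("Fire"), press a key, and the new
// binding is stored and persisted via PlayerPrefs.
public class SimpleRebinder : MonoBehaviour
{
    string actionBeingRebound;

    public KeyCode GetBinding(string action, KeyCode fallback)
    {
        string saved = PlayerPrefs.GetString("binding_" + action, fallback.ToString());
        return (KeyCode)Enum.Parse(typeof(KeyCode), saved);
    }

    public void StartRebind(string action)
    {
        actionBeingRebound = action;
    }

    void Update()
    {
        if (actionBeingRebound == null || !Input.anyKeyDown)
            return;

        // Scan all key codes to find the one that was just pressed.
        foreach (KeyCode key in Enum.GetValues(typeof(KeyCode)))
        {
            if (Input.GetKeyDown(key))
            {
                PlayerPrefs.SetString("binding_" + actionBeingRebound, key.ToString());
                PlayerPrefs.Save();
                actionBeingRebound = null;
                break;
            }
        }
    }
}

// Gameplay code then polls the stored binding, e.g.:
//   if (Input.GetKeyDown(rebinder.GetBinding("Fire", KeyCode.Space))) Fire();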

Nice :)

It could be very handy to add the possibility of attaching “metadata” to an action map,
like a sprite, for instance, that shows the button to press (a key for keyboard, a button for gamepad),
or a text (“Press A to Start”, “Press Space to Start”).

In Multi-Player… one player using “WASD” and the other the arrow keys of the same keyboard will be supported, right?

Also, for our controller-based games we found it is never enough to have some declarative way of specifying input events. “Button X down” is nice and fun, but in real life you end up with something like “Button X held for 2 seconds and then tap A for a short time”. So please: add a way of “emitting (virtual) input events” or something like that, where we can actually write code that decides when and whether some events are generated. (IMHO, not providing this feature will result in the need to layer the whole input system for any bigger game – which we already have to do now.)
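
To make the “hold X for two seconds, then tap A” case concrete, this is roughly the stateful glue one writes by hand today, and the kind of thing a declarative or virtual-event layer could express directly; button choices and timing are illustrative:

using UnityEngine;

// Hand-rolled combo: FireMissile only triggers if the "hold" button has been held
// for at least two seconds when the "tap" button goes down.
public class HoldThenTapCombo : MonoBehaviour
{
    const float requiredHold = 2f;
    float holdStart = -1f;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.JoystickButton2))   // "X" pressed (illustrative mapping)
            holdStart = Time.unscaledTime;
        if (Input.GetKeyUp(KeyCode.JoystickButton2))     // "X" released
            holdStart = -1f;

        bool heldLongEnough = holdStart >= 0f &&
                              Time.unscaledTime - holdStart >= requiredHold;

        if (heldLongEnough && Input.GetKeyDown(KeyCode.JoystickButton0))   // tap "A"
            FireMissile();
    }

    void FireMissile() { /* ... */ }
}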

So basically this is InControl+. I’m not even mad because InControl is incredibly well designed.

I’m glad to see this is being worked on! I’ve already opened up the current available version and I like how you guys have separated out the input mappings as assets in the same fashion as mecanim state machines are assets.

I hope it will support Android/iOS Bluetooth gamepad vibration.
XInputDotNet only supports the Xbox 360 controller for Android.

Some Android Bluetooth gamepads have inconsistent JoystickButton mappings. We need to be able to customize keycodes dynamically (in-game).

Cool! However, you don’t mention touch-based input here or on the site. I assume it is planned, but what I’d like to know is how you intend to support it:

Unlike other input methods, the challenge of touch-based input isn’t so much about abstracting hardware, but about semantics. When is a touch actually a swipe, when is a two-finger swipe a twist and/or pinch? What’s the meaning of a two-finger swipe if the game doesn’t make specific use of it?

One asset that solves these questions relatively well is EasyTouch 4.x. Do you see the new input system as situated on the same level of abstraction as this asset (just like Unity UI was essentially an alternative to NGUI), or is it just a replacement for the touch-related stuff under Input.*?
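
The hard part is indeed the semantics, but for reference this is the sort of glue such assets wrap: e.g. reading two touches from the existing Input.GetTouch API and interpreting a change in their separation as a pinch. The jitter threshold is illustrative, and a real recognizer also has to disambiguate pinch from twist and two-finger swipe:

using UnityEngine;

// Minimal pinch detection: compares how far apart two touches are now
// versus where they were one frame ago.
public class PinchDetector : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount != 2)
            return;

        Touch a = Input.GetTouch(0);
        Touch b = Input.GetTouch(1);

        float currentDistance = (a.position - b.position).magnitude;
        float previousDistance = ((a.position - a.deltaPosition) -
                                  (b.position - b.deltaPosition)).magnitude;

        float delta = currentDistance - previousDistance;
        if (Mathf.Abs(delta) > 1f)   // ignore sub-pixel jitter
            Debug.Log(delta > 0f ? "Pinch out (zoom in)" : "Pinch in (zoom out)");
    }
}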

In this setup, would I be able to derive an AI Player Handle, to have my game code take control over a Player Avatar? (Seems likely, just wanted to check.)

So here’s a mad thing, and maybe a bridge too far for the input system. I have been using a dll based on Ryan C. Gordon’s ManyMouse code ( http://hg.icculus.org/icculus/manymouse) in order to separate the feeds from multiple mice.

As far as I know, it’s working on windows, mac, and linux, at which point, I thought it might be of interest to Unity as something no other engine does out of the box.

Not many people try multi mouse games, for various good reasons (shared desktop space sometimes has you spooning, and controlling with your off-hand is always a bit weird), but some weird and wonderful stuff can be made when you can separate out the mouse inputs. https://t.co/X9JevsEFNh

Anyway, very happy to hear the new input system is coming along!

Glad to see this is being worked on! I guess my biggest concern would be whether Unity will be supporting ‘hot plugging’. At the moment the Input system won’t pick up controllers that are plugged in at runtime. Also, if there were some way to map all controllers to buttons (not just the mainstream controllers), that would be awesome :)
