The Input System is available in preview for Unity 2019.1 and later. This new system focuses on ease of use and consistency across devices and platforms. Install it via the Package Manager, give it a try, and join us on the forums with any feedback you have.

Unity’s current built-in input management system was designed before we supported the many platforms and devices that we do today. Over the years, we realized that it wasn’t very easy to use and, occasionally, it even struggled with simple situations – like plugging in a controller after the executable was launched. That’s why we have been working on something new: a complete rewrite. (P.S. There is currently no timeline for the removal of the current Input Manager.)

The Input System is built from the ground up with ease of use, consistency across platforms, and flexibility in mind. Today, we’d like to invite you to try it out and give us feedback ahead of its planned release alongside Unity 2020.1. The minimum requirement for using the system is and will remain Unity 2019.1.

New technology has dramatically changed how we play and consume media in recent years. Every new wave of technology also brings new devices along with specific control requirements.

Ease of use

The Input System’s new workflow is designed around a simple interface that works for all platforms, and can easily be extended to support custom or future devices.

The Action-focused workflow is designed to separate the logical input your game code responds to from the physical actions the user takes. You can define Actions in the dedicated editor (or in a script) and bind them to both abstract and concrete inputs, such as the primary action of a Device or the left mouse button.
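
For illustration, here is a minimal sketch of defining an Action in a script and binding it to both a concrete and an abstract input. The action name and bindings are our own example, not prescribed by the package:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// A hypothetical "Fire" action bound to the left mouse button (concrete)
// and to whatever a gamepad maps to its south button (abstract).
public class FireInput : MonoBehaviour
{
    InputAction fireAction;

    void Awake()
    {
        fireAction = new InputAction("Fire", binding: "<Mouse>/leftButton");
        fireAction.AddBinding("<Gamepad>/buttonSouth");
        fireAction.performed += ctx => Debug.Log("Fire!");
    }

    void OnEnable() { fireAction.Enable(); }
    void OnDisable() { fireAction.Disable(); }
}
```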

Action Maps make it easy to manage many Actions across multiple Devices and Control Schemes.

Using the Input System’s PlayerInput component, you can easily link input actions to a GameObject and script action responses. This will work with any number of players in the game.

Use PlayerInput to set up input for a player.

Get callbacks when actions are performed:
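
Here is a sketch of what that looks like, assuming a PlayerInput component left on its default Send Messages behavior and an Action Map containing actions named "Move" and "Jump" (both names are assumptions for this example):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// PlayerInput invokes On<ActionName> methods on this GameObject's
// components when the corresponding action is performed.
public class PlayerController : MonoBehaviour
{
    Vector2 moveInput;

    // Called for the assumed "Move" action.
    void OnMove(InputValue value)
    {
        moveInput = value.Get<Vector2>();
    }

    // Called for the assumed "Jump" action.
    void OnJump()
    {
        Debug.Log($"Jump! (current move input: {moveInput})");
    }
}
```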

The new system supports an unlimited number and mix of devices. It also allows for notifications on device changes so you can properly support new devices at runtime.
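
For instance, a minimal sketch of reacting to devices being plugged in or unplugged at runtime:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Logs devices as they are added or removed while the game is running.
public class DeviceWatcher : MonoBehaviour
{
    void OnEnable()  { InputSystem.onDeviceChange += OnDeviceChange; }
    void OnDisable() { InputSystem.onDeviceChange -= OnDeviceChange; }

    void OnDeviceChange(InputDevice device, InputDeviceChange change)
    {
        if (change == InputDeviceChange.Added)
            Debug.Log($"Device added: {device.displayName}");
        else if (change == InputDeviceChange.Removed)
            Debug.Log($"Device removed: {device.displayName}");
    }
}
```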

Note that console development requires installing additional packages. These are available from each console’s dedicated forum, the same place you would traditionally retrieve the Unity installer. See this list of Supported Input Devices for more details.

Customization

The Input System is designed to be extensible, with APIs that let you support new Devices and design your own Interactions, Input Processors, or even custom Binding setups. And in case you want to take a look under the hood, the package comes with complete source code and is developed on GitHub.
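
As an illustration, here is a sketch of a custom Input Processor. The clamping behavior, class name, and registration name are our own example, not one of the package’s built-ins:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// A custom processor that clamps a float control's value to a range.
// Once registered, it can be added to Bindings by name ("Clamp").
public class ClampProcessor : InputProcessor<float>
{
    public float min = 0f;
    public float max = 1f;

    public override float Process(float value, InputControl control)
    {
        return Mathf.Clamp(value, min, max);
    }

    // Register on startup so Bindings that reference it can resolve.
    [RuntimeInitializeOnLoadMethod]
    static void Register()
    {
        InputSystem.RegisterProcessor<ClampProcessor>("Clamp");
    }
}
```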

Some of the default Interactions. You can easily create your own.

Getting started with the Input System

Using Unity 2019.1 or later, open the Package Manager and enable Show Preview Packages in the Advanced menu. You will find the Input System package in the list of All Packages. Click Install in the upper right of the details panel. The current version is 1.0-preview. We’re working on getting the package verified for Unity 2020.1 and we’ll be adding new features afterwards.

You may see a popup warning letting you know that you need to enable the new Input System backend. Click Yes and restart the editor, and you’ll be ready to go.

Be sure to check out our Quick start guide, take a look at the samples that are installable through the Package Manager, and join us on the forums to give us your feedback. And in case you’d like to follow the active development, check out our GitHub repo.

84 replies on “Introducing the new Input System”

I’ve been having issues with reading values from an Axis or Vector2. It seems like my code only reads a value when the value changes, or when I move a joystick. If I move the joystick to a position (all the way forward, for example), the script will read the value as I move it, but reads it as zero when I keep it still. Am I doing something wrong or is the input system not working?

I’m trying to build my project to an Android device (Samsung A7 2018), but whenever the build starts, the Input System won’t work. It works fine in the editor, my joystick works perfectly, but it doesn’t work when I try it on my phone. Is there any option to enable in the build settings when building for another device?

I set up my controller to work with this (and my camera) but I was not able to figure out how to use the player input component. Instead I had to create an input script that functions off the callbacks. This has resulted in a couple of issues I haven’t yet figured out.

1) I have two schemes set up (Xbox and PS4 controllers) and in the editor they both work, but when I build, only the Xbox 360 controller works. How do I enable both in the build?

2) I feel that the PlayerInput component provides a lot of functionality without all the extra coding I did, but I could not figure out how to use it. Could you release a video that shows how to plug it into a simple game? You could just plug it into a third-person controller (Standard Assets) and show how to use the schemes.

I was really skeptical when I started watching the Unite talk, but I was really impressed by the editors and the way you guys pulled it off. I will most likely play around with the preview package when the gestures come out. My biggest fear is that I’d still need to wrap the input system in one extra layer in order to be able to use it properly in my projects…

The most I can say is that I can’t wait to see more about it, and I will definitely want gestures, macros, and action sequences to be part of the system. Looking forward to it, great work!

This seems great! Though I don’t have the chance to try it out when using vJoy to handle input from my gamecube controllers using my gamecube controller adapter for Wii U. I get an error saying “Could not create a device for ‘Shaul Eizikovich vJoy – Virtual Joystick (HID)’ (exception: System.NotImplementedException: The method or operation is not implemented.” etc, though the input from my Gamecube controllers seems to work just fine through Rewired and Unity’s old input system. Will the new input system support vJoy in the future? If not, could you recommend any other drivers to handle input from my Gamecube controllers?

The split screen input stuff looks great! Has the EventSystem been updated in the menu system to allow multiple players? That is really needed to allow player-specific menus (like Rocket League, for example). Thanks!

So currently setting “Active Input Handling” to “Input System Package (New)” and then installing the HDRP package throws exceptions in a couple of places in HDRP (DebugUpdater & DebugManager) as it is using the old UI system (despite its Debug setting being disabled).
Also, creating a UI Canvas (which creates an EventSystem) will throw similar exceptions as it attempts to poll the mouse using the old system. Is there any documentation on how to properly set up a UI with the new input system?

Enjoying it so far! Is there a tutorial and/or documentation for the Player Input Manager? I’ve got it working to pick up both controller and keyboard, and it’s spawning multiple characters. However, the mouse button actually spawns a separate character as opposed to only one. Is that because the keyboard and mouse are two different devices, and thus it thinks they’re two players? How can I get around that?

The editor to click together input actions is genius and I’m happy with what you’ve created for the most part.

One thing that holds back the usefulness of the new Input System is the missing functionality to poll for “GetButtonDown” and “GetButtonUp”.

I’m aware this can be achieved by subscribing to the actionTriggered events to implement your own GetButtonDown/GetButtonUp code.

However, this is such basic functionality that I expect from an input system, that I believe it should be part of the Input System itself, just like it was in the old system.
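
For reference, here is a rough sketch of that kind of per-frame polling with the preview API, instead of subscribing to events (the action and key used are just examples):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Polling in Update(), roughly mirroring GetButtonDown/GetButtonUp.
public class PollingExample : MonoBehaviour
{
    public InputAction jumpAction; // assumed to be bound in the Inspector

    void OnEnable()  { jumpAction.Enable(); }
    void OnDisable() { jumpAction.Disable(); }

    void Update()
    {
        // Device-level press/release polling:
        var kb = Keyboard.current;
        if (kb != null && kb.spaceKey.wasPressedThisFrame)
            Debug.Log("Space down this frame");
        if (kb != null && kb.spaceKey.wasReleasedThisFrame)
            Debug.Log("Space up this frame");

        // Action-level: triggered is true in the frame the action fired.
        if (jumpAction.triggered)
            Debug.Log("Jump triggered");
    }
}
```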

Very happy to see that different devices’ inputs are standardized. And it’s available via the Package Manager, so it’s not dependent on Unity updates.
Not so happy about the bloat on top of input. Mapping, assigning input to controllers, split screen? There’s just too much assumption, i.e. code, on top of input.
For my game-building experience I want to access the basics (input events, standardized controller mapping) and then I want to build my action mapping and how the game reacts on top of that.
This is way too much. I don’t want assumptions about which controller is available, and which to connect next, made automatically. There is so much here I don’t want to use.
It feels very messy and I will have to jump through hoops just to use basic input.
Could you perhaps separate it into more packages? Like a ‘basic input package’ and then others like ‘local multiplayer split screen input’…
I do not wish to rain on your parade, just my honest feedback. I’ll stay with the old input + the InControl asset (to standardize controller input) as it is.
Also, I may be completely wrong about all of this – since I haven’t downloaded and tried it for myself, just going on this blog post and the keynote video. But I’m not inclined to download it either after seeing this.

This really does seem like a waste of resources. Everyone uses Rewired anyway. These resources would have been better spent adding multi-threading support.

This is great news.

I still haven’t tried this new package so I have a rather silly question: does the new Input System provide some sort of real-time solution for remapping keys/buttons on the fly (say, in a Settings menu of a game) and then permanently saving those changes? This has always been one of the trickiest things to achieve and I’d like to know if this has been worked on for the new approach.

i.e. a key/button polling system for real-time usage (I saw in the Unite video above that the new touch input system comes with it, but I’m unsure about regular keyboard/controllers).

Nice work, it’s shaping up real good.
I love the callback setup.
Keyboard.current.aKey.wasPressedThisFrame sounds a bit too long though, and I think it also misses some hits.

I hope that more than 20 buttons are supported. The Logitech Driving Wheel (G29) has some more buttons… so we could not use the current input system of Unity. Instead we had to use Rewired.

Is ForceFeedback supported by the new input system?

OK this is kind of an advanced system – I mean generating C# files?!
Will there be a lightweight version where you can just handle keyboard, mouse/touch, and generic controller input, or do you use the old system for that? Because I think you need a lightweight, simple-to-use level above this for quick get-up-and-go where you do not need to add lots of inputs and generate…

Does this system pass console cert out of the box? What kind of support does it have for binding to console profiles, etc.? That has been by far the hardest input issue I’ve had to deal with.

Was excited to check this out, hoping it would be an easier way to set up input actions. The Simple Demo package scene “SimpleDemo_UsingPlayerInput” does not work correctly. Action methods are not selectable from the SimplyController_UsingPlayerInput.cs script; additionally, they are labelled as missing yet are still called in Play mode.

This is odd given that some responses to comments imply that this method should be functioning correctly when it is not.

I am not a negative person. So the feedback I am about to give you needs to have some weight.

What I like: I like the editor windows. They are better than the current input system.

What I don’t like:

It has taken an entire team 2 years… to get this far. I am blown away by how terrible this is. The point of technology is to eliminate work. You have somehow created a system that adds to my workload if I use it. You have nearly 20 steps from start to finish to implement and use 1 action. You need to generate C# files?!?! The generated output is unreadable trash. It isn’t even intuitive. You require me to set up C# events every time I want to use this now? You need to fire your architect and replace them.

I’m guessing you have a team of, what, 5-10 people? – 2 years of budget. That adds up to 2 million dollars of budget for this project just paying salaries.

This is what 2 million dollars looks like… Try again. This is infuriating. Clearly the state of Unity is going downhill quickly. I think it is time for a big re-organization shake-up.

“The point of technology is to eliminate work.”
– It adds work in the case of single-platform, single-controller-type games.
– It adds initial work in more complicated scenarios as well, but in the long run it makes supporting multiple platforms and multiple controllers manageable. Just look at ‘Rewired’ in the Asset Store. It’s pretty much the same thing. And look at its price and feedback. There are obviously people who need it and can’t imagine using the basic input system that was the single option in Unity until recently.

“You need to Generate c# files?!?! The generated output is unreadable trash. It isn’t even intuitive. You require me to set up c# events every time I want to use this now? You need to fire your architect and replace them.”
– You can easily achieve strongly typed input polling that way, eliminating human error as a result. That is simply good practice!

“Im guessing you have a team of what 5-10 people? – 2 years of budget. That adds up to 2 million dollars of budget for this project just paying salaries.”
– You have absolutely no idea how many people are working on this. Don’t assume. It could be a single engineer and two junior devs…

“This is what 2 million dollars looks like…Try again. This is infuriating. Clearly the state of unity is going downhill quickly. I think it is time for a big re-organization shake up.”
– No… they did a stellar job so far. Took them long enough for sure, but they are almost there. As a long term user of Rewired, I’m glad the system is pretty similar = pretty darn great.
You simply have no idea what you are talking about. You are most likely making a simple game (from the input perspective). You would be jumping with happiness otherwise.
The people working on this are probably reading this comment section, so please refrain from insulting them. Especially when you are so ignorant as to give feedback on something you don’t even understand.

“– No… they did a stellar job so far. Took them long enough for sure, but they are almost there. As a long term user of Rewired, I’m glad the system is pretty similar = pretty darn great.”

This is often referred to as “Skating to where the puck is”.

Sadly it’s true. It seems like the more money Unity gets, the slower development becomes. In recent versions the cloth physics are a disaster, and the download section shows 2019.2.4 as the latest version when the latest is actually 2019.2.9.

Awesome. I’ve been using the new input for a while. It replaces Asset Store solutions because those have poor XR support, don’t work natively with DOTS, and don’t work at editor time for tools and more.

So yeah, it’s a great solution. I understand the new XR stuff will be merged going forward?

In any case – excellent work Unity. Thanks for making a better and native solution that works everywhere and not just standalone builds.

If you spent less time trolling like a child, perhaps you could contribute? In any case, I’ve been using the new input for a while and it doesn’t suffer the same problems as Rewired does, such as protracted assembly compile times, lack of in-editor support (it’s runtime only) and more. Rewired is great for your tiny scenario, but it’s not going to be able to handle XR/VR, DOTS, editor tools, and first-party support before a device is even out (which Unity can very much do).

But again, if you had knowledge and experience you’d know that’s why they can’t buy rewired. But you don’t so you should probably just work on your game before you sound even more silly.

Having used this for the past 3-4 months, it’s great! I wish, though, that there were an easier story around polling. Our game requires a lot of fine-grained control over exactly how inputs are processed, and having events be the main focus makes this really tricky. I would love a similar api to the Actions api, but instead of callbacks, they could just be polled for a value! :D

Really nice! This blog post actually answered a lot of my questions, like the info about console support. I tried the Tanks demo with the new input system and the only problem I had was that it didn’t work until I restarted the editor; maybe you should force a restart.

Is it now easier to display the input button UI on the canvas? For example, if the UI requests the jump button, can it check for a certain device and return the jump button icon depending on the device?

Hi! What about force feedback motors? Currently I only see some haptic functions with rumble motors.
Any plans to add this feature? The only asset that supports force feedback on steering wheels is the Logitech SDK, and it works only with Logitech wheels and only on certain driver versions.
Other assets that claim to work with force feedback motors lie…

I noticed a mention of XR… In the old system one had to have a custom Input Module to interact with UI. Basically the player had to raycast to a collider which represents the Canvas space and then translate the collision position to a location on the UI. It is very cumbersome; has this been addressed in the new system?

Sorry for the duplicates… the server was slow… in the meantime I found the answer.

It looks like it does!!! JOY!

https://docs.unity3d.com/Packages/com.unity.inputsystem@1.0/manual/UISupport.html

Tracked Device Position: An Action delivering a 3D position of one or multiple spatial tracking devices, such as XR hand controllers. In combination with Tracked Device Orientation, this allows XR-style UI interactions by pointing at UI selectables in space.

Tracked Device Orientation: An Action delivering a quaternion representing the rotation of one or multiple spatial tracking devices, such as XR hand controllers. In combination with Tracked Device Position, this allows XR-style UI interactions by pointing at UI selectables in space.

Tracked Device Select: An Action used to submit “clicks” on UI selectables coming from a spatial tracking device. For example, you can map this to the trigger button on an XR controller to allow XR-style UI interactions in combination with Tracked Device Position and Tracked Device Orientation.

This remains spectacularly well thought out.

Do you think you could add invariant and variant input frequencies separate from fixed update and frame update? Examples of where this is needed:

1. Pens can do updates at 240fps and many are providing all sorts of facilities to use this for curve and motion prediction and smoothing.

2. Games and apps can now run much slower than 60fps in menus and other parts to save battery life, but touch response should not be slowed in all these cases, and running through a rapid fixed update cycle to detect touch will counter some of the sleep-ish gains possible from slowing the app/game.

This might require surfacing input as events rather than using a polling system. On systems where this is possible, this is a vastly superior way of managing input, in every single way.

Some devices surface inputs when they happen, meaning there’s an opportunity to interrupt and do something else regardless of position in the current game loop. Like forcibly come out of a slower rendering state. Further, a great need for this exists in music games/instruments, wherein many people have touch-sound latency interpretation much higher than their touch-visual acuity.

Does this new Input System also cover multiple controls for different players?
Let’s take the Xbox as an example:

You’ve got 1-4 players playing locally (couch co-op). Does the Input System allow me to control which character will move with which controller? Or one level higher: can I control which player can navigate through the UI with which controller?

In this scenario a player is associated with a controller, and if one player wants to configure “his options” via the UI, that is at the current state a very difficult use case (I have not solved it yet, but I guess a custom Input Module should be capable of it… somehow). Will the new Input System cover these scenarios as well?

We are using Rewired at the moment. Is this a better solution? One feature we like with Rewired is the support for a lot of different input devices. Will this input system work with all those devices as well?

I used Rewired up until this. This is as good as Rewired; IMO it’s better as it’s built in and stops you relying on a 3rd-party asset.

Ugh, no, not really. The amount of work required with this is far higher than with Rewired. The C# generation alone is enough for me to avoid this until they do some serious UX work.
