Introducing Unity 2019.1
The first TECH Stream release of the year – Unity 2019.1 – is now available. It includes many new production-ready features such as the Burst Compiler, the Lightweight Render Pipeline (LWRP), and Shader Graph. Also, there are numerous innovations for animators, mobile developers, and graphics experts, and multiple updates that streamline project workflows and simplify Editor tasks.
In the next few weeks, we will also release the 2018.4 Long-Term Support (LTS) version of Unity for those of you with projects about to ship who wish to lock in production on a rock-solid foundation for an extended period.
Unity 2019.1 is packed with more than 283 new features and improvements. At the beginning of this post you will find a summary, followed by a detailed walk-through, of the major new features. If you’re eager to install and begin using Unity 2019.1, consider starting the download (click the button below or access via the Unity Hub) while you read this post.
Graphics and lighting
In Unity 2018.1, we introduced the Scriptable Render Pipeline (SRP) and Shader Graph in Preview. With Unity 2019.1, we have removed the Preview label and recommend LWRP and Shader Graph for production. Unity 2019.1 also brings artists additional functionality and platform support in the GPU Lightmapper (Preview), plus a long list of improvements to the High-Definition Render Pipeline (HDRP, Preview) and the Post-Processing Stack (Preview). The Heretic, a new short film by Unity’s award-winning Demo team, premiered at GDC. The demo is built on Unity 2019.1 and leverages the SRP architecture. Using the latest installment of HDRP with its integrated Post-Processing Stack, the team achieved a cinematic look that closely emulates how physical cameras work – and renders in real time.
We continue to expand our focus on artist tooling. In this release, we’re introducing runtime animation rigging, which gives you greater artistic control of your animations. We also have made improvements to our audio, video, DCC, and world-building tools. Finally, Timeline is now a validated package, and the new Timeline Signals feature offers an easy way for Timeline to interact with objects in the scene.
Mobile and other platforms
This release brings a number of improvements to mobile, including the ability to patch the app package instead of rebuilding it. This allows you to perform faster iterations during development. We’re also introducing Mobile Adaptive Performance (Preview), which provides you with data about thermal trends, including information on whether your game is CPU- or GPU-bound at runtime, and debugging and workflow improvements for mobile game development in general. Finally, the Unity Editor for Linux is now in Preview.
Performance and programmer tooling
We continue our progress building the high-performance multithreaded Data-Oriented Technology Stack (DOTS) with our Burst Compiler coming out of Preview in 2019.1. You will also find a range of other DOTS-related tools that made it possible to create the massive Megacity demo, which is now available for download here.
In just two months, our DOTS team and two artists from our FPS Sample group produced this futuristic cityscape.
We are also introducing a complete physics solution for DOTS-based projects in Unity, developed in collaboration with Havok. Other improvements include clickable stack trace links that take you to the source code line for any function call listed in the stack, and a text-based search tool to filter your console entries. We’re also introducing the new Incremental Garbage Collector as an experimental alternative to the existing Garbage Collector.
We love great performance at runtime, but high performance is equally important when you’re working in the Editor, so we’re continuing to focus on improving the workflow. With the Shortcut Manager, we’re introducing an interactive, visual interface and a set of APIs to make it easier for you to manage Editor hotkeys, assign them to different contexts, and visualize existing bindings in one interface. With the new SceneVis controls, you can now quickly hide and show objects in the Scene View, without changing the object’s in-game visibility. You can now also use UI Elements for extending the Editor.
2018.4 (LTS) available in the next few weeks
Unity releases comprise a TECH stream and a Long-Term Support stream (LTS). In 2019, we will have three TECH stream releases: 2019.1, 2019.2 (summer), and 2019.3 (late fall). 2019.1 is the start of the new TECH stream and provides access to the latest features.
The LTS release doesn’t have any new features, API changes or improvements. It’s simply a continuation of the 2018 TECH stream, with updates and fixes. That’s why we call it 2018.4, while this year’s TECH stream begins with 2019.1. The LTS stream is for users who wish to continue to develop and ship their games/content and stay on a stable version for an extended period. It addresses crashes, regressions, and issues that affect the wider community, such as Enterprise Support customer issues, console SDK/XDK issues, or any major changes that would prevent a large number of users from shipping their game. Each LTS stream will be supported for a period of two years.
The 2018-LTS is currently undergoing tests and is expected to be released in the next couple of weeks following 2019.1.0.
What’s new in Unity 2019.1
Give me all the details
The Mobile Notifications Preview package helps you implement retention mechanics and timer-based gameplay by adding support for scheduling local repeatable or one-time notifications on iOS (from iOS 10) and Android (4.1 and above). It’s available as a package, and you can find more information here.
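As a minimal sketch of the package's Android API, the snippet below registers a channel and schedules a one-time local notification; the channel id, notification text, and fire time are placeholder values:

```csharp
// Sketch using the Mobile Notifications preview package (com.unity.mobile.notifications).
using System;
using Unity.Notifications.Android;
using UnityEngine;

public class NotificationScheduler : MonoBehaviour
{
    void Start()
    {
        // Android notifications must be sent through a registered channel.
        var channel = new AndroidNotificationChannel
        {
            Id = "reminders",                 // placeholder channel id
            Name = "Reminders",
            Importance = Importance.Default,
            Description = "Timer-based gameplay reminders",
        };
        AndroidNotificationCenter.RegisterNotificationChannel(channel);

        // Schedule a one-time local notification an hour from now.
        var notification = new AndroidNotification
        {
            Title = "Your energy is full!",
            Text = "Come back and keep playing.",
            FireTime = DateTime.Now.AddHours(1),
        };
        AndroidNotificationCenter.SendNotification(notification, channel.Id);
    }
}
```

Repeatable notifications follow the same pattern with a repeat interval on the notification.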
The Preview version of Adaptive Performance is now available for Unity 2019.1.
One of the biggest challenges for mobile developers is building games that look beautiful and play smoothly without overtaxing the hardware, which leads to throttling (poor, inconsistent performance) and shortened battery life. Extending battery life and reducing thermals allow gamers longer play time, which improves user retention and ultimately makes games more successful. Unlike on a PC or console, mobile hardware requires a delicate balance: fully utilizing a device’s capabilities quickly compromises performance.
To solve these problems, we partnered with Samsung – building on top of their GameSDK – to create Adaptive Performance, which is available for you to optimize projects for the Galaxy S10 and Galaxy Fold.
Read more about Adaptive Performance here.
The Hub now lets you install all the required components for Android as part of the Android Build Support option, so you’re sure to get the correct dependencies and don’t have to install anything else. If you’re an advanced Android user, you can still install and configure components manually and use Android Studio. Also, note that as of 2018.3, Android Build Support comes with its own Java Runtime based on OpenJDK.
Android Logcat Package is a utility compatible with Unity 2019.1 for displaying log messages coming from Android devices in the Unity Editor, making it easier to debug by controlling and filtering messages right in Unity.
To perform faster iterations during development, you can use the Scripts Only Build option in the Editor. This lets you skip many steps in the build process and recompiles only the scripts, then builds the final package and deploys when you select Build And Run.
We have extended this feature so it lets you patch the app package (APK, Android only) on target devices instead of rebuilding and redeploying it. So when you’re iterating on your C# code, only recompiled libraries are sent to the device. Note that a complete build of the project must be available before Unity can execute a Scripts Only Build.
AR Foundation lets Unity developers quickly get started building AR projects. You choose which features to include in your experiences while building just once to deploy across both ARKit and ARCore devices. Available in Preview from the Package Manager, it wraps ARKit and ARCore low-level APIs into a cohesive framework that also includes additional features to help developers overcome the biggest AR development challenges.
This feature lets you pass data between a device and the Editor so you don’t have to build to the device each time you want to test functionality. It includes Session Recording & Playback, which lets you record on-device sessions and play them back in the Editor to build robust tests for your application or iterate on visualizations.
Available on GitHub, this collection of Scenes, Prefabs, and helper components is built on AR Foundation to demonstrate how you can do plane visualization, object placement, and more. It also includes an example of how to use LWRP with AR Foundation. It contains all the foundational pieces you need to start building so you can get your AR projects up and running fast.
LWRP support for AR and VR
AR Foundation support for LWRP is available as a Preview package, letting you use LWRP for AR experiences. Also with this release and in the latest LWRP verified package, you can now leverage the Shader Graph, as well as LWRP’s performance optimizations, to build VR experiences for all VR platforms that Unity officially supports. Learn more about our updates to LWRP in the Graphics section.
Stereo rendering mode fallback
When stereo instancing is not supported on the target device, stereo rendering automatically falls back to single-pass (double-wide) rendering. You can now safely use the more performant Stereo Instancing rendering mode without having to worry that the graphics API of a specific device does not support it. With earlier releases, the stereo rendering mode would fall back to multi-pass rendering.
Post-processing support for Stereo Instancing
Included in this release and the latest Post-Processing package, all post-processing effects that are viable for VR now work with the Stereo Instancing rendering mode.
Built-in support for Magic Leap (Lumin OS)
Support for building to Magic Leap One is now included in this release. This means that rather than having to use a special Technical Preview build of Unity for Magic Leap development, you can use this Unity release.
With this release, WebAssembly is the default output format for Unity WebGL. In addition, asm.js, which was deprecated in 2018.3, has been removed from the Editor UI. To reflect these changes, the asm.js-specific Use PreBuilt Engine build option is no longer available. WebGL Player Settings have also been updated: Linker Target and Memory Size are set to WebGLLinkerTarget.Wasm and 32MB respectively, and they have been removed from the Editor UI. However, it is still possible to modify these settings via Editor scripts.
In this release, we also introduce experimental WebAssembly multithreading, which can be enabled via PlayerSettings.WebGL.threadsSupport. See this forum post for more information.
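As a sketch, the settings removed from the UI can still be driven from an Editor script; the menu item name and the values below are arbitrary examples:

```csharp
// Editor-only sketch: adjusting the WebGL settings that no longer appear in the UI.
using UnityEditor;

public static class WebGLBuildConfig
{
    [MenuItem("Build/Configure WebGL")]
    static void Configure()
    {
        PlayerSettings.WebGL.linkerTarget = WebGLLinkerTarget.Wasm; // default in 2019.1
        PlayerSettings.WebGL.memorySize = 64;                       // initial heap size in MB
        PlayerSettings.WebGL.threadsSupport = true;                 // experimental multithreading
    }
}
```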
Unity Editor for Linux is now in Preview. You can get the latest builds from the Unity Hub. There are still some rough edges, but you will continue to see improvements over time. Bringing the Unity Editor for Linux from experimental into Preview means that we are now on a path to a fully supported version by the end of the year. We’re prioritizing support for the following list of configurations:
- Ubuntu 16.04, 18.04
- CentOS 7
- x86-64 architecture
- Gnome desktop environment running on top of X11 windowing system
- Nvidia official proprietary graphics driver and AMD Mesa graphics driver
- Desktop form factors, running on device/hardware without emulation or compatibility layer
We recommend you use one of the supported configurations above for the best development experience.
We have introduced some enhancements to our support for Async Compute for consoles. A greater range of Command Buffer script functions are now valid for Command Buffers targeting async compute queues (for example, functions that set global shader data or manage temporary render targets). We have also improved error handling for Command Buffers targeting async compute queues, giving immediate feedback if invalid Command Buffer script functions are used; this makes debugging easier.
With Command Buffer Chaining & Concatenation, we optimize how work is submitted to the GPU, specifically for native graphics jobs where the previous method incurred a small GPU overhead.
We’ve added support to UWP for ARM64 devices. Simply choose ARM64 for your target architecture and deploy to Windows-based ARM64 laptops.
Based on user data and customer research, the Display Resolution Dialog is “disabled” by default as of this release. You can still enable it via the Display Resolution Dialog drop-down menu in your Project Settings. You’ll find it in Player, Resolution and Presentation, in the Standalone Player Options group. More information about the evolution of the Display Resolution Dialog will be shared in the near future.
In 2018.1, we introduced the Burst Compiler, a new LLVM-based backend compiler technology that takes C# jobs and produces highly optimized machine code for your target platform. With this release, it’s out of Preview and available for production. With the Burst Compiler, you don’t need to do the hard work of low-level coding to get the performance gains that come with hand-tuned assembly languages. You can continue to write your code in C#.
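A minimal example of opting in: adding the [BurstCompile] attribute to a C# Job System struct is all that's needed (the job below simply scales an array in place):

```csharp
// Minimal Burst-compiled job: [BurstCompile] opts this IJob into the Burst Compiler.
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

[BurstCompile]
public struct ScaleJob : IJob
{
    public NativeArray<float> Values;
    public float Factor;

    public void Execute()
    {
        // Burst compiles this loop to optimized, vectorized machine code.
        for (int i = 0; i < Values.Length; i++)
            Values[i] *= Factor;
    }
}

// Scheduling works exactly like any other C# job:
// var job = new ScaleJob { Values = values, Factor = 2f };
// job.Schedule().Complete();
```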
DSPGraph is the new Audio rendering/mixing engine, built on top of Unity’s C# Job System. It’s completely extensible in C# and can be used with the Burst Compiler. In the Megacity project, it powers 100,000 uniquely scattered 3D/spatial sound emitters, including neon signs, air-conditioning fans, and cars, producing a rich, realistic soundscape.
Note that DSPGraph is an internal experimental API that we’re planning to polish and publish as a Preview package later this year. This will be the foundation of the upcoming Data-Oriented Technology Stack audio system (among others). Please join us in the new Data-Oriented Technology Stack audio forum if you’ve been exploring DSPGraph in the context of our Megacity project. It’s the perfect place to ask questions or share your audio needs with us.
Sub Scenes are part of the toolbox created for the Megacity project. This feature bridges the gap between GameObjects and DOTS by using GameObject Scenes as a grouping mechanism for batch-converting GameObjects to entities.
Sub Scenes are especially useful when you work on large-scale projects like Megacity, where millions of GameObjects are converted to entities. Since you only have to work on a limited number of Sub Scenes at a time, your project is much more manageable and performant in the Editor. We refer to this as a “hybrid” workflow. In addition to improving your Editor workflow, you can also use converted Sub Scenes as streaming units. They can be loaded and unloaded during gameplay, and are also loaded asynchronously in the Editor.
You can convert a group of GameObjects into entities by adding the SubScene component to a root GameObject. When you want to edit the GameObjects in a Sub Scene, just open it up and make your changes. The group of GameObjects is converted to entities automatically when you finish editing and close the Sub Scene.
Changes made to GameObjects in a Sub Scene don’t affect the root Scene, making it easy for several team members to collaborate on separate Sub Scenes simultaneously.
The Sub Scenes feature is part of the Entities package, which you can find in the Package Manager. The feature is currently experimental and undocumented, so use it with caution.
In just two months, our ECS team and two artists from our FPS Sample group produced Megacity, a futuristic cityscape alive with flying vehicles, hundreds of thousands of highly detailed game objects, and unique audio sources. They leveraged the Data-Oriented Technology Stack (DOTS), the name for all projects under the Performance by Default banner, including the Entity Component System (ECS), the C# Job System, and the Burst Compiler. Megacity shows how DOTS can be used today for complex productions, starting with Unity 2019.1, and the new Prefab workflows, which were introduced in Unity 2018.3. The demo is now available for download so you can start exploring the opportunities that DOTS provides for your future projects.
If you’re interested in learning how a few developers from Nordeus took on the Megacity demo to show how you can use DOTS and LWRP to easily scale a high-resolution PC project for mobile platforms, read our Nordeus case study. You can download the full project here.
At GDC 2019, we announced our partnership with Havok to build a complete solution for DOTS-based projects in Unity. If your project uses the new DOTS framework, Unity Physics (Preview) is your default physics system. This system is written using the C# DOTS framework and leverages the Burst Compiler and C# Job System to deliver high-performance simulations. By using a stateless design with no caches, the solver is much simpler, allowing us to build a more network-friendly physics system that can be easily extended, tweaked, and modified to fit your production needs. Unity Physics is available for Unity 2019.1 via the Package Manager.

In June 2019, we will also be offering the Havok Physics package as an integration for DOTS-based projects with very complex physics simulation needs. It uses the same C# DOTS framework as Unity Physics but is backed by the closed-source, proprietary Havok Physics engine written in native C++. We have built both Unity Physics and Havok Physics to use the same data protocol, which means you can author your content and game code once, and that data is shared between both systems. This lets you seamlessly swap between both DOTS-based physics solutions, or even use them at the same time in your project.
What if my project doesn’t use Data-Oriented Technology Stack?
If you’re currently using the GameObject and MonoBehaviour framework for your projects, then PhysX will be your default physics system. This won’t change, and we will continue to support and evaluate PhysX updates in Unity for GameObject/MonoBehaviour-based projects.
We have added a number of improvements to our PhysX solution for non-DOTS projects. For example, cloth can now apply its own gravity, independent of the scene gravity, for easier tweaking. We have also updated the physics debug view colors for a better experience and consistency with the gizmo colors. A new section in the Rigidbody component Inspector shows internal information useful for debugging, such as linear and angular velocities, center of mass, and inertia tensor. The new Physics.GetIgnoreCollision function lets you easily check whether collisions between two given colliders are disabled. The default maximum angular velocity of bodies has been increased from 7 to 50, which improves simulations with fast-moving objects as well as the resolution quality of ragdoll collisions in challenging configurations, since the solver can rotate bodies faster and satisfy constraints in fewer iterations.
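As a small sketch of the new query, GetIgnoreCollision reports whether collisions between two colliders have been disabled (for example, via an earlier IgnoreCollision call); the component and field names here are placeholder examples:

```csharp
// Sketch: checking whether a collider pair has collisions disabled.
using UnityEngine;

public class CollisionCheck : MonoBehaviour
{
    public Collider a;
    public Collider b;

    void Start()
    {
        Physics.IgnoreCollision(a, b, true);

        // New in 2019.1: query the current ignore state of the pair.
        bool ignored = Physics.GetIgnoreCollision(a, b);
        Debug.Log($"Collisions ignored: {ignored}");
    }
}
```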
Until now, only raycasts were supported in the multi-scene context. With this release, all scene queries are available in the multi-scene context. The Physics Debug view also supports multiple physics scenes, letting you see which physics scene a selected object belongs to. You can also view the objects belonging to a given physics scene.
We improved runtime performance when script debugging is enabled. Furthermore, the performance of code generated by IL2CPP has been improved by up to 20%.
We’ve improved the integration of Unity’s Profiler with external profilers. Unity’s Development build now generates markers for Android Systrace, allowing you to visualize named Unity event sections in the system-wide Android Systrace tool. You can then analyze your game in the context of OS activity like scheduling, CPU status, and other processes running in the system. Native Systrace support, formerly a plugin, is now part of Unity 2019.1. All managed threads are now visible in both the Mono and IL2CPP scripting backends, and all native Unity threads are exposed. Activity on threads is displayed in the Timeline view of the Profiler window, and we also automatically register all threads with the Profiler. We’ve also increased the default allowed memory usage for the Profiler to 4MB in Players and 64MB in the Editor. This lets you accumulate more data before streaming it out to disk or network, and reduces overhead. You can also control it with the “-profiler-maxusedmemory” command-line argument. Finally, we added the UnityEditor.Profiling.HierarchyFrameDataView API, which allows you to quickly traverse CPU profiling data for all threads and obtain all the information available in the Hierarchy view of the Profiler window, together with all the relevant metadata (e.g., GC.Alloc callstacks).
The Profile Analyzer is a new profiling package available in Preview. It complements the Unity Profiler’s single-frame analysis by adding the ability to analyze multiple frames at once. This is useful when you need a wider view of your performance, such as when upgrading Unity versions, testing optimization benefits, or tracking performance as part of your development cycle. It analyzes CPU frame and marker data pulled from the active set of frames currently loaded in the Unity Profiler, or loaded from a previously saved Profile Analyzer session. The analyzed data is summarized and graphed using histograms and box-and-whisker plots, which complement a sortable list of activity for each marker, including the minimum, maximum, mean, instance count, range, and the frame in which the marker first appeared.
In this release, we’re introducing the Incremental Garbage Collector as an experimental alternative to the existing Garbage Collector (GC). The Incremental Garbage Collector is able to split its work into multiple slices. That means that instead of one lengthy interruption of your program, the Incremental Garbage Collector will do multiple, much shorter interruptions. While this will not make the GC faster overall, it can significantly reduce the problem of GC spikes breaking the smoothness of animations in your project because it distributes the workload over multiple frames. To learn more, read our blog post here.
ScriptableObjects are now reloaded during asset importing. This means that if a ScriptableObject is loaded before an import and the underlying asset on disk has been modified, then the ScriptableObject will be reloaded and have the new values from the asset on disk after the import is done. Before this change, the ScriptableObject would have been unloaded after the import, resulting in the ScriptableObject being equal to null when compared using the equality (==) operator. This reloading only happens for ScriptableObjects and for nested Prefabs that are already loaded before a (re-)import. For more information about reloading ScriptableObjects, you can check the code example available here.
With this release, it’s now possible for package developers to conditionally depend on C# code in packages using the new Version Defines feature in the Assembly Definition File inspector.
Missing Assembly Definition File (asmdef) references are now ignored instead of producing a missing-reference error. This allows you to add references to asmdef assemblies that are optional.
Using the new Version Defines features in the Assembly Definition Inspector, you can define which C# preprocessor directives are set based on version ranges for packages and modules that are currently resolved in the project. This allows you to #if your C# code for features in optional packages.
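As an illustration, Version Defines are stored in the Assembly Definition File's JSON; in this hypothetical excerpt, the assembly name, referenced package, and define symbol are placeholders:

```json
{
    "name": "MyPackage.Runtime",
    "references": ["Unity.Postprocessing.Runtime"],
    "versionDefines": [
        {
            "name": "com.unity.postprocessing",
            "expression": "2.1.0",
            "define": "POSTPROCESSING_2_1_OR_NEWER"
        }
    ]
}
```

With this in place, code in the assembly can be guarded with #if POSTPROCESSING_2_1_OR_NEWER so it compiles only when a matching version of the optional package is resolved in the project.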
With this release, we’re reintroducing the ability to edit Prefab Assets in the Inspector once you’ve selected the Prefab in the Project view. This means that you won’t have to open up a Prefab in Prefab Mode or drag it to the Scene to edit it.
The Shortcut Manager gives you an interactive, visual interface and a set of APIs to make it easier to manage Editor hotkeys, assign them to different contexts, and visualize existing bindings. You can see which shortcuts are available by holding down Shift + Control; a list of all reserved and unreserved keys appears. You can also store hotkeys in custom profiles so they can be saved, shared, and migrated to other workstations. A new context system lets you register commands within a specific context for Editor windows. This enables tool developers to define custom actions and make them available as shortcuts. These shortcuts can be defined as context-aware so that they only become available within the correct context. Finally, you can visualize and address any conflicts between shortcuts: if multiple commands or packages use the same binding, the Editor will trigger a notification and offer options for handling the conflict, letting you remap accordingly.
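A hedged sketch of the API side: a static editor method can be registered as a rebindable shortcut via the Shortcut attribute. The shortcut id, default key, and method body below are arbitrary examples:

```csharp
// Editor-only sketch: a custom action exposed through the Shortcut Manager.
using UnityEditor.ShortcutManagement;
using UnityEngine;

public static class MyShortcuts
{
    // Appears in the Shortcut Manager under "Custom/Log Selection" and can be
    // rebound by the user; the default binding here is Ctrl/Cmd + T.
    [Shortcut("Custom/Log Selection", KeyCode.T, ShortcutModifiers.Action)]
    static void LogSelection()
    {
        Debug.Log("Shortcut invoked");
    }
}
```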
We updated the Editor console with clickable stack trace links that take you to the source code line for any function calls listed in the stack, and a search tool to filter your console entries.
The new Quick Search tool makes it easy to search across multiple search providers (e.g., Assets, Hierarchy, Settings) in the Editor. It’s also extensible for developers who want to include their own search providers. The feature is currently in Preview. To learn more about it, see the forums and be sure to share your feedback if you try it.
The new Animation Rigging package gives you more artistic control over your animations. You can use a set of predefined animation constraints to manually build a control rig hierarchy for a generic character. At runtime, rig constraints are converted to a list of Animation jobs that are appended as post-process operations to the controller playable graph. The new Animation Rigging package is based on Animation C# jobs, which enable you to set up safe multithreaded rigs that can procedurally control deformations, simulate pseudo physical behaviors or secondary motion, and correct overall animations.
You can use the library of predefined constraints included in the package to construct different rig setups with distinct purposes, and then dynamically blend them at the appropriate time during gameplay to control final animation throughput.
Finally, the package is extensible so you can write custom constraints tailored to your specific production requirements. For more information, watch our GDC 2019 talk. Join us on the forums to discuss the Animation Rigging package!
The new UI/UX architecture includes many features that help with visibility and searching, such as Sort & Search and Reveal in Explorer/Finder. You can now resize the main Hub desktop window, manage your licenses directly from the Hub, and install and run the Hub without activating your Unity license first. This release also includes support for language localization and internationalization, and some improvements in how network connectivity checks are handled. For more information, please check out our forum post.
Since you can manage your projects through the Unity Hub now, the built-in project Launcher is no longer included as part of the Editor. This is an important change to how you open/create projects and how licenses are managed within the Editor, so we would appreciate your feedback as we iterate through this transition. Note: The Editor command-line interface is unaffected by this change and will continue to work as expected for project management and license activation. If you haven’t already downloaded/installed it, you can get the latest Hub release here. If you already have the Hub installed, please make sure it’s updated to at least v1.3 (launching/restarting the Hub will trigger the auto-update process).
You can now visualize your packages and core dependencies in the Editor, install a package directly from a GitHub repository, and manage private and Unity-hosted registries side by side. This release also includes support for Assembly definition references (see the Version Defines section). Join the discussion on the Unity Package Manager forum and check the 2019.1 manual for more information.
This is a new retained-mode GUI system that enables developers to quickly create and edit UI layouts and styling. The new GUI system borrows concepts from the web’s CSS, jQuery, HTML DOM, and Events system to make it easier to create and optimize UI in Unity. It also provides improved performance and many new features, including stylesheets and dynamic/contextual event handling. We built the new system with performance and scalability in mind, so it has a conventional and comprehensive C# API that enables developers to build, modify, and interact with the UI. The familiar C# API, Events system, CSS and XML import formats make it easy to build user interfaces. UI Elements replaces IMGUI for extending and creating Editor UI, and will replace uGUI for creating runtime UI in future releases.
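A minimal sketch of the retained-mode approach for Editor UI: instead of redrawing every frame in OnGUI, you build the hierarchy once by adding elements to the window's root. The window name and labels are placeholder examples:

```csharp
// Minimal UIElements editor window: UI is built once, not redrawn per frame.
using UnityEditor;
using UnityEngine;
using UnityEngine.UIElements;

public class HelloWindow : EditorWindow
{
    [MenuItem("Window/Hello UIElements")]
    public static void Open() => GetWindow<HelloWindow>("Hello");

    void OnEnable()
    {
        var label = new Label("Hello from UIElements");
        label.style.unityFontStyleAndWeight = FontStyle.Bold; // inline style, CSS-like

        // Elements persist in the hierarchy; events are handled via callbacks.
        rootVisualElement.Add(label);
        rootVisualElement.Add(new Button(() => label.text = "Clicked!") { text = "Click me" });
    }
}
```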
You can now manipulate particle data using the C# Job System without having to copy particle data between script and native code. To set it up, create a job struct based on IParticleSystemJob, attach it to the Particle System using SetJob, and it will be called from a thread after the native particle update has executed.
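The workflow above can be sketched roughly as follows. This is a heavily hedged example: the member names on the job data struct (count, velocities and its per-axis arrays) are assumptions based on the preview API described here and may differ in your Unity version:

```csharp
// Hedged sketch of a particle job: runs on a worker thread after the native update.
using UnityEngine;
using UnityEngine.ParticleSystemJobs;

public class ParticleDamper : MonoBehaviour
{
    struct DampJob : IParticleSystemJob
    {
        public void ProcessParticles(ParticleSystemJobData particles)
        {
            // Assumed accessors: particle count and per-axis velocity arrays.
            var vy = particles.velocities.y;
            for (int i = 0; i < particles.count; i++)
                vy[i] *= 0.99f; // dampen vertical velocity, no managed/native copy
        }
    }

    void Start()
    {
        // Attach the job; it is invoked automatically each particle update.
        GetComponent<ParticleSystem>().SetJob(new DampJob());
    }
}
```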
This release includes a number of small improvements to mesh particles. For example, the mesh assigned to each particle can be queried and assigned from a script. The ParticleSystem.Particle struct now contains methods to get/set the mesh index. Custom Vertex Streams has a new Mesh Index stream, allowing you to send the mesh index to a shader. You can use this to write shader code tailored to individual meshes. The Texture Sheet Animation module contains a new Row Mode, which selects the row of the animation based on the mesh index of the particle. That allows you to assign specific animations to each mesh in the effect.
Production-ready in this release, Shader Graph is a node-based visual interface for building shaders. It lets artists easily customize how things look without writing any code. Shader Graph lets you drag and drop nodes to see the results in real-time. The instant feedback also makes debugging and fine-tuning simple, for shader experts and beginners alike. A key new feature of Shader Graph is Nested Sub Graphs, which let you visually create custom nodes. Sub Graphs are nestable too, so you can define custom content libraries for your project or studio. This gives technical artists flexible, non-destructive control over an entire shader pipeline, which fuels experimentation and creativity. Learn more about new Shader Graph features and recommended workflows in this GDC 2019 talk.
LWRP is production-ready in this release. LWRP is a prebuilt Scriptable Render Pipeline (SRP) that is optimized for delivering high graphics-rendering performance. It is highly configurable and allows you to control rendering settings globally or on a per-camera basis. It also gives you the flexibility to set up camera depth and color textures for custom effects, and it is integrated with Shader Graph. This highly extensible plug-and-play architecture lets you create custom render passes, and you can override the renderer to achieve specific effects. Using LWRP gives you flexibility and enables rendering scales between platforms. The source code is available on GitHub, allowing you to further customize LWRP. In 2019.1, we’ve also added Dynamic Scaling support with UI preservation, which helps you keep your UI crisp while rendering your game on mobile devices with high-DPI screens. We also added support for the SRP Batcher and several improvements to Particle Shaders, including Soft Particles and Distortions, along with improvements to Terrain Shaders and the baked Lit Shader. Also new in this release is a Custom Renderer system that enables greater customization. Finally, preliminary Visual Effect Graph support is available for Unlit Shaders, limited to compute-capable platforms. Learn more about LWRP in the Unity Manual.
HDRP is a prebuilt, high-fidelity Scriptable Render Pipeline designed to target modern, compute-shader-capable platforms. By design, it provides you with tools to create anything from games to technical demos at the highest resolution. In this release, we have added several new features and considerably improved the workflow for artists. One downside of these changes is that some previously authored data isn’t compatible with this version and will require reauthoring. To help you upgrade from 2018.3, we created a guide for the migration process; it’s available here. This version supports DX11 and DX12 for PC, Metal for Mac, Vulkan for PC and Linux, Xbox One, and PS4. HDRP will remain in Preview until 2019.3. Note: This release comes with package 5.7.2 of HDRP in the HDRP template. To take advantage of the features listed here, we recommend that you upgrade to package 5.12.0 or above after installing the template.
This release includes improved support for Linux and Vulkan APIs, including fewer artifacts. Some artifacts remain but the overall experience has improved.
Double-wide is a slow path for VR that renders two views side by side. A more optimized, single-pass instanced version will come with 2019.2. All HDRP effects are now supported, including refraction, distortion, subsurface scattering, decals, and volumetrics. For details on the supported features, see this article.
HDRP now uses a Color Buffer format of RGB111110Float instead of ARGBHalf, resulting in faster shader execution and overall improved performance. In 2019.1, we now have a fast path when there is only one directional light and simple materials. To save CPU time, motion vector objects aren’t rendered twice anymore with depth prepass. Distortion has been optimized with a stencil buffer, and shader variant stripping has been improved to reduce build time. 2019.1 also includes support for Software Dynamic resolution, which allows you to render the world at a different scale than the UI on every platform (support for Hardware Dynamic Resolution on supported platforms will come later). You just need to drive your desired resolution via a C# script. Lastly, transparent materials can now use a render-pass name of “Low Resolution” to allow them to be rendered at quarter resolution with similar visual quality. This is useful for improving overdraw performance of large particles.
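As a sketch of what driving the resolution from a C# script can look like: the `DynamicResolutionHandler` API and the policy enum shown here are assumptions based on later HDRP package versions (the class name, namespace, and heuristic are illustrative only), so check the documentation for the package version you use.

```csharp
using UnityEngine;
using UnityEngine.Rendering; // Handler namespace may differ per HDRP package version.

public class DynamicResDriver : MonoBehaviour
{
    // Returns the desired screen percentage; here a toy heuristic that drops
    // to 75% whenever the previous frame took longer than ~33 ms.
    static float ScalePercentage()
    {
        return Time.unscaledDeltaTime > 0.033f ? 75f : 100f;
    }

    void Start()
    {
        // Register the callback HDRP polls to decide the render scale each frame.
        DynamicResolutionHandler.SetDynamicResScaler(
            ScalePercentage,
            DynamicResScalePolicyType.ReturnsPercentage);
    }
}
```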
HDRP now comes with better support for multi-object editing across all its UI elements, better documentation, and better tooltips. It also adds support for Multi Viewport, which lets you render several cameras into the same target to achieve split-screen rendering and similar behaviors. The FrameSettings and the HDRP Asset settings have been refactored for faster computation and easier editing. There is now information about the impact of HDRP Asset settings on memory and shader variants. Support for the SMAA (SMAA 1X) anti-aliasing method has been added. It provides a middle ground between performance (FXAA) and quality (TAA). The After Post-Process render pass is now available for the Unlit Shader. HDRP renders objects that use this render pass after the post-processing pass, which means that post-processing doesn’t affect them. This is useful for rendering 3D UI, for example.
HDRP now contains a Debug option to freeze the camera for culling but keeps it movable from a rendering point of view, letting you see what is being culled for a given Scene view. There is also a Material PBR validator and an Emissive color override.
This wizard helps you configure project settings to work correctly when using HDRP. It highlights incorrectly configured items and provides Fix buttons to correct them. As well, you can set up new custom scenes.
Decals have been enhanced with better gizmo control, Shader Graph support, and Emissive support for opaque decals. The Recorder is now properly supported and allows you to record footage from HDRP. The Fabric lighting model for cotton wool has been improved to reflect the recent research from Sony Pictures Imageworks. Volumetric fog has been optimized and is more precise. We have also updated the Gizmo for density volume. Light layers, which allow you to tag lights and objects so only objects with the same tags receive lighting from a specific light, are now fully functional and correctly support shadow control. Diffusion Profiles for subsurface scattering have changed in 2019.1. Previously, the Diffusion Profiles List in each project was limited to 16 profiles. Diffusion Profiles are now individual assets that can be shared and distributed, and there is a limit of 16 profiles per view. The current list of profiles used in a view is controlled with volume settings. 2019.1 automatically migrates data from the old Diffusion Profile system to the new one, but not for Shader Graph; you will need to reauthor those diffusion profiles. This version also adds support for Motion Vectors on transparent materials. Transparent materials can write their own velocity, overwriting the previous content of the velocity buffer. This is useful for alpha-blended materials like hair.
2019.1 comes with several new Master Nodes for Shader Graph. The new HD Unlit Master Node gives you access to the full feature set that the cross-pipeline Unlit could not, such as distortion or render-pass selection. There is also a new AxF Master Node designed to support the X-Rite AxF measured material format. The AxF Material is only useful when coupled with the AxF importer that is part of our Unity Industry Bundle; the importer automatically populates all the settings for the AxF Material. A new Hair Master Node is also available. It relies on an artist-friendly, Kajiya-Kay-based lighting model that features better energy conservation and provides more flexibility.
Various new HDRP-specific nodes/behaviors have been added. It’s now possible to sample the Scene Color, including blurred mipmaps, to simulate rough refraction or distortion (color is only available for transparent objects). The Scene Depth node also lets you access raw, linear (between 0 and 1), or eye depth. A Depth Offset input on the Lit Master Node has been added to push the depth inward or outward in the direction of the view vector. This is useful when using the new Parallax Occlusion Mapping node to get shadowing from lights. In addition, all HDRP Master Nodes now support override of the baked GI. To enable it, use the Override Baked GI checkbox in the Master Node settings. It adds two new inputs on the master node: Baked GI and Back Baked GI. This allows you to provide your own baked GI for indirect diffuse lighting and transmission respectively; or, in combination with the Baked GI Node, you can modify it. The default value for the Baked GI property is equivalent to the default output from the Baked GI node.
We have made numerous improvements to lighting. Previously, lighting couldn’t use correct real-world/physical values because the exposure range and precision didn’t allow it. The lighting calculation now uses Pre-Exposure. This means that exposure isn’t applied at the end of the frame during post-processing but to the lighting itself, which greatly improves precision and permits high values for light intensity, such as for the sun. In addition, Sky, Emissive, and some Lights now use EV100 units instead of EV, which is the unit usually used for reference values in lighting charts. As this is the biggest lighting discrepancy with 2018.3, upgrading your project to this release means you may have to tweak some light intensities. Emissive properties on both the Lit/Unlit Shader and Lit/Unlit Master Node have been improved to support EV100 or Luminance units, with an additional Exposure Weight control and Emission node. This control allows you to force an object to bloom even when correctly exposed (for example, in bright daylight). Rectangular area lights have been enhanced to support cookies and approximate area shadows. This is a costly feature and should be used mainly for high-quality mode or cinematics. Shadow mask support has been added, providing high-quality baked soft shadows while keeping the specular highlight.
The GPU now bakes Reflection Probes, which speeds up baking. As well, Reflection Probes are integrated with the lighting workflow to streamline the Reflection Probe baking process, letting you bake all loaded Reflection Probes from the lighting window.
Support for real-time planar reflection has been added. During playback, HDRP only renders visible real-time Reflection and Planar Probes, which now have individual controls in FrameSettings for both real-time or offline rendering.
In 2019.1, we integrated post-processing directly in HDRP and included a custom set of compute shader-based post-processing effects specially made with performance and quality in mind for high-end console and desktop platforms. This new set of post-processing tools is compatible with the RT Handle system and supports the dynamic resolution features. Note that the new post-processing settings aren’t compatible with Post-Processing Stack V2 (PPv2), which means you have to reauthor all post-processing when you upgrade to 2019.1. It also means that HDRP no longer supports PPv2. Post-process anti-aliasing, which comes with FXAA, SMAA, temporal anti-aliasing, and 8-bit dithering, helps to smooth out gradients and remove 8-bit color banding. You set them directly on the camera. Chromatic aberration, Lens Distortion and Vignette are the same as in PPv2. Film Grain has been reworked to use grain lookup textures instead of procedural noise. We also added a new Panini Projection effect.
Bloom now uses a threshold based on the pre-exposed value. This means that only objects that are overexposed will bloom, instead of objects that are above a specific intensity. Color Grading has an improved version of the “HDR Grading” mode from PPv2. The large color-grading panel has been split into separate volume components to reduce clutter in the Inspector. Depth of Field has been completely reworked, and now provides parametric aperture-shape control, allowing you to easily configure the number of blades, their curvature, barrel clipping, and anamorphism. The effect is now resolution-independent. Motion Blur has also been completely reworked to improve quality and performance. This includes novel algorithmic modifications that permit more precise and wider blurs while reducing artifacts.
In 2019.1, you can now access the depth and normal buffers from HDRP. As well, visual effects can access internal HDRP rendering buffers such as depth or color for the main camera and use them as input textures during the simulation pass. This lets you easily set up features like depth buffer collision and scene morph with particles.
The Visual Effect Graph is an easy-to-use, flexible node-based system, inspired by the leading tools for VFX in film. It lets you create stunning effects for games and other creative content quickly. In 2019.1, we added several improvements and new features, and produced various samples to help you get started creating next-gen visual effects. A new Prewarming feature allows you to pre-simulate a portion of an effect up to a certain time, so that it starts in its fully developed state. This can be used for effects such as a stack of smoke that has built up over time. We also updated the Light Probes and Light Probe Proxy Volumes. Noise functions have been improved with Perlin, Value, and Cellular noises, and their curl variations. Spawn time and spawn count operators now enable you to count the number of particles spawned at once in the previous frame.
The GPU Lightmapper is now in Preview, with additional functionality and platform support. It’s enabled on macOS and Linux and supports double-sided GI flags on materials as well as shadow-casting and receiving on meshes.
The GPU Lightmapper now uses the same GPU as the Editor by default to ensure that the high-performance dedicated GPU is used. If necessary, you can change to a different GPU by using the command line; see the documentation for details.
The Optix AI Denoiser is a deep-learning-based denoiser trained on a library of path-traced images. It’s a substantial improvement over filtering options, especially on low sample counts, and is resilient to leaking and blurring. It can be combined with filters to achieve even smoother lightmaps. Using the new Denoiser helps reduce sample counts substantially to achieve much faster bakes than previously possible. It’s currently only available on Windows and with compatible Nvidia GPUs.
MIS Environment is a new method for sampling the most important areas in the cubemap/HDRI. This technique avoids shooting a large number of GI rays into the hemisphere, and instead focuses them on the important areas such as bright spots (like the sun). With this feature, it’s possible to bake scenes with measured HDRI environment maps that are highly non-uniform. A new Environment Samples parameter has been added to the Lighting window. This value controls how many rays are traced directly into the environment per lightmap texel.
Light probe Gizmos are now affected by exposure correction. This makes it easier to iterate on light probes when using HDRP and high-intensity lighting. With the new Limit Lightmap Count parameter, you can specify a maximum number of lightmaps generated for a specific group of objects. This is particularly useful when you’re building games for mobile platforms, where resources may be limited. Realtime GI: Async Readback removes the need for the CPU to wait for a GPU read-back, which can improve performance and reduce CPU spikes. Auto Generate is no longer the default on new scenes. We have also added a link in the bottom status bar to show whether or not you are in Auto Generate mode.
LookDev, an experimental feature for viewing assets, has been removed. We will be improving it and adding it back in for SRP (with LWRP and HDRP support), later this year.
The Unity Editor defers shader compiling until rendering needs a particular shader variant for the first time. However, this could cause Editor stalls because the shader compiler roundtrip can take significant time. The new Async Shader Compilation feature eliminates these stalls by decoupling compiling from rendering and using a plain cyan dummy shader as a replacement until compilation has finished. This is an Editor-only feature that doesn’t affect your game; it simply keeps the Editor responsive while shaders compile in the background.
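For editor tooling that needs deterministic rendering (for example, screenshot capture), you can turn the behavior off from a script. This sketch assumes the `ShaderUtil.allowAsyncCompilation` property exposed for editor scripting; the menu path is hypothetical.

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class AsyncShaderCompileMenu
{
    // Toggles async shader compilation for the current Editor session.
    // When disabled, the Editor blocks on compilation as in earlier versions.
    [MenuItem("Tools/Toggle Async Shader Compilation")]
    static void Toggle()
    {
        ShaderUtil.allowAsyncCompilation = !ShaderUtil.allowAsyncCompilation;
        Debug.Log("Async shader compilation: " + ShaderUtil.allowAsyncCompilation);
    }
}
#endif
```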
High-influence skin weights
You can now have 32 bone influences per vertex, and up to 255 with API for skinned mesh renderers. This keeps skin weights’ fidelity consistent at runtime with source content in external programs. This is especially useful for bone-based face-rigging where areas of high detail, such as mouth corners and eyes, require more than 4 bone influences. This will also raise the quality of rigs that use smooth-skinning decomposition so that you can achieve a smoother result with fewer bones.
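As a sketch of the runtime side, assuming the `QualitySettings.skinWeights` property and `SkinWeights` enum that accompany this feature (the component name is our own):

```csharp
using UnityEngine;

public class SkinWeightSetup : MonoBehaviour
{
    void Start()
    {
        // Project-wide cap on bone influences used for skinning at runtime.
        // SkinWeights.Unlimited keeps every influence authored in the DCC tool
        // (up to 255 per vertex); FourBones reproduces the pre-2019.1 behavior.
        QualitySettings.skinWeights = SkinWeights.Unlimited;
    }
}
```

Lower settings trade skinning fidelity for performance, which is why capping to four (or fewer) bones remains useful on low-end mobile devices.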
The SketchUp Importer now features an updated UI. We also added support for importing all camera types available in SketchUp (orthographic, perspective, and two-point). Finally, we improved the performance of the Model Importer’s Material tab UI.
Timeline Signals offers an easy way for Timeline to interact with objects in a scene. Using a signal emitter and a signal asset, you can trigger a signal receiver that will define a set of preconfigured reactions (Unity Events) to your Timeline. Signal emitters are used to trigger a change in state of the scene when the timeline passes a given point in time.
We chose the word “signals” instead of “events” because “signals” supports the idea of “broadcast”; it also avoids confusion with the existing Unity Events and Animation Events.
We have also introduced “markers” for users who are interested in creating custom keyframes with specific behaviors. You can use markers to add and manipulate objects on a timeline the same way as clips: select, copy-paste, edit modes, etc. Markers can also have specializations, just like clips do (animation clips, activation clips, control clips, etc.). See this forum post to learn how to create your own custom markers.
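A minimal sketch of a custom marker and a script that reacts to it, using the `Marker`, `INotification`, and `INotificationReceiver` types from `UnityEngine.Timeline`/`UnityEngine.Playables` (the class names and log message are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

// A custom marker: place it on a timeline track like a clip; it is sent to
// notification receivers when the playhead passes its position.
public class JumpMarker : Marker, INotification
{
    public PropertyName id { get { return new PropertyName("JumpMarker"); } }
}

// Any MonoBehaviour on the bound GameObject can receive the notification.
public class JumpReceiver : MonoBehaviour, INotificationReceiver
{
    public void OnNotify(Playable origin, INotification notification, object context)
    {
        if (notification is JumpMarker)
            Debug.Log("JumpMarker reached at time " + origin.GetTime());
    }
}
```

For the common case of triggering UnityEvents, the built-in SignalEmitter/SignalReceiver pair covers this without any code; custom markers are for behaviors beyond what signals provide.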
We have added a number of audio improvements to Timeline. For example, you can now control/set keys for an individual audio track while working on a Timeline edit. You now also have simple volume and pan controls on a track, as well as Volume and Pan animation per track.
This release introduces support for the H.265 video codec. This allows you to play H.265 movies and transcode other supported video formats into H.265 codec format.
H.265, or High Efficiency Video Coding (HEVC), is a video compression standard that is the successor to Advanced Video Coding (H.264). In comparison to H.264, H.265 provides better quality at the same bit rate.
SRP is now fully supported in our Video Player and Video Recorder. This enables the Video Player to play back videos when rendering via HDRP or LWRP. As well, the Video Recorder was updated to handle input from cameras when you use SRP. In the process of adding support to SRP in the Video Player, we also fixed some major bugs impacting the camera render modes (far/near plane) and also fixed 360 video (stereo) support.
You can use Unity’s SceneVis controls to quickly hide and show objects in the Scene view, without changing the objects’ in-game visibility. As a scene becomes more detailed, it often helps to temporarily hide specific objects; the new Isolate mode lets you view and edit without obstructions. SceneVis enables this functionality via hierarchy tools and keyboard shortcuts, plus a toolbar toggle that lets you quickly enable or disable the effects.
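SceneVis can also be driven from editor scripts. This sketch assumes the `UnityEditor.SceneVisibilityManager` API introduced alongside the feature; the menu paths are made up for illustration.

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class SceneVisExample
{
    // Hides the current selection (and its children) in the Scene view only;
    // in-game visibility is untouched.
    [MenuItem("Tools/Hide Selection In Scene View")]
    static void HideSelection()
    {
        foreach (GameObject go in Selection.gameObjects)
            SceneVisibilityManager.instance.Hide(go, includeDescendants: true);
    }

    // Restores Scene-view visibility for everything.
    [MenuItem("Tools/Show Everything In Scene View")]
    static void ShowAll()
    {
        SceneVisibilityManager.instance.ShowAll();
    }
}
#endif
```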
The Sprite Shape package features many improvements. For example, you can make the Sprite Shape’s final 2D collider closer in shape to the sprite’s visual representation, allowing you to add polygon and edge colliders that more closely fit the Sprite Shape Renderer.
We also added non-mirrored continuous tangents for shape control points, permitting you to create curved shapes to help you more precisely achieve your desired look.
When you use the new 2D Animation package (v2.1, accessible via the Package Manager), you’ll notice a performance boost when you’re skinning a sprite in the Editor and also at runtime, because it’s built with the C# Job System and the Burst Compiler. This update also improves performance when you have several characters on the screen that are animated using this tool.
Note: This version isn’t backwards-compatible with 2018.3. For projects that use Unity 2018.3, please continue using the 2D Animation package v2.0.
All 2D physics queries now allow you to provide a results buffer as a .NET List&lt;T&gt;, whereas previously you needed an array. This has the same advantage as the array, in that no memory is allocated if the capacity of the list is large enough to contain the query results. It provides the added advantage that the list capacity will automatically increase (with the associated memory allocation) to ensure it can contain all query results, while only allocating the required memory. If you reuse the same list, allocations are kept to a minimum and eventually stop occurring altogether.
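A minimal sketch using the List&lt;T&gt; overload of a 2D raycast query (the component and field names are our own):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class QueryExample : MonoBehaviour
{
    // Reused across frames: the list only grows until it fits the largest
    // result set seen so far, after which no further allocations occur.
    readonly List<RaycastHit2D> hits = new List<RaycastHit2D>();
    ContactFilter2D filter = new ContactFilter2D().NoFilter();

    void FixedUpdate()
    {
        // The overload taking a List<RaycastHit2D> resizes it as needed and
        // returns the number of hits written.
        int count = Physics2D.Raycast(transform.position, Vector2.right, filter, hits, 10f);
        for (int i = 0; i < count; i++)
            Debug.Log("Hit: " + hits[i].collider.name);
    }
}
```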
Unity releases comprise a TECH stream and a Long-Term Support (LTS) stream.
The TECH stream, which includes all the latest features, has three major releases a year. This year’s TECH streams are versions 2019.1, 2019.2, and 2019.3, each of which adds new features and functionality.
2018.3 was the last in the 2018.x TECH stream cycle, and with this release, it becomes part of the LTS stream with a new version number (2018.4). This marks the point at which the two-year support schedule begins.
Unlike the TECH stream, the LTS stream will not have any new features, API changes or improvements. Instead, it will address crashes, regressions, and issues that affect the wider community, console SDK/XDKs, or any major issues that would prevent a large number of developers from shipping their games.
The TECH stream receives a weekly release with bug fixes, while the LTS stream receives a biweekly release with bug fixes. The LTS stream is for developers who wish to continue developing and shipping their games/content on a stable version for an extended period. The TECH stream, on the other hand, is for developers who want to use the latest features and have access to the latest Unity capabilities.
Want early access to more new features? Get the 2019.2 alpha!
First of all, a special thanks to our beta community for using all the new features and providing great feedback, which helped us finalize and ship 2019.1.
If you are not already a beta tester, consider becoming one. You’ll get early access to the latest new features and you can ensure that your project will be compatible with the new version. You can also help to influence the future of Unity by sharing your feedback with our R&D teams in our forums or in person. Additionally, you’ll have the opportunity to get invited to Unity events and to win exclusive swag.
Start by downloading our latest alpha or beta and have a look at this guide for how to be an effective beta tester. If you would like to receive occasional emails with beta news, updates, and tips & tricks, please sign up here.
The full release includes almost 300 new features and improvements – too many to mention here. You can always find the full list of new features, improvements and fixes in the release notes.