
This is a heads-up about graphics-related things we plan to “drop” from future versions of Unity, a.k.a. Dropping of Things for Make Benefit Glorious Future of Rendering.

Sometimes when we try to make things better (better looking, faster, more flexible, etc.), we find that some old corner case or old hardware limitations get in the way. In general, we try to keep as much as possible working between versions of Unity, but sometimes the potential benefit of removing legacy functionality is just too great and said functionality hopefully affects very few people (if any).

So without further ado, here’s a tentative list of graphics-related “it will stop working” items. Note that all of these are “planned” and not actually done yet, so if you really, really need them to keep on working forever, let us know!

Shaders: Drop support for “precompiled” shader assets

A “precompiled” shader is one that effectively comes “without source code”. Instead of readable HLSL code, these shaders contain already compiled shader assembly, microcode, or translated shader code for several platforms.

One problem with “precompiled” shaders (if you got them from somewhere) is that they will not work on platforms that might appear in the future. Say you’ve got a shader that was precompiled for DX9, OpenGL and OpenGL ES 2.0. This shader will not work on consoles, Metal, DX11 and so on! Hence using precompiled shaders is typically a bad practice for this reason alone.
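For contrast, a regular shader asset looks something like this (a minimal made-up example): readable source that Unity can recompile for platforms that don’t even exist yet.

    // A regular shader asset: readable HLSL/Cg source that Unity can
    // recompile for any current or future platform. A "precompiled" asset
    // contains only frozen per-platform shader blobs instead of this.
    Shader "Example/SolidTint" {
        Properties {
            _Color ("Tint", Color) = (1,1,1,1)
        }
        SubShader {
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                fixed4 _Color;

                struct v2f { float4 pos : SV_POSITION; };

                v2f vert (appdata_base v) {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target {
                    return _Color;
                }
                ENDCG
            }
        }
    }

A precompiled asset has the same ShaderLab wrapper, but instead of the CGPROGRAM source it carries a dump of compiled assembly/microcode for whatever platforms were selected at the time.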

Another reason we want to remove support for them is that we want the shader serialization format to be much more efficient in disk storage, load times and runtime memory usage. The shader format we’ve had so far was, shall we say, a fairly inefficient text-based format that resulted in long shader load times and high memory usage. In our current experiments, we’re seeing big reductions in both (megabytes to dozens of megabytes saved, depending on shader variant complexity, etc.) by changing to a more efficient shader data format. However, that makes these “precompiled with an old version of Unity” shaders not work anymore. We think that’s a fair tradeoff.

Advantages:

  • Shaders take up less space in your game data files (multiple times smaller).
  • Shaders load much faster, and especially the “hiccups on the main thread” while loading them asynchronously are much smaller.
  • Shaders take up a lot less memory at runtime.
  • “Show compiled code” in shader inspector will display actual shader disassembly on DX11, instead of a not-very-usable sea of vowels.

Disadvantages:

  • Precompiling your shaders (“show compiled code” from shader inspector) and then later on using that code directly will stop working.

Affects: People who precompile shaders, and people who got precompiled shaders from someone else.

When: Unity 5.3 (2015 December)

Hardware: Drop support for DirectX 9 Shader Model 2.0 GPUs

DX9 SM2.0 GPUs are fairly old and we’d like to drop support for them! This would mean that these GPUs would stop working in your Unity games: NVIDIA before 2004 (pre-GeForce 6000), AMD before 2005 (pre-Radeon X1000) and Intel before 2006 (pre-GMA X3000/965). In short, GPUs older than 10 years or so would stop working. Looking at the data, it seems that only the Intel GMA 950 (aka 82945) GPU is still sometimes found in the wild these days — so that one would stop working.

Note that we’re not dropping support for DirectX 9 as a whole! Often that is still the only practical option on Windows XP, which just isn’t going away… DirectX 9 rendering support (on Shader Model 3.0 or later GPUs) will continue to be in Unity for quite a while.

Advantages of doing this:

  • Less hassle for people writing shaders. Currently, all newly created shaders in Unity are compiled to the “lowest common denominator” by default (shader model 2.0), and if you want any of the more advanced features (vertex textures, dynamic branching, derivatives, explicit LOD sampling etc.), you need to add things like “#pragma target 3.0” (see the sketch after this list). If we dropped SM2.0 support, the minimum spec would go up and you wouldn’t have to worry about it as much.
  • Way, way less hassle for us internally at Unity. You don’t want to know, for example, how much time we’ve spent on trying to cram Unity 5 physically based shaders into DX9 SM2.0 fallbacks. We could be doing actually useful stuff in that time!
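For illustration, here’s the kind of boilerplate the first point is about: a minimal made-up fragment shader that uses derivative instructions and therefore has to opt out of the SM2.0 default today.

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    // Needed today because new shaders default to shader model 2.0;
    // with SM2.0 gone, a higher default would make this line unnecessary.
    #pragma target 3.0
    #include "UnityCG.cginc"

    struct v2f {
        float4 pos : SV_POSITION;
        float2 uv  : TEXCOORD0;
    };

    v2f vert (appdata_base v) {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.uv = v.texcoord.xy;
        return o;
    }

    half4 frag (v2f i) : SV_Target {
        // ddx/ddy are derivative instructions, not available in SM2.0.
        half edge = abs(ddx(i.uv.x)) + abs(ddy(i.uv.y));
        return half4(edge.xxx, 1);
    }
    ENDCG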

Disadvantages:

  • Unity games would no longer work on Intel GMA 950 / 82945 GPU.

Affects: Windows standalone player developers.

When: Unity 5.4 (2016 March).

Hardware: Drop support for Windows Store Apps DX11 feature level 9.1 GPUs

Almost all Windows Store Apps devices are at least DX11 feature level 9.3 capable (all Windows Phone devices are). But there were one or two devices in the past that only supported feature level 9.1, so that dragged down the minimum spec that we had to support.

Advantages of doing this:

  • All WSA/WP8 shaders will be compiled to feature level 9.3 instead of 9.1, gaining functionality that wasn’t working previously (multiple render targets, derivative instructions in pixel shaders etc.); see the sketch after this list.
  • We get to remove quite some code that had to deal with 9.1 limitations before.
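As a sketch of what the first point enables, here’s a made-up pixel shader writing to two render targets at once; feature level 9.1 allows only a single render target, while 9.3 allows several.

    // Hypothetical MRT pixel shader; needs feature level 9.3 or higher.
    struct v2f {
        float4 pos : SV_POSITION;
        float2 uv  : TEXCOORD0;
    };

    struct FragOutput {
        half4 color : SV_Target0; // main color buffer
        half4 aux   : SV_Target1; // e.g. auxiliary per-pixel data
    };

    FragOutput frag (v2f i) {
        FragOutput o;
        o.color = half4(i.uv, 0, 1);
        o.aux   = half4(1 - i.uv, 0, 1);
        return o;
    }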

Disadvantages:

  • Your Windows Store Apps would no longer support 9.1 devices (in practice this pretty much means “Surface RT tablet”). Note that Windows Phone is not affected, since all phones have at least 9.3 support.

Affects: Windows Store Apps developers.

When: Unity 5.4 (2016 March).

Shaders: Drop support for “native shadow maps” on Android OpenGL ES 2.0

Shadow mapping can be done using either “native GPU support for it” (sampling the shadowmap directly returns the “shadow value”, possibly also using hardware PCF filtering), or “manually” (sample depth from the shadowmap, compare with surface depth to determine whether in or out of shadow).

The first form is usually preferred, especially since many GPUs can provide 2×2 PCF filtering “for free”. On the majority of platforms, we know ahead of time which of the shadow modes they support; however, Android OpenGL ES 2.0 is the odd one out, since some devices support “native shadow maps” (via the EXT_shadow_samplers extension) while others do not. This meant that for any shadow-related shader on Android ES 2.0, we had to compile and ship two variants of the shader to cover both cases.
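Roughly, the two paths look like this; a simplified sketch in Unity-style Cg/HLSL, not our actual internal shadow code, with illustrative keyword and texture names (sampler declarations omitted for brevity).

    half ComputeShadow (float4 shadowCoord)
    {
    #if defined(SHADOWS_NATIVE)
        // Native path: the GPU performs the depth comparison itself,
        // often with 2x2 PCF filtering thrown in for free.
        return UNITY_SAMPLE_SHADOW(_ShadowMapTexture, shadowCoord.xyz);
    #else
        // Manual path: read the stored depth and compare it against
        // this fragment's depth in shadowmap space (bias omitted here).
        float stored = SAMPLE_DEPTH_TEXTURE(_ShadowMapTexture, shadowCoord.xy);
        return (shadowCoord.z <= stored) ? 1.0 : 0.0;
    #endif
    }

On Android ES 2.0 we can’t pick one path up front, hence the two compiled variants.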

However, we looked at the data, and it seems that support for EXT_shadow_samplers on Android is extremely rare (1–2% of all devices). So we think it’s worth it to remove support for that; we’d always treat Android ES 2.0 as a “manual depth comparison for shadows” platform.

Advantages of doing this:

  • Fewer shader variants to compile, ship and load at runtime on Android ES 2.0.

Disadvantages:

  • About 1% of Android ES 2.0 devices would no longer do hardware shadow PCF sampling, but would instead do a slightly slower depth comparison in the shader. Note, however, that all these devices can use OpenGL ES 3.0, which has built-in PCF, so it’s better to include ES 3.0 support for them instead!

Affects: Android developers targeting OpenGL ES 2.0.

When: Unity 5.4 (2016 March).

53 replies on “Plans for Graphics Features Deprecation”

Delighted to hear about precompiled shader deprecation, for the side reason that it’s impossible to tell whether some Asset Store shaders are actually precompiled until you’ve purchased them (you can’t tell by looking at the asset’s file list the way you can look for DLLs to check for precompiled binaries). Looking forward to these assets no longer working (although some of them will doubtless move to intentional obfuscation of the HLSL).

When I play the ifscl game (created with Unity), it always stops responding. The developer said “your graphics card is too weak”, but I don’t know what a graphics card is. So what is a graphics card, and how do I fix that problem?

I like the current direction of Unity. I always had the feeling that Unity cared too much about the low end and less important techniques. With Unity 5, I am more confident about the future of high-definition desktop development with Unity.

Does that mean dynamic shadows will have different settings on OpenGL ES 2.0 devices? How will it actually affect performance?

While I support this decision to push Unity’s graphics toward the future,
what I’m curious about is how it will affect mobile developers.
Take Gear VR, for example: will it still run Unity games?

[…] Plans for Graphics Features Deprecation – with version 5.3, the Unity engine is retiring a few features. […]

The DX9 drop part scared me. Then I noticed it will still be available. On my machine, DX11 is very slow in Unity. I can run DX11 games great, but Unity runs 2x slower. So I have to use DX9.

I love cleanup like this; old devices sometimes slow our progress without clear advantage. Is there a graph or a chart that shows which GPUs are most used in the wild these days?

Cheers.

Thanks Aras – cleaning out the old before adding the new (and discussing it in advance) is a good thing!

One maybe off-topic question – Unity 5 Standard Shader – will there finally be a way, directly in the editor, to choose the UV channel individually per texture (for example, UV0 for diffuse and UV1 for AO)? A feature needed and promised long ago…

The same question applies to the option to change the tiling settings individually per texture (as was possible in older versions)…

These limitations are very unfortunate and seem so easy to fix… at least to a naive amateur :-)

How about deprecating some very persistent bugs as well? Like the spotlight shadows that stop drawing WAY too soon when you get closer to an object. Related to graphics.
It’s a six-year-old bug that makes spotlights virtually useless if you want them to cast shadows.

Good idea to ask about removing functions before you actually do it unannounced.

Lightmapping.LockAtlas (since Unity 3) and the LightProbes.asset file (Unity 4) have been removed in Unity 5 without warning – killing useful lightmapping techniques (changing lightmaps/light probes at runtime) on mobile devices. Any chance of getting them back? See this

I love my Surface RT. I guess it’s time for an upgrade.

Will the Surface RT 2 (Tegra 4) continue to be supported?

Hey, is “Windows Store Apps DX11 feature level 9.1” still a baseline requirement for WSA applications?

Or has this changed? If not, then how do we target the 9.1 level to clear the WACK?

From what I read, you have to keep an old version of Unity around to do that, but maybe I misinterpreted what the blog article’s author said.

I’m actually a person who still uses an Intel GMA 950, but am patiently awaiting a Skylake Mac Mini. Until then, I am going to buy a used 2nd-gen i5, because I’ve found the GMA 950-based Mac Mini even struggles to browse eBay. That’s the other interesting thing I found out: my computer struggles more with internet browsers than with Unity. I think it has something to do with multiple instances of Flash on an ad-heavy web site like eBay, but I’m not sure.

Personally, I plan on publishing only for DirectX 12 and Vulkan-capable hardware when it becomes available, and on targeting machines that support Android 4.4, OpenGL 3.0, and DirectX 11. Although I know I could publish to lower specifications, I also know that creates support scenarios I don’t have the hardware for.

Really glad to see that you guys are starting to push through the old Unity engine innards and clean things up. I tremendously support this initiative, and I hope that you guys will work on some of the more deeply ingrained weaknesses/quirks of the engine, too.

Keep going.
If something goes wrong, I can export my project as a unitypackage and import it back into 5.3 to wait for a fix.

Yay to all of them! Great stuff Aras. Especially the first one, coming in 5.3: the new shader format would _greatly_ help out some projects I’m involved in. Those hiccups have been maddening! Thanks for the blog :)

A graph-based shader tool is as good as the guy who wrote it, plain and simple… There have been many attempts, but not many graph-based shader tools exist or are being used effectively. I can agree that some graph-based tools are clunky, generate ugly and badly performing code, and make it harder for the developer to take full control of the output. This doesn’t have to be the case!

I’ve been developing a graph-based tool for 8 years now called ShaderWorks, which was pulled in by Activision a long time ago. I’ve been beating my head against a wall from day one. I knew this type of tool was very, very powerful, but I was on the fence about how good a code generator and shader prototyping tool it could be. Only with version 4.0 have I jumped completely onto the “graph-based” side of the fence. The main things holding me back were that the code being generated was inherently horrid to look at, and the template code which defined the graph/block/node and shader code was just too annoying to deal with. Even though not many have this figured out, I can honestly say it is possible to fix.

In most generators, there tends to be a ton of variable indirection and redundancy which makes the generated code ugly (if not done right), but regardless, these things are simple for the compiler to optimize away. Another concern was whether to generate nested code (all in one shader function) from the graph blocks or to generate per-block functions. Either way seems to give comparable performance, but I chose function-based generation, as it keeps things very readable and clean, and it lets the function inputs/outputs rename the variables so everything ties in better with the templates.

A well-done code generator can actually do a perfect job of packing/unpacking interpolator registers, making that part SO much easier to manage and change. So with those things covered, what is left that could be slower compared with hand-written shaders? Not much, since the code in the graph templates is written by us, the developers, the same people writing the hand-coded shaders, so any per-shader optimization can easily be done in the template, or afterwards manually in your very clean shader output.

Actually, that will break some of our shaders. We’re currently using a hack to get custom depth writes working for our compiled surface shaders, since Unity doesn’t make it possible to write custom depth in a surface shader. So we changed the compiled code, and it works for us. (We even made a tool that automatically adds a depth property to any compiled surface shader.) But that will stop working. :/

It wouldn’t be a problem if Unity started to support custom depth write in surface shaders. Then we wouldn’t have had to hack it at all to begin with.
