This is a heads-up of graphics related things we plan to “drop” from future versions of Unity, a.k.a. Dropping of Things for Make Benefit Glorious Future of Rendering.

Sometimes when we try to make things better (better looking, faster, more flexible, etc.), we find that some old corner case or old hardware limitations get in the way. In general, we try to keep as much as possible working between versions of Unity, but sometimes the potential benefit of removing legacy functionality is just too great and said functionality hopefully affects very few people (if any).

So without further ado, here’s a tentative list of graphics related “it will stop working” items. Note that all these are “planned” and not actually done yet, so if you really, really need them to keep on working forever, let us know!

Shaders: Drop support for “precompiled” shader assets

A “precompiled” shader is one that effectively comes “without source code”. Instead of readable HLSL code, these shaders contain already-compiled shader assembly, microcode, or translated shader code for several platforms.

One problem with “precompiled” shaders (if you got them from somewhere) is that they will not work on platforms that might appear in the future. Say you’ve got a shader that was precompiled for DX9, OpenGL and OpenGL ES 2.0. This shader will not work on consoles, Metal, DX11 and so on! Hence using precompiled shaders is typically a bad practice for this reason alone.

Another reason we want to remove support for them is that we want the shader serialization format to be much more efficient in disk storage, load times and runtime memory usage. The shader format we have had so far was, shall we say, a fairly inefficient text-based format that resulted in long shader load times and high memory usage. In our current experiments we’re seeing big reductions in both (megabytes to dozens of megabytes saved, depending on shader variant complexity and so on) by changing to a more efficient shader data format. However, that makes shaders “precompiled with an old version of Unity” not work anymore. We think that’s a fair tradeoff.
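To make the distinction concrete, here is a schematic sketch (the serialized syntax below is illustrative only, not the exact format of any particular Unity version):

```shaderlab
// A regular shader asset ships readable source that Unity can recompile
// for any current or future platform:
Shader "Example/WithSource" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // ... HLSL/Cg source code lives here ...
            ENDCG
        }
    }
}

// A "precompiled" shader instead carries per-platform compiled blobs,
// roughly along these lines (names and contents purely illustrative):
//
//   SubProgram "d3d9" { vs_2_0 /* compiled microcode */ }
//   SubProgram "gles" { /* translated GLSL text */ }
//
// Any platform missing from that list (Metal, consoles, newer APIs)
// simply has nothing it can run.
```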

Advantages:

  • Shaders take up less space in your game data files (multiple times smaller).
  • Shaders load much faster, and especially the “hiccups on the main thread” while loading them asynchronously are much smaller.
  • Shaders take up a lot less memory at runtime.
  • “Show compiled code” in shader inspector will display actual shader disassembly on DX11, instead of a not-very-usable sea of vowels.

Disadvantages:

  • Precompiling your shaders (“show compiled code” from shader inspector) and then later on using that code directly will stop working.

Affects: People who precompile shaders, and people who got precompiled shaders from someone else.

When: Unity 5.3 (2015 December)

Hardware: Drop support for DirectX 9 Shader Model 2.0 GPUs

DX9 SM2.0 GPUs are fairly old and we’d like to drop support for them! This would mean that these GPUs would stop working in your Unity games: NVIDIA before 2004 (pre-GeForce 6000), AMD before 2005 (pre-Radeon X1000) and Intel before 2006 (pre-GMA X3000/965). In short, GPUs older than 10 years or so would stop working. Looking at the data, it seems that only the Intel GMA 950 (a.k.a. 82945) GPU is still sometimes found in the wild these days – so that one would stop working.

Note that we’re not dropping support for DirectX 9 as a whole! Often that is still the only practical option on Windows XP, which just isn’t going away… DirectX 9 rendering support (on Shader Model 3.0 or later GPUs) will continue to be in Unity for quite a while.

Advantages of doing this:

  • Less hassle for people writing shaders. Currently, all newly created shaders in Unity are compiled to the “lowest common denominator” by default (Shader Model 2.0), and if you want any of the more advanced features (vertex textures, dynamic branching, derivatives, explicit LOD sampling etc.), you need to add things like “#pragma target 3.0”. If we dropped SM2.0 support, the minimum spec would go up and you wouldn’t have to worry about it as much.
  • Way, way less hassle for us internally at Unity. You don’t want to know, for example, how much time we’ve spent on trying to cram Unity 5 physically based shaders into DX9 SM2.0 fallbacks. We could be doing actually useful stuff in that time!
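For context, the first point above refers to the explicit target directive that the SM2.0 default currently forces on shader authors (illustrative snippet):

```shaderlab
Shader "Example/NeedsSM3" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // Needed today to use derivatives, explicit LOD sampling,
            // vertex textures etc.; with SM2.0 dropped, a higher target
            // would become the default.
            #pragma target 3.0
            // ... shader code using ddx()/ddy(), tex2Dlod() and friends ...
            ENDCG
        }
    }
}
```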

Disadvantages:

  • Unity games would no longer work on Intel GMA 950 / 82945 GPU.

Affects: Windows standalone player developers.

When: Unity 5.4 (2016 March).

Hardware: Drop support for Windows Store Apps DX11 feature level 9.1 GPUs

Almost all Windows Store Apps devices are at least DX11 feature level 9.3 capable (all Windows Phone devices are). But there were one or two devices in the past that only supported feature level 9.1, so that dragged down the minimum spec that we had to support.

Advantages of doing this:

  • All WSA/WP8 shaders will be compiled to feature level 9.3, instead of 9.1, gaining some more functionality that wasn’t working previously (multiple render targets, derivative instructions in pixel shaders etc.).
  • We get to remove quite some code that had to deal with 9.1 limitations before.

Disadvantages:

  • Your Windows Store Apps would no longer support 9.1 devices (in practice this pretty much means “Surface RT tablet”). Note that Windows Phone is not affected, since all phones have at least 9.3 support.

Affects: Windows Store Apps developers.

When: Unity 5.4 (2016 March).

Shaders: Drop support for “native shadow maps” on Android OpenGL ES 2.0

Shadow mapping can be done using either “native GPU support for it” (sampling the shadowmap directly returns the “shadow value”, possibly also using hardware PCF filtering), or “manually” (sample depth from the shadowmap, compare with surface depth to determine whether in or out of shadow).

The first form is usually preferred, especially since many GPUs can provide 2×2 PCF filtering “for free”. On the majority of platforms we know ahead of time which shadow mode they support; Android OpenGL ES 2.0, however, is the odd one out, since some devices support “native shadow maps” (via the EXT_shadow_samplers extension) while others do not. This means that for any shadow-related shader on Android ES 2.0 we have to compile and ship two variants of the shader to cover both cases.
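A rough sketch of the two paths, in simplified HLSL-style code (Unity’s real shadow sampling macros differ in detail):

```hlsl
// "Native" path: a comparison sampler returns the shadow term directly,
// often with free 2x2 PCF filtering. On GL this is the EXT_shadow_samplers /
// shadow2D style of sampling (shown as a comment for contrast):
//   float shadow = shadow2D(_ShadowMap, shadowCoord.xyz).r;

// "Manual" path: sample the stored depth yourself and compare it against
// the surface depth in the shader.
float ManualShadow(sampler2D shadowMap, float3 shadowCoord)
{
    // Depth the light "saw" at this texel when the shadow map was rendered.
    float storedDepth = tex2D(shadowMap, shadowCoord.xy).r;
    // Surface farther from the light than the stored depth => in shadow.
    return (shadowCoord.z > storedDepth) ? 0.0 : 1.0;
}
```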

However, we looked at the data, and it seems that support for EXT_shadow_samplers on Android is extremely rare (1–2% of all devices). So we think it’s worth it to just remove support for it; we’d always treat Android ES 2.0 as a “manual depth comparison for shadows” platform.
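In variant terms, the change removes one doubling on that platform. A hypothetical illustration (the keyword names here are invented for the example, not Unity’s internal ones):

```shaderlab
// Today, a shadow-receiving shader effectively ships both paths for
// Android ES 2.0, because device support can't be known at build time:
#pragma multi_compile SHADOWS_NATIVE_EXAMPLE SHADOWS_MANUAL_EXAMPLE

// After the change, only the manual-compare variant would be built for
// that platform, halving the shadow-related variant count there.
```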

Advantages of doing this:

  • Less shader variants to compile, ship and load at runtime on Android ES 2.0.

Disadvantages:

  • About 1% of Android ES 2.0 devices would no longer do hardware shadow PCF sampling, but instead do a slightly slower depth comparison in the shader. Note, however, that all these devices can use OpenGL ES 3.0 which has built-in PCF, so it’s better to include support for that!

Affects: Android developers targeting OpenGL ES 2.0.

When: Unity 5.4 (2016 March).

53 Comments

  1. Delighted to hear about precompiled shader deprecation, for the side-reason that it’s impossible to tell whether some AssetStore shaders are actually precompiled shaders until you’ve purchased them (you can’t tell by looking at the asset’s file list the same way you can look for DLLs to check for precompiled binaries). Looking forward to these assets no longer working (although some of them will doubtless move to intentional obfuscation of the HLSL).

  2. When I play the ifscl game (created with Unity), it always stops responding. The developer said “your graphics card is too weak”, but I don’t know what a graphics card is. So what is a graphics card, and how do I fix this problem?

  3. I like the current direction of Unity. I always had the feeling that Unity cared too much about lower-end hardware and less important techniques. With Unity 5 I am looking more confidently at the future of high-definition desktop development with Unity.

  4. Native shadow maps? So what are we using for shadows in OpenGL ES 2.0, and won’t it become much slower?

    1. Aras Pranckevičius

      September 3, 2015 at 7:13 pm

      Dynamic shadows might become a tiny bit worse on about 1% of Android devices, if you’re using OpenGL ES 2.0. OpenGL ES 3.0 is not affected, and the rest of 99% of devices (that don’t support “native shadowmaps” on OpenGL ES 2.0 anyway) are not affected.

      The upside is smaller shaders, shorter game build times, slightly lower memory consumption and slightly improved load times.

  5. Does that mean dynamic shadows will have different settings on OpenGL ES 2.0 devices? How will it actually affect performance?

  6. While I support this decision to push Unity graphics toward the future,
    what I’m curious about is how it’s going to affect mobile developers.
    Take GearVR, for example: will it still run Unity games?

    1. Aras Pranckevičius

      August 31, 2015 at 6:05 am

      Yes, GearVR will continue working just fine, as well as everything else Android.

      1. Great to hear that, can’t wait to see what you guys are going to bring into Unity in the future.

  7. Since the support is being dropped starting with 5.3, will the GMA 950 still run Unity 5.2?

    1. Aras Pranckevičius

      August 30, 2015 at 1:17 pm

      The plan is to _maybe_ drop Intel GMA 950/945 starting with Unity 5.4. So of course Unity 5.2 and 5.3 will still support it.

  8. The DX9 drop part scared me, but then I noticed it will still be available. On my machine DX11 is very slow in Unity. I can run DX11 games great, but Unity runs 2x slower, so I have to use DX9.

  9. Does this cost money to download?

  10. I love cleanup like this; old devices sometimes slow our progress without clear advantage. Is there a graph or a chart that shows which GPUs are mostly used in the wild these days?

    Cheers.

  11. I love seeing clean-up like this. Thanks for the well written explanation, Aras.

  12. Thanks Aras – cleaning the old before adding the new (and discuss it in advance) is good thing!

    One maybe off-topic question about the Unity 5 Standard Shader: will there finally be a way, directly in the editor, to choose the UV channel individually per texture (for example UV0 for diffuse and UV1 for AO)? A feature needed and promised long ago…?

    The same question applies to the option to change the tiling settings individually per texture (as was possible in older versions)…

    These limitations are very unfortunate and seem so easy to fix… at least to a naive amateur :-)

  13. How about you deprecate some very persistent bugs as well? Like the spotlight shadows that stop drawing WAY too soon when you get closer to an object. Related to graphics.
    It’s a 6-year-old bug that makes spotlights virtually useless if you want them to cast shadows.

    1. Aras Pranckevičius

      August 28, 2015 at 11:43 am

      Do you have a case number?

      1. Issue ID 614619

        1. Aras Pranckevičius

          September 1, 2015 at 1:05 pm

          That bug is about some GrabPass issues? Whereas the original comment here is something about spotlights…

      2. The OP comment was about old rendering bugs, like the spotlight problem (which I have never encountered). That’s why I posted one of these old rendering bugs – the GrabPass problem.

        1. Aras Pranckevičius

          September 3, 2015 at 7:15 pm

          Fair enough. We’re actively combing through a lot of old bugs in the past few months. Will get to that one too, hopefully soon :)

  14. Good idea to ask about removing functions before you actually do it unannounced.

    Lightmapping.LockAtlas (since Unity 3) and the LightProbes.asset file (Unity 4) have been removed in Unity 5 without warning – killing useful lightmapping techniques (changing lightmaps/light probes at runtime) for mobile devices. Any chance of getting them back? See this

  15. I love my Surface RT. I guess it’s time for an upgrade.

    Will the Surface RT 2 (Tegra 4) continue to be supported?

    1. Aras Pranckevičius

      August 28, 2015 at 8:18 am

      No, Tegra4 is also a feature level 9.1 device (not quite sure why honestly, I think it should be capable of 9.3 feature level just fine, just perhaps NVIDIA never bothered to implement that in the drivers?)

      1. Dropping support for Surface 1 and Surface 2 seems like a big step. I can understand Surface 1, but I’m sure tons of people out there still have a Surface 2. This seems a bit aggressive to be honest. Surface 2 is a fairly new device.

        1. Aras Pranckevičius

          August 28, 2015 at 10:08 am

          That’s still only “plan” that we might revert, but it seems that MS itself has effectively “dropped” it, i.e. it’s not getting Windows 10 etc.?

        2. The Surface Pro has feature level 11.0, the Surface Pro 2: 11.1. It seems that the standard versions of the Surface were brutally downgraded. It was a very mean and simply bad move by Microsoft.

          Btw, I knew some engine developers who tried to port their engine from Android to the standard Surface 1. They had to rewrite most of the graphics features, and now I know why.

      2. Ok, thanks for the clarification. Maybe I’ll pick up a Surface 4 or Surface 4 Pro (when they come out).

        I have an nVidia Shield Portable (also Tegra 4 based) and it’s sad that it’s not going to work with Unity in the near future.

        1. Are you sure about that? Sure, the nVidia Shield Portable has a Tegra 4, but it’s an Android device, so it wouldn’t be running Windows Store apps anyway.

        2. Aras Pranckevičius

          August 30, 2015 at 6:55 pm

          That’s Android, and it’s not affected.

  16. Tristan Bellman-Greenwood

    August 28, 2015 at 3:49 am

    Sounds good. Can’t wait for December!

  17. Hey, is “Windows Store Apps DX11 feature level 9.1”
    still a baseline requirement for WSA applications?

    Or has this changed? If not, how do we target the 9.1 level to clear the WACK?

    1. From what I read, you have to keep an old version of Unity around to do that, but maybe I misinterpreted what the blog article’s author said.

  18. I’m actually a person who still uses an Intel GMA 950, but I am patiently awaiting a Skylake Mac Mini. Until then, I am going to buy a used 2nd-gen i5, because I’ve found that the GMA 950-based Mac Mini even struggles to browse eBay. That’s the other interesting thing I found out: my computer struggles more with internet browsers than with Unity. I think it has something to do with multiple instances of Flash on an ad-heavy web site like eBay, but I’m not sure.

    Personally, I plan on publishing only for DirectX 12 and Vulkan-capable machines when that becomes available, and when publishing, targeting machines that support Android 4.4, OpenGL 3.0, and DirectX 11. Although I know that I could publish to lower specifications, I also know that doing so creates support scenarios I don’t have the hardware for.

  19. Really glad to see that you guys are starting to push through old Unity Engine innards and clean things up, tremendously support this initiative and I hope that you guys will work on some of the more hard ingrained weaknesses / quirks of the engine, too.

    1. Aras Pranckevičius

      August 27, 2015 at 4:21 pm

      That’s the plan!

      1. Robert Cummings

        August 27, 2015 at 8:04 pm

        Brilliant, so pleased Unity is forward-thinking here. I mean, yeah! hell, you’ll actually have TIME to do cool things we all want :D

      2. Sounds like a good plan. People need to realize that this reduction will also allow Unity to work better on more devices because of… fewer bugs!

  20. Keep going.
    If something goes wrong, I can export my project as a unitypackage and import it back into 5.3 to wait for a fix.

    1. Just make a backup of your project before testing it with new versions of unity.

  21. Yay to all of them! Great stuff, Aras. Especially the first one coming in 5.3, the new shader format, would _greatly_ help out some projects I’m involved in. Those hiccups have been maddening! Thanks for the blog :)

    1. A graph-based shader tool is as good as the guy who wrote it, plain and simple… There have been many attempts, but not many graph-based shader tools exist or are used effectively. I can agree that some graph-based tools are clunky, generate ugly and badly performing code, and make it harder for the developer to take full control of the output. This doesn’t have to be the case!

       I’ve been developing a graph-based tool for 8 years now called ShaderWorks, which was pulled in by Activision a long time ago. I’ve been beating my head against a wall from day one. I knew this type of tool was very, very powerful, but I was on the fence about how good a code generator and shader prototyping tool it could be. Only at version 4.0 did I jump completely onto the “graph-based” side of the fence. The main things holding me back were that the code being generated was inherently horrid to look at, and the template code which defined the graph/block/node and shader code was just too annoying to deal with. Even though not many have this figured out, I can honestly say it is possible to fix.

       In most generators there tends to be a ton of variable indirection and redundancy, which makes the generated code ugly (if not done right), but regardless, these things are simple for the compiler to optimize away. Another concern was whether to generate nested code (all in one shader function) from the graph blocks or to generate per-block functions. Either way seems to give comparable performance, but I chose function-based, as it keeps things very readable and clean, and it lets the function inputs/outputs rename the variables so everything ties in better with the templates.

       A well done code generator can actually do a perfect job at packing/unpacking interpolator registers, making that part SO SO much easier to manage and change. So with those things covered, what is left that could be slower compared with hand-written shaders? Not much, since the code in the graph templates is written by us, the developers, the same people writing the hand-coded shaders, so any per-shader optimization can easily be done in the template, and afterwards manually in your very clean shader output.

  22. Actually, that will break some of our shaders. We’re currently using a hack to get custom depth write working for our compiled surface shaders, since Unity doesn’t make it possible to write custom depth in a surface shader. So we changed the compiled code, and it works for us. (We even made a tool that automatically adds depth property to any compiled surface shader). But that will stop working. :/

    It wouldn’t be a problem if Unity started to support custom depth write in surface shaders. Then we wouldn’t have had to hack it at all to begin with.

    1. Aras Pranckevičius

      August 27, 2015 at 3:15 pm

      How are you changing the shaders? At the bytecode level (e.g. on DX11) directly?!

      If you’re just using the generated *HLSL* code and modifying it (i.e. it’s still the HLSL code that you end up with), then that of course will keep on working.

      What will stop working is if your shader file contains things like “SubProgram d3d11 { vs_4_0 eefiecedelnegmb…” etc.

      1. Oh, right. I misunderstood. Nothing to fear then. :)

        1. (continues) – sorry, lots to say :) Some argue these applications are not flexible enough for many types of techniques? Well, again, if you have a good code generator, the output is as flexible as your material graph / template system makes it. You can go complex with your system, or have whole shaders handled mostly within a few blocks/templates.

           With performance dealt with, a graph-based shader tool can generate MUCH better looking, organized and CONSISTENT shader code than what most coders churn out when given the time, and MUCH more understandable code than any uber-shader, since the option filtering is done in the code-generation pass, not the pre-processor pass (mostly).

           Alright, now on to the template language. Coding HLSL is one thing; having to deal with horrible template code is a whole new can of worms. Yes, this could be the case (it was for earlier versions of our tool), but that doesn’t have to be how things work. A C/C++-like interpreter can make writing template code as easy as writing little bits of C code to manipulate shader string output and handle input/output ports, among many other useful utilities. I can assure you that having a flexible, error-reporting template language is possible and fun to use. It’s like having VC++ built into our tool.

           A graph-based shader tool (with static tweaks) can allow a good shader permutation system to generate and manage far fewer shaders, and makes it easier to add/remove features which in an ifdef-style ubershader would get out of hand VERY quickly. Remember, a graph-based shader tool is inherently a dynamic uber-shader generator. Any shader tool (like FX Composer), because of its framework, easily improves iteration time, and a well developed graph-based tool can even more drastically improve prototyping times AND lead to exciting new shader developments. For example, procedural effects are a bitch to envision, let alone develop blindly; with a graph-based shader tool it’s a snap, and easy to visualize and debug.

           I’m not saying that this type of tool is for every studio, as it did take me 8 years to get to where I am with my tool, but I just wanted to say that such a tool, if done right, would enhance a pipeline in MANY ways and lead to some amazing shader development / prototyping capabilities.

  23. Yes, yes I like.

  24. This all sounds good. I think less hassle writing shader code is alone worth the change! Faster compilation and loading is also very nice!

    But one question: will it fix problems like this one? http://forum.unity3d.com/threads/shaders-from-assetbundle-are-broken-on-macos.342165/