
Ever wonder if it’s possible to make Unity follow a precise frame rate and potentially even follow an external clock source (commonly known as genlock)? This post will discuss how Unity natively maintains frame rates and how user code can be added to tightly control it. This can be vital in an environment like a broadcast studio where synchronization between Unity and other equipment is crucial.

Out of the box, Unity will attempt to run your project as fast as possible. Frames will be rendered as quickly as they can be, generally limited by your display device’s refresh rate (see V Sync Count). The simplest way to start controlling frame rate is to explicitly set QualitySettings.vSyncCount so that rendering occurs at an interval related to the display’s refresh rate (e.g., on a 60Hz display, setting vSyncCount=2 causes Unity to render at 30fps, in sync with the display). This may not give granular enough control, however, as you are limited to submultiples of the display refresh rate.
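As a minimal sketch (the component name is ours, not a Unity API), locking rendering to every second vertical blank looks like this:

```csharp
using UnityEngine;

// Sketch: sync rendering to every second display refresh.
// On a 60 Hz display this yields 30 fps, locked to the monitor.
public class VSyncThrottle : MonoBehaviour
{
    void Awake()
    {
        // 1 = render every vblank, 2 = every second vblank, and so on.
        QualitySettings.vSyncCount = 2;
    }
}
```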

The next simplest solution would be to set QualitySettings.vSyncCount=0 and use Application.targetFrameRate to target a frame rate independent of the display’s refresh rate. With this set, Unity will throttle back its rendering loop to approximately this rate (note that tearing may occur since Unity will no longer be rendering in sync with the display). This is done in a low-cost manner so as not to unnecessarily burn CPU resources. The downside is that this approach may not yield the required precision for every use case.
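A sketch of that second approach, again with a hypothetical component name:

```csharp
using UnityEngine;

// Sketch: ask Unity to throttle itself to ~50 fps, independent of the display.
// Tearing may occur because rendering is no longer synced to the refresh rate.
public class TargetRateThrottle : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.vSyncCount = 0;   // decouple rendering from the display
        Application.targetFrameRate = 50; // Unity idles between frames to hit this
    }
}
```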

Fear not, coroutines can help improve precision. Rather than rely on Unity’s built-in frame rate throttling, you can control it yourself from script code. To do so, let Unity try to run as fast as possible by setting QualitySettings.vSyncCount=0 and Application.targetFrameRate to a very high value. Then, using a WaitForEndOfFrame coroutine, slow it down to precisely the rate you are looking for by refusing to allow the next frame to start rendering until you say so. To do this precisely, we suggest a combination of Thread.Sleep to conservatively delay the Unity rendering loop without eating up CPU resources, followed, for the last few milliseconds, by spinning the CPU while checking for exactly the right time to allow the next frame to start. If you are trying to coordinate Unity’s frame rate with an external clock (genlock), you should break out of this CPU-spinning loop as soon as an external sync signal is received.
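The steps above can be sketched roughly as follows (this is our illustrative take on the technique, not the ForceRenderRate.cs source itself; the class and field names are assumptions):

```csharp
using System.Collections;
using System.Threading;
using UnityEngine;

// Sketch: sleep conservatively, then spin for the last couple of milliseconds
// so the next frame starts at precisely the right time.
public class PreciseFrameRate : MonoBehaviour
{
    public float targetFps = 50f;
    float _lastFrameTime;

    void Awake()
    {
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = 9999; // let Unity run unthrottled
        _lastFrameTime = Time.realtimeSinceStartup;
        StartCoroutine(WaitForNextFrame());
    }

    IEnumerator WaitForNextFrame()
    {
        while (true)
        {
            yield return new WaitForEndOfFrame();
            float nextFrameTime = _lastFrameTime + 1f / targetFps;

            // Coarse wait: sleep while we are more than ~2 ms early,
            // so we don't burn CPU for the bulk of the interval.
            float remaining = nextFrameTime - Time.realtimeSinceStartup;
            if (remaining > 0.002f)
                Thread.Sleep((int)((remaining - 0.002f) * 1000f));

            // Fine wait: spin until the deadline. For genlock, break out of
            // this loop as soon as the external sync signal arrives instead.
            while (Time.realtimeSinceStartup < nextFrameTime) { }

            _lastFrameTime = nextFrameTime;
        }
    }
}
```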

Finally, on a related topic, if you are trying to coordinate time with an external clock source, you may also want Unity’s internal game time to advance at the same pace (rather than follow the CPU’s clock, which may drift over time relative to the external source). This can be achieved by setting Time.captureFramerate to the same rate as the external clock. For example, if your external genlock signal is operating at 60fps, setting captureFramerate to 60 instructs Unity to allow exactly 1/60th of a second of game time to elapse between frame renders, regardless of exactly how much real time has passed. As of the Unity 2019.2 beta, it is possible to set a floating point capture frame rate via Time.captureDeltaTime. For older Unity versions, if your external signal is not operating at an integer rate (say, 59.94), you can vary captureFramerate at every frame render to achieve the desired average rate for the advancement of time.
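As a sketch, locking game-time advancement to a 59.94Hz (NTSC-style) external clock might look like this (component name is ours):

```csharp
using UnityEngine;

// Sketch: force game time to advance at a fixed external rate.
public class FixedGameTime : MonoBehaviour
{
    void Awake()
    {
#if UNITY_2019_2_OR_NEWER
        // 2019.2+ accepts fractional frame times directly.
        Time.captureDeltaTime = 1f / 59.94f;
#else
        // Older versions only take an integer rate; to average out to
        // 59.94 fps you would vary this between 59 and 60 across frames.
        Time.captureFramerate = 60;
#endif
    }
}
```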

A sample proof-of-concept Unity project exploring the above topics is available at our GitHub project page. Specifically, precise control of frame rate using the WaitForEndOfFrame coroutine is given in ForceRenderRate.cs. A more complex example, which emulates an external genlock, can be found in GenLockedRecorder.cs. Although Unity does not natively support any vendor-specific genlock implementation, the latter is a good starting point for integration with a third-party SDK offering this feature.

Please note that all of the above techniques yield the most stable frame rates when part of a standalone player build. They are nonetheless functional in Play Mode inside the Unity Editor, but you may experience momentary fluctuations from time to time.

16 replies on “Precise frame rates in Unity”

It’s very odd that, in an article about precise frame rate (and thus: timings), there’s no mention of THE class provided by C# to do exactly that: System.Diagnostics.Stopwatch.

Seriously, whenever precise timing measures are needed, Stopwatch is the way to go. And yes, it’s perfectly fine to use it at runtime (sometimes I bump into people who believe that, since the namespace is called Diagnostics, it’s not advisable to use it in production code: this is wrong).
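For reference, a minimal sketch of what the commenter is pointing at; Stopwatch is a high-resolution, monotonic timer in plain .NET:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Minimal sketch: time an operation with System.Diagnostics.Stopwatch.
class StopwatchDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Thread.Sleep(20); // stand-in for the work being measured
        sw.Stop();
        // Resolution is Stopwatch.Frequency ticks per second,
        // typically well below a microsecond on modern hardware.
        Console.WriteLine($"{sw.Elapsed.TotalMilliseconds:F3} ms elapsed");
    }
}
```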

Unity has a serious problem with the method used for calculating Time.deltaTime, which causes frame stuttering on many systems/devices, no matter what you do. The solution in this article seems to provide a way to get better timing, but at the cost of tearing (no vsync). Not working properly in the Editor is also a con.

This is the cause of the frame stuttering problems in Unity: an ancient issue that other engines have somehow resolved, but that Unity has left behind, so it’s now a major issue:

Summarized and explained:

Once the problem is understood, I’ve been able to get perfectly smooth 60 fps in Unity (both builds and Editor) with this:

– Set Time.captureFrameRate = 60
– Force VSync enabled externally (display’s control panel, or opening Unity in OpenGL mode), as Time.captureFrameRate disables VSync.

The above solution causes Time.deltaTime to be exactly 1/60 seconds each frame. VSync forced means each frame is displayed at its correct time. Result is a perfectly smooth motion in both Editor and builds.

It’s not a generic solution, as it causes other issues (e.g. no frames are ever dropped, no matter the CPU/GPU load), but it demonstrates the nature of the problem. It won’t get magically resolved by itself unless Unity devs decide to really understand and tackle it directly. Time.deltaTime must be computed based on the frame display rate, not the frame rendering rate as it is now.

realtimeSinceStartup’s precision may be lacking, as it is tied to the system clock.
Moreover, it’s a float, and thus the mantissa has only 23 bits of precision.
This means that after running for ~2.5 hours, the precision drops below 1ms.
To achieve a smooth 60fps, you need to wait 16.67ms between frames. With 1ms precision that’s not possible, so after ~1 hour you may already notice small frame hiccups.

To get an accurate time counter, you need to maintain your own double timer, incremented by Time.unscaledDeltaTime in each Update(). (You can also use , but its behavior seems sketchy when relatively high precision is needed, possibly only on multi-core CPUs; Stopwatch is another solution that I didn’t check in depth for this purpose.)
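A minimal sketch of the double-accumulator approach the commenter describes (component name is ours):

```csharp
using UnityEngine;

// Sketch: accumulate elapsed time in a double to sidestep the float
// precision loss of realtimeSinceStartup in long-running sessions.
public class DoubleClock : MonoBehaviour
{
    double _elapsed; // seconds since this component started

    public double Elapsed => _elapsed;

    void Update()
    {
        _elapsed += Time.unscaledDeltaTime;
    }
}
```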

It’d be better if Unity could provide realtimeSinceStartup as a double… :-)

No, genlock is just a clock source to the GPU(s), usually via a sync card (e.g. NVIDIA Quadro Sync II), causing the scanline across all output display(s) to occur at the same time at the clocked frequency. Unity needs to have a fully rasterized frame presented (for all outputs) before this clocked rate (VSync). Failure to do so results in a dropped frame (very bad in broadcast/film).

1. Why might setting targetFrameRate not yield the required precision?

2. What happens if you don’t “spin the CPU while checking for exactly the right time to allow the next frame to start” and sleep for as long as you need? (currentTime – t)

3. If you run that on iOS you will cause your game to run at 30fps instead of 50, right? That is, this is not meant to be used on iOS…?

It would be nice if you could set the physics fps, or set it to track the frame rate. Currently you have a timestep defaulting to 50 fps (0.02), and if you want it at 60fps you have to put in a crazy fraction, 1/60. Sure, you can interpolate physics for display, but wouldn’t it be faster to have a core physics fps matching the game tick or refresh rate?
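For context, a sketch of what the commenter is asking about; the fixed timestep is already scriptable (component name is ours):

```csharp
using UnityEngine;

// Sketch: run the physics step at 60 Hz instead of the default 50 Hz (0.02 s).
public class PhysicsRate : MonoBehaviour
{
    void Awake()
    {
        Time.fixedDeltaTime = 1f / 60f; // FixedUpdate now ticks ~every 16.67 ms
    }
}
```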

The case for this is actually a lot weaker than most people think.

Physics, for instance, doesn’t actually do a whole lot at 60 Hz. By that, I mean there’s not much that naturally occurs or needs to be checked every 16.66ms. For racing games it’s higher, typically in the hundreds of Hz, to catch the rumble strips and fine detail when you’re moving at ~100m/s. On the other hand, for first-person games, collisions between players, objects and props aren’t so rapid, so 50, 40 or even 25Hz can work. I’ve seen one game even get away with 12.5 Hz physics timesteps, and nobody noticed.

As for 50 Hz physics on a 60 Hz display, the difference is between 20ms and 16.66ms, so about 3.4ms. So, assuming you could mis-align things, one object could move through another about 3.4ms longer than it should have under ideal conditions. Think of it as playing an online game with position extrapolation and a ping of 3.4ms; you would barely notice it.

The case for 60, as in the same as the framerate, could be made for things, yes, but if you’re basing everything on screen framerate (so a 144Hz monitor would use 144Hz physics), you already have Update to work with for these sort of things.

If the developer is lazy, then FixedUpdate can be used for non physics purposes, yeah. All it really does though is run before Update if it’s required. You can absolutely replicate it with your own timer in a couple of lines of code. It doesn’t run parallel or anything, it runs before Update 0 to n times per frame.
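The commenter’s "couple of lines of code" replacement might look something like this sketch (names are ours):

```csharp
using UnityEngine;

// Sketch: a fixed-rate tick replicated with a plain accumulator in Update,
// mirroring what FixedUpdate does (runs 0..n times before each frame).
public class ManualFixedTick : MonoBehaviour
{
    const float Step = 0.02f; // 50 Hz, like the default fixed timestep
    float _accumulator;

    void Update()
    {
        _accumulator += Time.deltaTime;
        while (_accumulator >= Step)
        {
            _accumulator -= Step;
            Tick();
        }
    }

    void Tick()
    {
        // fixed-rate, non-physics logic goes here
    }
}
```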

Having said that, if your game is physics based and gameplay or anims rely on physics being in place then sure – put the code in FixedUpdate.

Don’t do it if you’re not using physics or physics-dependent code though, you’re just introducing potential for things to go wrong if inexperienced. If experienced then… well you wouldn’t be using it for a purpose it’s not made for.

I’m not sure coroutines really are the best model for doing things like this either. They generate garbage and can cause frame hitches, though with the new incremental GC it should be reasonable. In a larger, older project the GC might still stall if it’s been running for a long time (5 mins or so?). But the information in the blog is more than enough for people to roll their own timings if they wish; thanks for the contribution.

Note that WaitForEndOfFrame() is quite unreliable in the Editor, as it also triggers on Game window redraws caused by the Editor. I tried to bug this in the past, but the answer was that this is the correct behaviour by design…
From the bug report (1077480): “This is the intended behavior. Every function like OnGUI(), WaitForEndOfFrame() is rendering-dependent code, so it gets called when the editor renders the game view, even if it’s paused.”

Interesting. The only reason I would want to do this is on mobile… however, the problem with doing so is that input handling, especially touch, becomes really awful for interaction and responsiveness… I’m hoping the new input system will be looked into, with a more official built-in way of doing this better for battery life on mobile with Unity-based projects that aren’t exactly games… because frankly, doing multiplatform using anything else is just rubbish, I’ve found. C# cross-platform is the sweet spot, but the lack of any decent cross-platform GUI framework is the real problem, and uGUI, despite being rather lackluster on features, is better than the hassle of anything else I’ve found.
