G-SYNC 101: Input Lag & Optimal Settings



Test Setup

High Speed Camera: Casio Exilim EX-ZR200 w/1000 FPS 224x64px video capture
Display: Acer Predator XB252Q 240Hz w/G-SYNC (1920×1080)
Mouse: Razer Deathadder Chroma modified w/external LED

Nvidia Driver: 381.78
Nvidia Control Panel: Default settings (“Prefer maximum performance” enabled)

OS: Windows 10 Home 64-bit (Creators Update)
Motherboard: ASRock Z87 Extreme4
Power Supply: EVGA SuperNOVA 750 W G2
Heatsink: Hyper 212 Evo w/2x Noctua NF-F12 fans
CPU: i7-4770k @4.2GHz w/Hyper-Threading enabled (8 cores unparked: 4 physical/4 virtual)
GPU: EVGA GTX 1080 FTW GAMING ACX 3.0 w/8GB VRAM & 1975MHz Boost Clock
Sound Card: Creative Sound Blaster Z (optical audio)
RAM: 16GB G.SKILL Sniper DDR3 @1866 MHz (dual-channel: 9-10-9-28, 2T)
SSD (OS): 256GB Samsung 850 Pro
HDD (Games): 5TB Western Digital Black 7200 RPM w/128 MB cache

Test Game #1: Overwatch w/lowest settings, “Reduced Buffering” enabled
Test Game #2: Counter-Strike: Global Offensive w/lowest settings, “Multicore Rendering” disabled

Introduction

The input lag testing method used in this article was pioneered by Blur Busters’ Mark Rejhon, and originally featured in his 2014 Preview of NVIDIA G-SYNC, Part #2 (Input Lag) article. It has become the standard among testers since, and is used by a variety of sources across the web.

Middle Screen vs. First On-screen Reaction

In my original input lag tests featured in this thread on the Blur Busters Forums, I measured middle screen (crosshair-level) reactions at a single refresh rate (144Hz), and found that both V-SYNC OFF and G-SYNC, at the same framerate within the refresh rate, delivered frames to the middle of the screen at virtually the same time. This still holds true.

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

However, while middle screen measurements are a common and fully valid input lag testing method, they are limited in what they can reveal; they do not account for the first on-screen reaction, which can mask the subtle and not-so-subtle differences in frame delivery between V-SYNC OFF and the various syncing solutions. This is the reason I opted to capture the entire screen this time around.

Due to the differences between the two test methods, V-SYNC OFF results generated from first on-screen measurements, especially at lower refresh rates (for reasons that will later be explained), can appear to have up to twice the input lag reduction of middle screen readings:

Blur Buster's G-SYNC 101: Middle Screen vs. First On-screen Reaction Diagram

As the diagram shows, this is because the measurement of the first on-screen reaction is begun at the start of the frame scan, whereas the measurement of the middle screen reaction is begun at crosshair-level, where, with G-SYNC, the in-progress frame scan is already half completed, and with V-SYNC OFF, can be at various percentages of completion, depending on the given refresh rate/framerate offset.

When V-SYNC OFF is directly compared to FPS-limited G-SYNC at crosshair-level, even with V-SYNC OFF’s framerate at up to 3x the refresh rate, middle screen readings are virtually a wash (the results in this article included). But, as will be detailed further on, V-SYNC OFF can, for lack of a better term, “defeat” the scanout by beginning the next frame scan in the previous scanout.

With V-SYNC OFF limited to 2 FPS below the refresh rate, for instance (the scenario used to compare V-SYNC OFF directly against G-SYNC in this article), the tearline will continuously roll upward, which means, when measured by first on-screen reactions, its advantage over G-SYNC can be anywhere from 0 to 1/2 frame, depending on the ever-fluctuating position of the tearline between samples. With middle screen readings, the initial position of the tearline(s), and thus its advantage, is effectively ignored.

These differences should be kept in mind when inspecting the upcoming results; the method featured in this article is the best case scenario for V-SYNC OFF, and the worst case scenario for synced solutions (G-SYNC included) when directly compared to V-SYNC OFF.

Test Methodology

To further facilitate the first on-screen reaction method, I’ve changed sample capture from muzzle flash to strafe for Overwatch (credit goes to Battle(non)sense for the initial suggestion) and look for CS:GO, which triggers horizontal updates across the entire screen. The strafe/look mechanics are also more consistent from click to click, and less prone to the built-in variable delay experienced from shot to shot with the previous method.

To ensure a proper control environment for testing, and rule out as many variables as possible, the Nvidia Control Panel settings (except for “Power management mode,” which was set to “Prefer maximum performance”) were left at defaults, all background programs were closed, and all overlays were disabled, as were the Creators Update’s newly introduced “Game Mode” and .exe Compatibility option “fullscreen optimizations,” along with the existing “Game bar” and “Game DVR” options.

To guarantee extraneous mouse movements didn’t throw off input reads during rapid clicks, masking tape was placed over the sensor of the modified test mouse (Deathadder Chroma), and a second mouse (Deathadder Elite) was used to navigate the game menus and get into place for sample capture.

To emulate lower maximum refresh rates on the native 240Hz Acer Predator XB252Q, “Preferred refresh rate” was set to “Application-controlled” when G-SYNC was enabled, and the refresh rate was manually adjusted as needed in the game options (Overwatch), or on the desktop (CS:GO) before launch.

And, finally, to validate and track the refresh rate, framerate, and the syncing solution in use for each scenario, the in-game FPS counter, Nvidia Control Panel’s G-SYNC Indicator, and the display’s built-in refresh rate meter were active at all times.

Testing was performed with a Casio Exilim EX-ZR200 capable of 1000 FPS high speed video capture (accurate within 1ms), and a Razer Deathadder Chroma modified with an external LED (credit goes to Chief Blur Buster for the mod), which lights up on left click, and has a reactive variance of <1ms.

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

To compensate for the camera’s low 224×64 pixel video resolution, a bright image with stark contrast between foreground and background, and thin vertical elements that could easily betray horizontal movement across the screen, were needed for reliable discernment of first reactions after click.

For Overwatch, Genji was used due to his smaller viewmodel and ability to scale walls, and an optimal spot on the game’s Practice Range was found that met the aforementioned criteria. Left click was mapped to strafe left, in-game settings were at the lowest available, and “Reduced Buffering” was enabled to ensure the lowest input latency possible.

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

For CS:GO, a custom map provided by the Blur Busters Forum’s lexlazootin was used, which strips all unnecessary elements (time limits, objectives, assets, viewmodel, etc), and contains a lone white square suspended in a black void, that when positioned just right, allows the slightest reactions to be accurately spotted via the singular vertical black and white separation. Left click was mapped to look left, in-game settings were at the lowest available, and “Multicore Rendering” was disabled to ensure the lowest input latency possible.

For capture, the Acer Predator XB252Q (LED fixed to its left side) was recorded as the mouse was clicked a total of ten times. To average out differences between runs, this process was repeated four times per scenario, and each game was restarted after each run.

Once all scenarios were recorded, the .mov format videos, containing ten samples each, were inspected in QuickTime using its built-in frame counter and single frame stepping function via the arrow keys. The video was jogged through until the LED lit up, at which point the frame number was input into an Excel spreadsheet. Frames (which, thanks to 1000 FPS video capture, each represent a literal 1ms) were then stepped through until the first reaction was spotted on-screen, where, again, the frame number was input into the spreadsheet. This generated the total delay in milliseconds from left click to first on-screen reaction, and the process was repeated per video, ad nauseam.
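
For those who prefer to see the arithmetic spelled out, below is a minimal Python sketch of the per-sample calculation; the frame numbers are hypothetical, and only illustrate how each run’s average was derived.

```python
# With 1000 FPS capture, one video frame = 1ms, so click-to-reaction delay is simply
# the difference between the LED-on frame number and the first-reaction frame number.

# (led_frame, reaction_frame) pairs from one run (hypothetical numbers)
run = [(102, 128), (311, 334), (540, 561)]

delays_ms = [reaction - led for led, reaction in run]
print(delays_ms)                         # e.g. [26, 23, 21]
print(sum(delays_ms) / len(delays_ms))   # average delay for the run, in ms
```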

All told, 508 videos weighing in at 17.5GB, with an aggregated (albeit slow-motion) 45 hour and 40 minute runtime, were recorded across 2 games and 6 refresh rates, containing a total of 42 scenarios, 508 runs, and 5080 individual samples. My original Excel spreadsheet is available for download here, and can also be viewed online here.

Blur Buster's G-SYNC 101: Input Latency & Optimal Settings

To preface, the following results and explanations assume that the native resolution w/default timings are in use on a single monitor in exclusive fullscreen mode, paired with a single-GPU desktop system that can sustain the framerate above the refresh rate at all times.

This article does not seek to measure the impact of input lag differences incurred by display, input device, CPU or GPU overclocks, RAM timings, disk drives, drivers, BIOS, OS, or in-game graphical settings. And the baseline numbers represented in the results are not indicative of, and should not be expected to be replicable on, other systems, which will vary in configuration, specs, and the games being run.

This article seeks only to measure the impact that V-SYNC OFF, G-SYNC, V-SYNC, and Fast Sync, paired with various framerate limiters, have on frame delivery and input lag, and the differences between them; the results of which are replicable across setups.

+/- 1ms differences between identical scenarios in the following charts are usually within margin of error, while +/- 1ms differences between separate scenarios are usually measurable, and the error margin may not apply. And finally, all mentions of “V-SYNC (NVCP)” in the denoted scenarios signify that the Nvidia Control Panel’s “Vertical sync” entry was set to “On,” and “V-SYNC OFF” or “G-SYNC + V-SYNC ‘Off'” signify that “Use the 3D application setting” was applied w/V-SYNC disabled in-game.

So, without further ado, onto the results…

Input Lag: Not All Frames Are Created Equal

When it is said that there is “1 frame” or “2 frames” of delay, what does that actually mean? In this context, a “frame” signifies the total time a rendered frame takes to be displayed completely on-screen. The worth of a single frame is dependent on the display’s maximum native refresh rate. At 60Hz, a frame is worth 16.6ms, at 100Hz: 10ms, 120Hz: 8.3ms, 144Hz: 6.9ms, 200Hz: 5ms, and 240Hz: 4.2ms, continuing to decrease in worth as the refresh rate increases.
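
For reference, the math is trivial; here’s a quick Python sketch of the per-frame “worth” at the refresh rates used in this article, and how a given number of frames of delay converts to milliseconds:

```python
# Frame "worth" at a given maximum refresh rate, in milliseconds.
def frame_worth_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

for hz in (60, 100, 120, 144, 200, 240):
    print(f"{hz}Hz: {frame_worth_ms(hz):.1f}ms per frame")  # 16.7, 10.0, 8.3, 6.9, 5.0, 4.2

# "N frames of delay" converts to time by multiplying by the frame worth:
print(3.5 * frame_worth_ms(60))    # ~58ms of added delay at 60Hz
print(3.5 * frame_worth_ms(240))   # ~15ms of added delay at 240Hz
```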

With double buffer V-SYNC, there is typically a 2 frame delay when the framerate exceeds the refresh rate, but this isn’t always the case. Overwatch, even with “Reduced Buffering” enabled, can have up to 4 frames of delay with double buffer V-SYNC engaged.

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

The chart above depicts anywhere from 3 to 3 1/2 frames of added delay. At 60Hz, this is significant, at up to 58.1ms of additional input lag. At 240Hz, where a single frame is worth far less (4.2ms), a 3 1/2 frame delay is comparatively insignificant, at up to 14.7ms.

In other words, a “frame” of delay is relative to the refresh rate, which dictates how much or how little of a delay is incurred per frame; a relationship that should be kept in mind going forward.

G-SYNC Ceiling vs. V-SYNC: Identical or Fraternal?

As described in G-SYNC 101: Range, G-SYNC doesn’t actually become double buffer V-SYNC above its range (nor does V-SYNC take over), but instead, G-SYNC mimics V-SYNC behavior when it can no longer adjust the refresh rate to the framerate. So, when G-SYNC hits or exceeds its ceiling, how close is it to behaving like standalone V-SYNC?

[Input lag charts: G-SYNC ceiling vs. V-SYNC, per tested refresh rate]

Pretty close. However, the G-SYNC numbers do show a reduction, mainly in the minimums and averages across refresh rates. Why? It boils down to how G-SYNC and V-SYNC behavior differ whenever the framerate falls (even for a moment) below the maximum refresh rate. With double buffer V-SYNC, a fixed frame delivery window is missed and the framerate is locked to half the refresh rate by a repeated frame, maintaining extra latency, whereas G-SYNC adjusts the refresh rate to the framerate in the same instance, eliminating the latency.

As for “triple buffer” V-SYNC, while the subject won’t be delved into here due to the fact that G-SYNC is based on a double buffer, the name actually encompasses two entirely separate methods; the first should be considered “alt” triple buffer V-SYNC, and is the method featured in the majority of modern games. Unlike double buffer V-SYNC, it prevents the lock to half the refresh rate when the framerate falls below it, but in turn, adds 1 frame of delay over double buffer V-SYNC when the framerate exceeds the refresh rate; if double buffer adds 2-6 frames of delay, for instance, this method would add 3-7 frames.

“True” triple buffer V-SYNC, like “alt,” prevents the lock to half the refresh rate, but unlike “alt,” can actually reduce V-SYNC latency when the framerate exceeds the refresh rate. This “true” method is rarely used, and its availability, in part, can depend on the game engine’s API (OpenGL, DirectX, etc). A form of this method is implemented by the DWM (Desktop Window Manager) for borderless and windowed mode, and by Fast Sync, both of which will be explained in more detail further on.
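
For readers who want a mental model of the distinction, here is a simplified, purely conceptual Python sketch of the two behaviors; it is not how any particular driver or engine actually implements them, only an illustration of why the “alt” method adds latency above the refresh rate while the “true” method sheds it.

```python
from collections import deque

class AltTripleBuffer:
    """'Alt' triple buffer V-SYNC: finished frames queue up (FIFO) behind the front
    buffer, so above the refresh rate each queued frame waits a full refresh cycle,
    adding latency on top of double buffer V-SYNC."""
    def __init__(self):
        self.queue = deque()  # two back buffers behind the front buffer

    def frame_rendered(self, frame) -> bool:
        if len(self.queue) < 2:
            self.queue.append(frame)
            return True       # frame accepted
        return False          # queue full: renderer blocks until the next refresh

    def on_refresh(self):
        return self.queue.popleft() if self.queue else None  # oldest frame is shown

class TrueTripleBuffer:
    """'True' triple buffer V-SYNC (the DWM/Fast Sync style): the newest completed
    frame replaces any older pending one, so the display always receives the most
    recently rendered frame, and excess frames reduce latency instead of adding it."""
    def __init__(self):
        self.pending = None

    def frame_rendered(self, frame) -> bool:
        self.pending = frame  # renderer never blocks; stale pending frames are dropped
        return True

    def on_refresh(self):
        frame, self.pending = self.pending, None
        return frame
```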

Suffice to say, even at its worst, G-SYNC beats V-SYNC.

G-SYNC Ceiling vs. FPS Limit: How Low Should You Go?

Blur Busters was the world’s first site to test G-SYNC in Preview of NVIDIA G-SYNC, Part #1 (Fluidity) using an ASUS VG248QE pre-installed with a G-SYNC upgrade kit. At the time, the consensus was that limiting the FPS to between 135 and 138 at 144Hz was enough to avoid V-SYNC-level input lag.

However, much has changed since the first G-SYNC upgrade kit was released; the Minimum Refresh Range wasn’t in place, the V-SYNC toggle had yet to be exposed, G-SYNC did not support borderless or windowed mode, and there was even a small performance penalty on the Kepler architecture at the time (Maxwell and later corrected this).

My own testing in my Blur Busters Forum thread found that just 2 FPS below the refresh rate was enough to avoid the G-SYNC ceiling. However, now armed with improved testing methods and equipment, is this still the case, and does the required FPS limit change depending on the refresh rate?

[Input lag charts: G-SYNC ceiling vs. FPS limits, per tested refresh rate]

As the results show, just 2 FPS below the refresh rate is indeed still enough to avoid the G-SYNC ceiling and prevent V-SYNC-level input lag, and this number does not change, regardless of the maximum refresh rate in use.

To leave no stone unturned, “at,” -1, -2, and finally -10 FPS limits were tested to prove that even far below -2 FPS, no real improvements can be had. In fact, limiting the FPS lower than needed can actually slightly increase input lag, especially at lower refresh rates, since frametimes quickly become higher, and thus frame delivery becomes slower due to the decrease in sustained framerates.

As for the “perfect” number, going by the results, and taking into consideration variances in accuracy from FPS limiter to FPS limiter, along with differences in performance from system to system, a -3 FPS limit is the safest bet, and is my new recommendation. A lower FPS limit, at least for the purpose of avoiding the G-SYNC ceiling, will simply rob frames.
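
Expressed as a trivial calculation (Python, with values matching the recommendation above and the “Optimal G-SYNC Settings” list later in this article):

```python
# Recommended FPS cap for staying inside the G-SYNC range: 3 FPS below max refresh.
def gsync_fps_cap(max_refresh_hz: int) -> int:
    return max_refresh_hz - 3

for hz in (60, 100, 120, 144, 200, 240):
    print(f"{hz}Hz -> cap at {gsync_fps_cap(hz)} FPS")
# 60Hz -> 57, 100Hz -> 97, 120Hz -> 117, 144Hz -> 141, 200Hz -> 197, 240Hz -> 237
```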

G-SYNC vs. V-SYNC OFF: At the Mercy of the Scanout

Now that the FPS limit required for G-SYNC to avoid V-SYNC-level input lag has been established, how do G-SYNC + V-SYNC and G-SYNC + V-SYNC “Off” compare to V-SYNC OFF at the same framerate?

[Input lag charts: G-SYNC + V-SYNC, G-SYNC + V-SYNC “Off,” and V-SYNC OFF at the same framerate, per tested refresh rate]

The results show a consistent difference between the three methods across most refresh rates (240Hz is nearly equalized in any scenario), with V-SYNC OFF (G-SYNC + V-SYNC “Off,” to a lesser degree) appearing to have a slight edge over G-SYNC + V-SYNC. Why? The answer is tearing…

With any vertical synchronization method, the delivery speed of a single, tear-free frame (barring unrelated frame delay caused by many other factors) is ultimately limited by the scanout. As mentioned in G-SYNC 101: Range, the “scanout” is the total time it takes a single frame to be physically drawn, pixel by pixel, left to right, top to bottom on-screen.

With a fixed refresh rate display, both the refresh rate and scanout remain fixed at their maximum, regardless of framerate. With G-SYNC, the refresh rate is matched to the framerate, and while the scanout speed remains fixed, the refresh rate controls how many times the scanout is repeated per second (60 times at 60 FPS/60Hz, 45 times at 45 fps/45Hz, etc), along with the duration of the vertical blanking interval (the span between the previous and next frame scan), where G-SYNC calculates and performs all overdrive and synchronization adjustments from frame to frame.

The scanout speed itself, both on a fixed refresh rate and variable refresh rate display, is dictated by the current maximum refresh rate of the display:

Blur Buster's G-SYNC 101: Scanout Speed Diagram

As the diagram shows, the higher the refresh rate of the display, the faster the scanout speed becomes. This also explains why V-SYNC OFF’s input lag advantage, especially at the same framerate as G-SYNC, is reduced as the refresh rate increases; single frame delivery becomes faster, and V-SYNC OFF has less of an opportunity to defeat the scanout.
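
To put rough numbers on that relationship, here is a small Python sketch; it assumes the simplified model described above, where the scanout always runs at max-refresh speed and G-SYNC stretches only the blanking interval between scanouts (the small fixed blanking that exists even at max refresh is ignored for clarity).

```python
# Scanout time is fixed by the display's MAXIMUM refresh rate, not the current framerate.
def scanout_ms(max_refresh_hz: float) -> float:
    return 1000.0 / max_refresh_hz

# Within the G-SYNC range, a lower framerate only lengthens the blanking interval
# between scanouts; the frame itself is still scanned in at max-refresh speed.
def blanking_ms(max_refresh_hz: float, framerate: float) -> float:
    return 1000.0 / framerate - scanout_ms(max_refresh_hz)

print(scanout_ms(240))        # ~4.2ms per frame scan on a 240Hz panel
print(scanout_ms(144))        # ~6.9ms per frame scan on a 144Hz panel
print(scanout_ms(60))         # ~16.7ms per frame scan on a 60Hz panel
print(blanking_ms(144, 60))   # ~9.7ms of blanking per frame at 60 FPS on a 144Hz panel
```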

V-SYNC OFF can defeat the scanout by starting the scan of the next frame(s) within the previous frame’s scanout anywhere on screen, and at any given time:

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

This results in simultaneous delivery of more than one frame scan in a single scanout (tearing), but also a reduction in input lag; the amount of which is dictated by the positioning and number of tearline(s), which is further dictated by the refresh rate/sustained framerate ratio (more on this later).

As noted in G-SYNC 101: Range, G-SYNC + VSYNC “Off” (a.k.a. Adaptive G-SYNC) can have a slight input lag reduction over G-SYNC + V-SYNC as well, since it will opt for tearing instead of aligning the next frame scan to the next scanout when sudden frametime variances occur.

To eliminate tearing, G-SYNC + V-SYNC is limited to completing a single frame scan per scanout, and it must follow the scanout from top to bottom, without exception. On paper, this can give the impression that G-SYNC + V-SYNC has an increase in latency over the other two methods. However, the delivery of a single, complete frame with G-SYNC + V-SYNC is actually the fastest possible (the neutral speed), and the advantage seen with V-SYNC OFF is a reduction below that neutral delivery speed, due to its ability to defeat the scanout.

Bottom-line, within its range, G-SYNC + V-SYNC delivers single, tear-free frames to the display the fastest the scanout allows; any faster, and tearing would be introduced.

G-SYNC vs. V-SYNC w/FPS Limit: So Close, Yet So Far Apart

On the subject of single, tear-free frame delivery, how does standalone double buffer V-SYNC compare to G-SYNC with the same framerate limit?

[Input lag charts: G-SYNC vs. double buffer V-SYNC w/FPS limit, per tested refresh rate]

As the results show, except at 60Hz (remember, a “frame” of delay is relative to the refresh rate), the numbers are relatively close. So what’s so great about G-SYNC’s ability to adjust the refresh rate to the framerate, if the majority of added input latency with V-SYNC can be eliminated with a simple FPS limit? Well, as the title of this section hints, it’s not quite that cut and dry…

While it’s common knowledge that limiting the FPS below the refresh rate with V-SYNC prevents the over-queuing of frames, and thus the majority of added input latency, it isn’t without its downsides.

Unlike G-SYNC, V-SYNC must attempt to time frame delivery to the fixed refresh rate of the display. If it misses a single one of these delivery windows below the maximum refresh rate, the current frame must repeat once until the next frame can be displayed, locking the framerate to half the refresh rate and causing stutter. If the framerate exceeds the maximum refresh rate, the display can’t keep up with frame output, rendered frames over-queue in both buffers, and the appearance of new frames is delayed yet again, which is why an FPS limit is needed to prevent this in the first place.

When an FPS limit is set with V-SYNC, the number of frames it can deliver per second shrinks. If, for instance, the FPS limiter is set to 59 FPS on a 60Hz display, instead of 60 frames being delivered per second, only 59 will be delivered, which means roughly once every second a frame will repeat.
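
The cadence of those repeats is easy to estimate; the Python sketch below assumes a perfectly steady framerate at the limit, which is also why a cap mere decimals below the refresh rate (as recommended below) makes the resulting stutter all but vanish.

```python
# Approximate repeated frames (stutters) per second with V-SYNC when the FPS limit
# sits below a fixed refresh rate, assuming the limited framerate is perfectly steady.
def repeats_per_second(refresh_hz: float, fps_limit: float) -> float:
    return max(refresh_hz - fps_limit, 0.0)   # refresh cycles left with no new frame

print(repeats_per_second(60, 59))      # 1 repeated frame per second
print(repeats_per_second(60, 57))      # 3 repeated frames per second
print(repeats_per_second(144, 143.9))  # ~0.1/sec: one repeat roughly every 10 seconds
```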

As the numbers show, while G-SYNC and V-SYNC averages are close, the difference eventually adds up over a period of frames (evident by the maximums), causing 1/2 to 1 frame of accumulative delay, as well as recurring stutter due to repeated frames. This is why it is recommended to set a V-SYNC FPS limit mere decimals below the refresh rate via external programs such as RTSS.

That said, an FPS limit is superior to no FPS limit with double buffer V-SYNC, so long as the framerate can be sustained above the refresh rate at all times. However, G-SYNC’s ability to adjust the refresh rate to the framerate eliminates this issue entirely, and, yet again, beats V-SYNC hands down.

G-SYNC vs. Fast Sync: The Limits of Single Frame Delivery

Okay, so what about Fast Sync? Unlike G-SYNC, it works with any display, and while it’s still a fixed refresh rate syncing solution, its third buffer allows the framerate to exceed the refresh rate, and it utilizes the excess frames to deliver them to the display as fast as possible. This avoids double buffer behavior both above and below the refresh rate, and eliminates the majority of V-SYNC input latency.

Sounds ideal, but how does it compare to G-SYNC?

[Input lag charts: G-SYNC vs. Fast Sync, per tested refresh rate]

As evident by the results, Fast Sync only begins to reduce input lag over FPS-limited double buffer V-SYNC when the framerate far exceeds the display’s refresh rate. Like G-SYNC and V-SYNC, it is limited to completing a single frame scan per scanout to prevent tearing, and as the 60Hz scenarios show, 300 FPS Fast Sync at 60Hz (a 5x ratio) is as low latency as G-SYNC is with a 58 FPS limit at 60Hz.

However, the fewer excess frames available for the third buffer to sample from, the more Fast Sync’s latency levels begin to resemble double buffer V-SYNC with an FPS limit. And if the third buffer is completely starved, as evident in the Fast Sync + FPS limit scenarios, it effectively reverts to FPS-limited V-SYNC latency, with an additional 1/2 to 1 frame of delay.

Unlike double buffer V-SYNC, however, Fast Sync won’t lock the framerate to half the maximum refresh rate if it falls below it, but like double buffer V-SYNC, Fast Sync will periodically repeat frames if the FPS is limited below the refresh rate, causing stutter. As such, an FPS limit below the refresh rate should be avoided when possible, and Fast Sync is best used when the framerate can exceed the refresh rate by at least 2x to 3x, or ideally, 5x.

So, what about pairing Fast Sync with G-SYNC? Even Nvidia suggests it can be done, but doesn’t go so far as to recommend it. But while it can be paired, it shouldn’t be…

Say the system can maintain an average framerate just above the maximum refresh rate, and instead of an FPS limit being applied to avoid V-SYNC-level input lag, Fast Sync is enabled on top of G-SYNC. In this scenario, G-SYNC is disabled 99% of the time, and Fast Sync, with very few excess frames to work with, not only has more input lag than G-SYNC would at a lower framerate, but it can also introduce uneven frame pacing (due to dropped frames), causing recurring microstutter. Further, even if the framerate could be sustained 5x above the refresh rate, Fast Sync would (at best) only match G-SYNC latency levels, and the uneven frame pacing (while reduced) would still occur.

That’s not to say there aren’t any benefits to Fast Sync over V-SYNC on a standard display (60Hz at 300 FPS, for instance), but pairing Fast Sync with uncapped G-SYNC is effectively a waste of a G-SYNC monitor, and an appropriate FPS limit should always be opted for instead.

Which poses the next question: if uncapped G-SYNC shouldn’t be used with Fast Sync, is there any benefit to using G-SYNC + Fast Sync + FPS limit over G-SYNC + V-SYNC (NVCP) + FPS limit?

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

The answer is no. In fact, unlike G-SYNC + V-SYNC, Fast Sync remains active near the maximum refresh rate, even inside the G-SYNC range, reserving more frames for itself the higher the native refresh rate is. At 60Hz, it limits the framerate to 59, at 100Hz: 97 FPS, 120Hz: 116 FPS, 144Hz: 138 FPS, 200Hz: 189 FPS, and 240Hz: 224 FPS. This effectively means with G-SYNC + Fast Sync, Fast Sync remains active until it is limited at or below the aforementioned framerates, otherwise, it introduces up to a frame of delay, and causes recurring microstutter. And while G-SYNC + Fast Sync does appear to behave identically to G-SYNC + V-SYNC inside the Minimum Refresh Range (<36 FPS), it’s safe to say that, under regular usage, G-SYNC should not be paired with Fast Sync.

V-SYNC OFF: Beyond the Limits of the Scanout

It’s already been established that single, tear-free frame delivery is limited by the scanout, and V-SYNC OFF can defeat it by allowing more than one frame scan per scanout. That said, how much of an input lag advantage can be had over G-SYNC, and how high must the framerate be sustained above the refresh rate to diminish tearing artifacts and justify the difference?

[Input lag charts: G-SYNC vs. V-SYNC OFF at increasing framerates, per tested refresh rate]

Quite high. Counting first on-screen reactions, V-SYNC OFF already has a slight input lag advantage (up to a 1/2 frame) over G-SYNC at the same framerate, especially at lower refresh rates, but it actually takes a considerable increase in framerate above the given refresh rate to widen the gap to significant levels. And while the reductions may look significant in bar chart form, even with framerates in excess of 3x the refresh rate, and when measured at middle screen (crosshair-level) only, V-SYNC OFF has a limited advantage over G-SYNC in practice, and most of it is in areas that, one could argue, are comparatively useless for the average player; say, a viewmodel’s wrist being updated 1-3ms faster with V-SYNC OFF.

This is where the refresh rate/sustained framerate ratio factors in:

[Diagrams: refresh rate/sustained framerate ratio]

As shown in the above diagrams, the true advantage comes when V-SYNC OFF can allow not just two, but multiple frame scans in a single scanout. Unlike syncing solutions, with V-SYNC OFF the frametime is not paced to the scanout, and a frame will begin scanning in as soon as it’s rendered, regardless of whether the previous frame scan is still in progress. At 144Hz with 1000 FPS, for instance, this means with a sustained frametime of 1ms, the display updates nearly 7 times in a single scanout.
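
As a back-of-the-envelope sketch (Python), the number of partial frame scans that land inside one scanout is roughly the framerate divided by the refresh rate:

```python
# With V-SYNC OFF, new frames begin scanning in as soon as they are rendered, so the
# approximate number of (partial) frame scans per scanout is framerate / refresh rate.
def scans_per_scanout(framerate: float, refresh_hz: float) -> float:
    return framerate / refresh_hz

print(scans_per_scanout(1000, 144))  # ~6.9 frame scans per 144Hz scanout at 1000 FPS
print(scans_per_scanout(142, 144))   # <1: at -2 FPS, at most a single tearline rolls upward
```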

In fact, at 240Hz, first on-screen reactions became so fast at 1000 FPS and 0 FPS that the inherent delay in my mouse and display became the bottleneck for minimum measurements.

So, for competitive players, V-SYNC OFF still reigns supreme in the input lag realm, especially if sustained framerates can exceed the refresh rate by 5x or more. However, while visible tearing artifacts are all but eliminated at these ratios on higher refresh rates, the tearing can instead manifest as microstutter, and thus, even at its best, V-SYNC OFF still can’t match the consistency of G-SYNC frame delivery.

In-game vs. External FPS Limiters: Closer to the Source

Up until this point, an in-game framerate limiter has been used exclusively to test FPS-limited scenarios. However, in-game framerate limiters aren’t available in every game, and while they aren’t required for games where the framerate can’t meet or exceed the maximum refresh rate, if the system can sustain the framerate above the refresh rate and such an option isn’t present, an external framerate limiter must be used to prevent V-SYNC-level input lag instead.

In-game framerate limiters, being at the game’s engine-level, are almost always free of additional latency, as they can regulate frames at the source. External framerate limiters, on the other hand, must intercept frames further down the rendering chain, which can result in delayed frame delivery and additional input latency; how much depends on the limiter and its implementation.
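
To visualize why the point of regulation matters, below is a simplified, purely conceptual Python sketch; it is not how any real limiter (RTSS or otherwise) is implemented, and the stub input/render/present functions are hypothetical, but it shows how waiting before input is sampled keeps the presented frame fresher than waiting after it has been rendered.

```python
import time

TARGET_FRAMETIME = 1.0 / 141  # e.g. a 141 FPS cap on a 144Hz display

# Hypothetical stand-ins for a game's input/render/present calls.
def sample_input(): return time.perf_counter()   # "input age" = time it was sampled
def render(user_input): return user_input        # the frame carries its input timestamp
def present(frame):
    print(f"input age at present: {(time.perf_counter() - frame) * 1000:.2f}ms")

def engine_level_frame(next_slot):
    # Engine-level limiter: the wait happens BEFORE input is sampled, so the
    # presented frame carries the freshest possible input.
    while time.perf_counter() < next_slot:
        pass
    present(render(sample_input()))

def externally_limited_frame(next_slot):
    # External limiter (conceptually): the frame is rendered first, then held back
    # until the slot arrives, so its input is up to one frametime stale.
    frame = render(sample_input())
    while time.perf_counter() < next_slot:
        pass
    present(frame)

slot = time.perf_counter() + TARGET_FRAMETIME
engine_level_frame(slot)                           # input age ~0ms (render time only)
externally_limited_frame(slot + TARGET_FRAMETIME)  # input age ~one full frametime
```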

RTSS is a CPU-level FPS limiter, which is the closest an external method can get to the engine-level of an in-game limiter. In my initial input lag tests on my original thread, RTSS appeared to introduce no additional delay when used with G-SYNC. However, it was later discovered that disabling CS:GO’s “Multicore Rendering” setting, which runs the game on a single CPU core, caused the discrepancy, and once it was enabled, RTSS introduced the expected 1 frame of delay.

Seeing as CS:GO still uses DX9 and is a native single-core performer, I opted to test the more modern “Overwatch” this time around, which uses DX11 and features native multi-threaded/multi-core support. Will RTSS behave the same way in a native multi-core game?

[Input lag charts: RTSS FPS limiter scenarios]

Yes, RTSS still introduces up to 1 frame of delay, regardless of the syncing method (or lack thereof) used. To prove that a -2 FPS limit was enough to avoid the G-SYNC ceiling, a -10 FPS limit was tested with no improvement. The V-SYNC scenario also shows that RTSS delay stacks with other types of delay, retaining FPS-limited V-SYNC’s 1/2 to 1 frame of accumulative delay.

Next up is Nvidia’s FPS limiter, which can be accessed via the third-party “Nvidia Inspector.” Unlike RTSS, it is a driver-level limiter, one further step removed from engine-level. My original tests showed the Nvidia limiter introduced 2 frames of delay across V-SYNC OFF, V-SYNC, and G-SYNC scenarios.

[Input lag charts: Nvidia Inspector FPS limiter scenarios]

Yet again, the results for V-SYNC and V-SYNC OFF (“Use the 3D application setting” + in-game V-SYNC disabled) show that standard, out-of-the-box usage of both Nvidia’s v1 and v2 FPS limiters introduces the expected 2 frames of delay. The limiter’s impact on G-SYNC appears to be particularly unforgiving, with a 2 to 3 1/2 frame delay due to an increase in maximums at -2 FPS compared to -10 FPS, meaning -2 FPS with this limiter may not be enough to keep the framerate below the G-SYNC ceiling at all times, and the issue may be worsened by the Nvidia limiter’s own frame pacing behavior and its effect on G-SYNC functionality.

Needless to say, even if an in-game framerate limiter isn’t available, RTSS only introduces up to 1 frame of delay, which is still preferable to the 2+ frame delay added by Nvidia’s limiter with G-SYNC enabled, and a far superior alternative to the 2-6 frame delay added by uncapped G-SYNC.

G-SYNC Fullscreen vs. Borderless/Windowed: DWM Woes?

Requested by swarna in the Blur Busters Forums is a scenario that investigates the effects of the DWM (Desktop Window Manager, “Aero” in Windows 7) on G-SYNC in borderless and windowed mode.

Unlike exclusive fullscreen, which bypasses DWM composition entirely, borderless and windowed mode rely on the DWM, which, due to its framebuffer, adds 1 frame of delay. The DWM can’t be disabled in Windows 10, and uses its own form of triple buffer V-SYNC (very similar to Fast Sync) that overrides all standard syncing solutions when borderless or windowed mode is in use.

To make sure this was the case, all combinations of NVCP and in-game V-SYNC, as well as the Windows 10 “Game Mode” and “fullscreen optimization” settings, were tested to see if the DWM could be disabled and tearing could be introduced; it could not be, so Game Mode and fullscreen optimizations were disabled once again, and NVCP V-SYNC was re-enabled across scenarios for consistency’s sake.

The question is, does DWM add 1 frame of delay with G-SYNC using borderless and windowed mode?

[Input lag charts: Overwatch fullscreen vs. borderless/windowed]

Overwatch shows that, no, with G-SYNC enabled, neither borderless nor windowed mode adds 1 frame of delay over exclusive fullscreen. Standalone “V-SYNC,” however, does show the expected 1 frame of delay. CS:GO was also tested for corroboration, and ought to have the same results, as DWM behavior is at the OS level and should remain unchanged, regardless of the game…

[Input lag charts: CS:GO fullscreen vs. borderless/windowed]

Sure enough, again, G-SYNC sees no added delay, and V-SYNC sees the expected 1 frame of delay.

Further testing may be required, but it appears that on the latest public build of Windows 10 with out-of-the-box settings (with or without “Game Mode”), G-SYNC somehow bypasses the 1 frame of delay added by the DWM. That said, I still don’t suggest borderless or windowed mode over exclusive fullscreen due to the 3-5% decrease in performance, but if these findings hold true across configurations, it’s great news for games that only offer a borderless windowed option, or for multitaskers with secondary monitors.

Bonus Points: Hidden Benefits of High Refresh Rate G-SYNC

Often overlooked is G-SYNC’s ability to adjust the refresh rate to lower fixed framerates. This can be particularly useful for games hard-locked to 60 FPS, and has potential in emulators to replicate unique signals such as the 60.1Hz of NES games, which would otherwise be impossible to reproduce. And due to the scanout speed increase at 100Hz+ refresh rates, an input lag reduction can be had as well…

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

The results show a considerable input lag reduction on a 144Hz G-SYNC display @60 FPS vs. a 60Hz G-SYNC display @58 FPS with first on-screen reactions measured (middle screen would show about half this reduction). And while each frame is still rendered in 16.6ms, and delivered in intervals of 60 per second on the higher refresh rate display, each is scanned in at a much faster 6.9ms.

Optimal G-SYNC Settings*

*Settings tested with a single G-SYNC display on a single desktop GPU system; specific DSR, SLI, and multi-monitor behaviors, as well as laptop G-SYNC implementation, may vary.

Nvidia Control Panel Settings:

  • Set up G-SYNC > Enable G-SYNC > Enable G-SYNC for full screen mode.
  • Manage 3D settings > Vertical sync > On.

In-game Settings:

  • Use “Fullscreen” or “Exclusive Fullscreen” mode (some games do not offer this option, or label borderless windowed as fullscreen).
  • Disable all available “Vertical Sync,” “V-SYNC” and “Triple Buffering” options.
  • If an in-game or config file FPS limiter is available, and framerate exceeds refresh rate:
    Set 3 FPS limit below display’s maximum refresh rate (57 FPS @60Hz, 97 FPS @100Hz, 117 FPS @120Hz, 141 FPS @144Hz, etc).

RTSS Settings:

  • If an in-game or config file FPS limiter is not available and framerate exceeds refresh rate:
    Set 3 FPS limit below display’s maximum refresh rate (see G-SYNC 101: External FPS Limiters HOWTO).

Windows “Power Options” Settings:

Windows-managed core parking can put CPU cores to sleep too often, which may increase frametime variances and spikes. For a quick fix, use the “High performance” power plan, which disables OS-managed core parking and CPU frequency scaling. If a “Balanced” power plan is needed for a system implementing adaptive core frequency and voltage settings, then a free program called ParkControl by Bitsum can be used to disable core parking, while leaving all other power saving and scaling settings intact.

Blur Buster's G-SYNC 101: Input Lag & Optimal Settings

Mouse Settings:

If available, set the mouse’s polling rate to 1000Hz, which is the setting recommended by Nvidia for high refresh rate G-SYNC, and will decrease the mouse-induced input lag and microstutter experienced with the lower 500Hz and 125Hz settings at higher refresh rates.
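
The reasoning is simple interval math; here is a quick Python sketch of the worst-case reporting delay at each common polling rate:

```python
# Worst-case wait before a click or movement is reported to the PC is one polling period.
for polling_hz in (125, 500, 1000):
    print(f"{polling_hz}Hz polling -> up to {1000 / polling_hz:.0f}ms between reports")
# 125Hz -> 8ms, 500Hz -> 2ms, 1000Hz -> 1ms
```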

[Chart: 125Hz vs. 500Hz vs. 1000Hz mouse polling]

Refer to The Blur Busters Mouse Guide for complete information.

Nvidia Control Panel V-SYNC vs. In-game V-SYNC

While NVCP V-SYNC has no input lag reduction over in-game V-SYNC, and when used with a G-SYNC + FPS limit it will never engage, some in-game V-SYNC solutions may introduce their own frame buffer or frame pacing behaviors, enable triple buffer V-SYNC automatically (not optimal for the native double buffer of G-SYNC), or simply not function at all; thus, NVCP V-SYNC is the safest bet.

There are rare occasions, however, where V-SYNC will only function with the in-game option enabled, so if tearing or other anomalous behavior is observed with NVCP V-SYNC (or vice versa), each solution should be tried until said behavior is resolved.

Maximum Pre-rendered Frames: Depends

A somewhat contentious setting with effects that are elusive and difficult to document consistently, Nvidia Control Panel’s “Maximum pre-rendered frames” dictates how many frames the CPU can prepare before they are sent to the GPU. At best, setting it to the lowest available value of “1” can reduce input lag by 1 frame (and only in certain scenarios); at worst, depending on the power and configuration of the system, the CPU may not be able to keep up, and more frametime spikes will occur.

The effects of this setting are entirely dependent on the given system and game, and many games already have an equivalent internal value of “1” at default. As such, any input latency tests I could have attempted would have only applied to my system, and only to the test game, which is why I ultimately decided to forgo them. All that I can recommend is to try a value of “1” per game, and if the performance doesn’t appear to be impacted and frametime spikes do not increase in frequency, then either, one, the game already has an internal value of “1,” or, two, the setting has done its job and input lag has decreased; user experimentation is required.

Conclusion

Much like strobing methods such as LightBoost & ULMB permit “1000Hz-like” motion clarity at attainable framerates in the here and now, G-SYNC provides input response that rivals high framerate V-SYNC OFF, with no tearing, and at any framerate within its range.

As for its shortcomings, G-SYNC is only as effective as the system it runs on. If the road is the system, G-SYNC is the suspension; the bumpier the road, the less it can compensate. But if set up properly, and run on a capable system, G-SYNC is the best, most flexible syncing solution available on Nvidia hardware, with no peer (V-SYNC OFF among them) in the sheer consistency of its frame delivery.

Continue to the next page for “External FPS Limiters HOWTO,” and feel free to leave a comment below, or resume the discussion in the Blur Busters Forums.


20 comments on “G-SYNC 101: Input Lag & Optimal Settings”

  1. MichaelJamesJohnson says:

    Optimal G-Sync Settings

    Can you add a section for OS settings?

    Did I understand correctly that you recommend turning off full screen optimizations and game mode?

    • jorimt says:

      @MichaelJamesJohnson

      I don’t have any specific OS settings to recommend; what exists in the “Optimal G-SYNC Settings” is it.

      As for the Windows 10 Creators update’s “Game Mode,” and “fullscreen optimizations,” they were disabled for this specific testing to ensure a proper control environment.

      For the record, I don’t disapprove of the “Game Mode,” and “fullscreen optimization” settings, and you can use them in whatever configuration you wish. They are simply too new, and too little is known about their effects to trust, so I disabled them to be safe.

      I did, however, do some brief off-the-record tests of the Game Mode, Game bar, and fullscreen optimization settings, and posted my findings here:
      http://forums.blurbusters.com/viewtopic.php?f=5&t=3073&start=260#p26491

  2. Perdition says:

    So in Overwatch, optimal settings for the least amount of input lag would be:

    In-game frame target of 3 less than max refresh rate (in my case 162fps)
    G-sync On in NVCP in Full Screen Mode
    V-sync set to “Use 3D Application” and disabled in Overwatch Menu

    Correct?

    • Perdition says:

      I reread the article and it appears I should have V sync On in NVCP and not on in game. However, I don’t quite fully understand the reasoning behind it.

      • jorimt says:

        Those settings are correct but for “Vertical sync,” which you want set to “On” in the Nvidia Control Panel (not “Use the 3D application setting”; you will get partial tearing with this, even with G-SYNC enabled), and V-SYNC disabled in-game (this goes for any game).

        As for NVCP V-SYNC vs. in-game V-SYNC, read my “Nvidia Control Panel V-SYNC vs. In-game V-SYNC” just two sections above “Conclusion” in this article. It explains the reasoning for that aspect pretty clearly.

        If you need further clarification, I will attempt to offer it, but my article is already very clear on why you need G-SYNC + V-SYNC to avoid tearing; I would simply be repeating myself in any follow-up answer on this.

        Be sure to read the G-SYNC 101: Range entry of this article for more details on G-SYNC + V-SYNC “Off” vs. G-SYNC + V-SYNC “On,” if you haven’t already. Refer to the Range chart there as well.

  3. Zanthra says:

    What interests me about Fast Sync is how it looks to the game. In general, when a video game is running its render loop, V-Sync (with or without G-Sync) will synchronously or asynchronously block the rendering of the next frame from starting until after the GPU discards the front buffer. Limiting the framerate with an in-game setting does something very similar, just using an internal clock between renders instead of using the GPU’s clock.

    The thing about Fast Sync is that it presents itself as a non-blocking rendering pipeline, just as V-Sync off does. The GPU immediately gives it a free buffer to render to, allowing it to incorporate the latest data into that frame. If the game renders multiple frames in between vertical refreshes, the first ones are discarded to make room for the next frame to be rendered, and the GPU takes the latest complete frame for display on the next refresh.

    Because of this, I think that Fast Sync can be seen as an alternative in many cases to limiting the FPS, especially on older games or low refresh rate displays where the game can get to multiples of the refresh rate, or games that don’t have an in-game FPS limit (since you can avoid the added latency of RTSS).

    Ideally, there would be an extra low latency FPS limit mode designed into games to go along with adaptive refresh rate solutions, where they use a feedback system like a PID to insert the waiting period (if the framerate is above the limit) before collecting the data to render the frame, based on an estimate of how long it will take to render the next frame.

    The great benefit is that the adaptive refresh rate solves the problem of near misses if the estimated render time is off. If the frame is 1ms late, the monitor will simply wait for the frame to be complete and scan it out immediately when it is ready. If it’s 1ms early, the GPU will wait for the Vertical Sync. Either way, you get a minimum amount of time that any frame is finished rendering and not being displayed, where new input data cannot be integrated.

    • jorimt says:

      Yes, while it is a moot point if you are using G-SYNC (in any configuration), Fast Sync does have its uses, and is effectively a global way to enable “true” triple buffer V-SYNC in almost any game. And as you say (regardless of the microstutter), it can suffice as an FPS limiter in place of an external limiter, reducing latency AND eliminating tearing all at once.

      That said, in-game limiters have thankfully made a return as of late (Frostbite and UE4 engines, Overwatch, and even games like Watch Dogs 2 and Dishonored 2, admittedly limited to presets; 60 FPS, 120 FPS, etc), but I’d like to see one in every major game myself, as it is invaluable for G-SYNC and FreeSync technology. And in-game limiters, much like Fast Sync, can also utilize lower frametimes to deliver frames faster.

      Eventually, I’d even like to see an external predictive FPS limiter with little to no latency, but we know that would be difficult to achieve.

      G-SYNC, FreeSync, Fast Sync, in-game limiters, external limiters; the more options the better I say.

  4. Halfwit says:

    Excellent article, really well done!

    I do have a question, though: if you just wanted to test the general input lag of the monitor, what settings would you use? Or, let’s break it down like this, as I’m equally interested to hear your opinion on all of these:

    1. Would you use a game to test it (CS:GO, for example) or would you test it in Windows desktop?

    2. Would you go for the crosshair method or the movement method?

    3. What framerate and refresh rate would you set your monitor to, assuming it’s a 144 Hz G-Sync monitor?

    Thanks in advance for your input!

    • jorimt says:

      Unfortunately, since the high speed camera/led method measures the entire button-to-pixel input lag chain at once, it’s difficult to isolate the display lag from everything else.

      You’d have to use a different method, such as the methods TFTCentral or other monitor review sites use.

      Good news is, G-SYNC monitors in general are very low lag, usually <10ms for the 100Hz variety, and <5ms for the 144Hz and up panels.

      That said, the Chief Blur Buster knows more about these other methods than I, and should be able to answer your question in more detail, so feel free to start a thread in the input lag sub forum:
      http://forums.blurbusters.com/viewforum.php?f=10

  5. hiroshimus says:

    I just got a BenQ Zowie XL 2720.

    I’m only gonna use that for Xbox One and PS4.

    I don’t play pc at all.

    I don’t have a windows pc or laptop available to install the BenQ CD that came with the monitor and everything I read about all the settings I don’t understand quite well, there’s just a lot of information.

    Can someone just tell me what settings should I put on my monitor to get the lowest input lag possible to play overwatch on xbox one? Pleaseee I’d really appreciate it!

    • jorimt says:

      The Xbox One and PS4 are closed platforms, so they have no settings to reduce input lag, and while your XL2720 supports up to a 144Hz refresh rate over DisplayPort (PC only), both your consoles and that monitor only support a maximum 60Hz output over HDMI.

      Your XL2720 does, however, have an input lag reduction option called “Instant Mode,” which can be enabled via “Picture > Instant Mode > On” in the monitor’s OSD.

      That’s about all you can do to ensure lowest lag possible with your consoles on that display; the rest is up to the developers and how they optimize engine latency, V-SYNC settings, and input reads.

      This subject is already off topic for this article, so if you have any further questions, feel free to start a thread in our forums where I (and others) will be happy to answer them:
      http://forums.blurbusters.com/

  6. elies says:

    one of the best article I have read, Thank u so much.
    I was confused why I have tearing when I get close to my 144Hz monitor FPS, even with fps limiter, now I have the answer and the best solution from your article.
    BEST BEST BEST!!!

      • elies says:

        Please question
        In battlefield 1, we can create a config file to limit the fps which is better than RTSS as you mentioned above. Now, is it better to set the render ahead limit in the config file in battlefield 1, or to use max pre render frame in Nvidia cp, which is the same?

        • jorimt says:

          Yes, use the in-game FPS limiter.

          As for the game’s “RenderDevice.RenderAheadLimit” vs NVCP’s “Maximum Pre-rendered Frames,” I just did a test with the in-game setting…

          i7-4770k/1080 Ti, max settings at 1440p:

          FPS in 170’s set to “-1” (default)
          FPS in 170’s set to “0”
          FPS at 100 (!) set to “1”
          FPS in 170’s set to “2”

          I don’t know what impact the RenderAhead setting has on input lag (this subject will have to wait for a dedicated article, no ETA), but I lose over 70 FPS (!) with it set to “1” in-game, so I wouldn’t recommend that.

          I also tested RenderAhead “-1” with NVCP’s MPRF at “1,” and got FPS back in the 170’s again. This means that either MPRF has no effect in this game, or RenderAhead functions differently from MPRF.

          So I’d recommend using the game’s default setting of “-1,” and experimenting with MPRF instead (may or may not make a difference for input lag depending on your setup and sustained framerate).

          • elies says:

            Thank u so much for everything. now i can enjoy battlefield 1. keep this amazing job.
            btw i am playing max pre render frame and render ahead to 2 in battlefield 1, i found it is the best combo for input lag and minimize CPU spikes.
            my system:
            i7 6900K 4.3
            GTX 1080TI
            32 GB RAM 3200
            S2417DG monitor
