Image upscaling is no longer an optional “extra” in modern PC gaming: it has become a fundamental part of performance, especially in major releases. Technologies like DLSS (NVIDIA) and FSR (AMD) are now almost standard in AAA titles—and increasingly in AA and indie games—because they enable high resolutions and high refresh rates without overly taxing the GPU per frame. The catch is that this same “relief” on the graphics card can shift the pressure onto another component: the CPU.
The reasoning is straightforward. When an upscaler is activated, the GPU works internally at a lower resolution than output. This reduces render time per frame and allows more frames per second. However, to sustain that frame rate, the GPU needs the CPU to deliver instructions (draw calls, game logic, physics, asset streaming, etc.) more quickly. If the CPU can’t keep up, the situation changes abruptly: the GPU is no longer the bottleneck, and performance becomes processor-limited, even when gaming at 1440p or 4K on the monitor.
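To make that reasoning concrete, here is a minimal sketch of the usual frame-time model: the delivered frame rate is capped by whichever of the CPU or GPU takes longer per frame, so shrinking GPU time with an upscaler can expose the CPU as the new ceiling. The millisecond figures are hypothetical, not measurements from these tests.

```python
# Illustrative frame-time model: FPS is capped by the slower of CPU and GPU.
# All millisecond values below are hypothetical examples, not benchmark data.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Delivered frame rate, assuming the slower component sets the pace."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 7.0          # time the CPU needs to prepare one frame
gpu_native_ms = 12.0  # GPU render time at native resolution
gpu_dlss_ms = 6.0     # GPU render time at a lower internal resolution

print(f"Native:        {fps(cpu_ms, gpu_native_ms):.0f} FPS (GPU-bound)")
print(f"With upscaler: {fps(cpu_ms, gpu_dlss_ms):.0f} FPS (now CPU-bound)")
```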
This undermines a common belief: that “beyond 1440p, the CPU doesn’t matter anymore.” In a typical native 4K scenario, many games show little CPU scaling with better CPUs because the workload is predominantly on the GPU. But the analysis underlying these tests reminds us that in the era of upscaling, native 4K isn’t necessarily the main gaming mode. When internal resolution drops low enough, the CPU reclaims its place at the forefront of performance.
Why DLSS Can “Downgrade” Your Game Below 1080p Without You Noticing
The key lies in the percentage of internal rendering. In DLSS’s Quality mode, the render scale is around 66.7%, while in Performance mode it drops to 50%. This affects not only the final frame rate but also the type of load on the system.
The internal resolution scheme makes this clear:
- With output 3840 × 2160 (4K), DLSS Quality internally renders at 2560 × 1440, and DLSS Performance at 1920 × 1080.
- With output 2560 × 1440, DLSS Quality drops to 1706 × 960, and DLSS Performance to 1280 × 720.
- With output 1920 × 1080, DLSS Performance may set the internal resolution at 960 × 540.
Practically speaking: playing “at 1440p with DLSS” can mean the GPU is actually processing much closer to 720p, giving more room for higher FPS—until the CPU hits its limit.
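For reference, the internal resolutions above follow directly from each mode's per-axis render scale. A minimal sketch using the commonly published ratios; individual games may round the final pixel counts slightly differently:

```python
# Deriving the approximate internal render resolution from the output
# resolution and a DLSS mode's per-axis render scale (stored as a fraction).
DLSS_RENDER_SCALES = {
    "Quality": (2, 3),            # ~66.7%
    "Performance": (1, 2),        # 50%
    "Ultra Performance": (1, 3),  # ~33.3%
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Approximate internal render resolution for a given DLSS mode."""
    num, den = DLSS_RENDER_SCALES[mode]
    return out_w * num // den, out_h * num // den

for out_w, out_h in [(3840, 2160), (2560, 1440), (1920, 1080)]:
    for mode in ("Quality", "Performance"):
        w, h = internal_resolution(out_w, out_h, mode)
        print(f"{out_w}x{out_h} DLSS {mode}: ~{w}x{h} internal")
```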
The purpose of this analysis: When does “CPU scaling” disappear with DLSS?
Rather than focusing solely on the obvious case of a CPU bottleneck at very low internal resolutions, the question is more subtle: as output resolution with DLSS increases, when does the difference between CPUs become unnoticeable? In other words, at what point does upscaling reassert the GPU as the dominant limit, making CPU upgrades barely affect FPS?
To explore this, tests are designed to find a balance: first establishing scenarios clearly GPU-bound, then observing when relevant CPU differences reappear as DLSS is activated.
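As a purely illustrative sketch of that approach (the results dictionary would come from whatever benchmarking harness is actually used; the threshold is an arbitrary example), the idea is to sweep every CPU across output resolutions and DLSS modes and flag where the gap between the fastest and slowest CPU collapses:

```python
# Illustrative sketch of the test matrix, not the article's actual tooling.
# `results` maps (cpu, dlss_mode, output_resolution) -> measured average FPS.
CPUS = ["Core i5-14400", "Core i7-14700K", "Ryzen 5 9600X", "Ryzen 7 9800X3D"]
OUTPUTS = [(2560, 1440), (3840, 2160)]
MODES = ["Native", "DLSS Quality", "DLSS Performance"]

def cpu_scaling_report(results: dict, threshold: float = 0.03) -> None:
    """Label each output/mode combination as CPU-sensitive or GPU-bound,
    based on the FPS spread between the fastest and slowest CPU."""
    for out in OUTPUTS:
        for mode in MODES:
            fps = [results[(cpu, mode, out)] for cpu in CPUS]
            spread = (max(fps) - min(fps)) / max(fps)
            verdict = "CPU-sensitive" if spread > threshold else "GPU-bound"
            print(f"{out[0]}x{out[1]} {mode}: spread {spread:.1%} -> {verdict}")
```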
Test Bench: An RTX 4080 Super and Four CPUs to Measure the Effect
The described testing environment uses an RTX 4080 Super paired with a selection of Intel and AMD CPUs:
- Intel Core i5-14400
- Intel Core i7-14700K
- AMD Ryzen 5 9600X
- AMD Ryzen 7 9800X3D
Additional specs include MSI MPG Z790 Carbon Wi-Fi (LGA1700) and MSI MPG X870E Carbon Wi-Fi (AM5) motherboards, a Corsair iCUE Link H150i Elite Capellix liquid cooler, a 2 TB Sabrent Rocket 4 Plus SSD, 32 GB of DDR5-6000 memory (2×16 GB), and an MSI MPG A1000GS power supply. Gaming settings use a mix of High/Ultra.
The focus is on the two "standard" upscaling modes in real-world use:
- DLSS Quality (66.7%)
- DLSS Performance (50%)
“Ultra Performance” mode is mentioned as optional and not available in all titles, so it’s outside the core comparisons.
DLSS Is Not All the Same: Versions, Presets, and the Transformer Model Leap
Another important point is that DLSS varies not just by “Quality/Balanced/Performance,” but also by DLSS version and the preset chosen by the developer. These presets can prioritize sharpness, stability, performance, or artifact control.
Examples include:
- Cyberpunk 2077: DLSS v310.1.0, Preset J
- Doom: The Dark Ages: DLSS v310.2.1, Preset K
- Flight Simulator 2024: DLSS v310.1.0, Preset E
- Marvel’s Spider-Man 2: DLSS v310.1.0, Preset J
- The Last of Us Part One: DLSS v3.1.2, Preset A
The article explains that presets A–F correspond to the CNN (convolutional neural network) model from earlier generations, whereas presets J and K use the first iteration of the Transformer model, which usually offers higher quality at a higher processing cost. In this context, Preset K is described as more stable but less sharp, while Preset J is sharper but less stable.
Additionally, DLSS 4.5 (second-generation Transformer) uses FP8 instructions, and RTX 20 and RTX 30 series GPUs do not support FP8 in their Tensor Cores—which can lead to performance penalties with newer presets like L or M. To avoid biasing conclusions about CPU performance, the analysis prefers to stick with each game’s default settings.
What Should Gamers Take From This? CPU Returns to Focus When Pursuing Many FPS
The key message isn’t that upscaling is “bad,” but that it shifts the balance of the PC. DLSS can turn a clearly GPU-bound scenario into one where the GPU is no longer the bottleneck, and the game begins to scale with faster CPUs.
This is especially true in three common situations:
- High-refresh-rate monitors (120/144/240 Hz), where each additional frame demands more from the processor.
- DLSS Performance mode (or other aggressive scaling), which lowers the internal resolution so much that the GPU ends up with spare headroom sooner than expected, leaving the CPU as the limit.
- Games with a high CPU load (simulations, complex open worlds, heavy asset streaming, NPC AI), where FPS ceilings typically depend on main-thread performance.
In practice, this means reconsidering simplistic advice like "for 4K, CPU doesn't matter." It may be true for native 4K gaming, but it stops being accurate when "4K with DLSS Performance" actually means internal rendering at 1920 × 1080. Upscaling not only improves smoothness but can also reveal performance limits that were previously hidden.
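A back-of-envelope illustration of that last point, with hypothetical frame times and the simplifying assumption that GPU time scales with pixel count (in reality the scaling is less than linear, since some costs are resolution-independent):

```python
# Hypothetical numbers only. If GPU frame time scaled with pixel count,
# 4K DLSS Performance (1920x1080 internal) would need roughly a quarter of
# the native-4K GPU time, which is often enough for a fixed CPU cost to
# become the new limit.
native_gpu_ms = 16.0                            # assumed GPU time at native 4K
pixel_ratio = (1920 * 1080) / (3840 * 2160)     # = 0.25
est_gpu_ms = native_gpu_ms * pixel_ratio        # ~4 ms, ignoring fixed costs
cpu_ms = 7.0                                    # assumed CPU time per frame
print(f"Estimated cap: {1000 / max(cpu_ms, est_gpu_ms):.0f} FPS (CPU-bound)")
```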
FAQs
Why can DLSS increase the CPU bottleneck in games at 1440p or 4K?
Because it reduces the internal render resolution, the GPU needs less time per frame; sustaining the resulting higher frame rate then depends on the CPU delivering instructions fast enough, which is where the bottleneck can shift.
Which DLSS mode tends to cause more CPU limitation: Quality or Performance?
Generally, Performance mode is more prone to this, as it renders internally at lower resolutions (e.g., 1280 × 720 at 1440p output), relieving the GPU further and exposing the CPU limit sooner.

