DLSS vs Non DLSS

ChrisUlrich

Weaksauce
I have heard a lot of conflicting information regarding the use of DLSS.

Some people say that using DLSS 2.0 or 3.0 (3.0 for me soon as I just got a 4090) barely hurts visuals. Not noticeable at all.

Others say that you can't experience "true 4K" with DLSS. It just hurts the quality too much.

Just want to talk about it with some people. I always use DLSS since I can't run anything at 4K with my 3080 unless I have it enabled. But it seems that I won't even be able to run games on Ultra at 4K even with the 4090. DLSS is a must if I want to achieve 4K @ 120 FPS.

Are you better off living at 60fps without DLSS or 120fps with DLSS?
 
A couple of things that influence it:

1) What level of DLSS are we talking? Quality mode really hurts image quality very little. In fact, some parts look "better than native" since it can reconstruct small details that would otherwise be lost. However, Performance mode doesn't look nearly as good: much less detail and more artifacts, owing to the lower amount of input data (rough internal render resolutions per mode are sketched after this post).

2) Which game? Different games implement it better or worse. Some really nail the implementation, some fuck it up. So it looks better in some games than in others.

In general, I use it in Quality mode unless the game is graphically simple enough that I can just max FPS all the time. I'd rather have high visual quality and good FPS than turn DLSS off.
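
To put rough numbers on those modes, here's a minimal sketch (Python) of the internal resolution each DLSS preset renders at before upscaling. The scale factors below are the commonly cited defaults (Quality ≈ 0.667, Balanced ≈ 0.58, Performance 0.5, Ultra Performance ≈ 0.333); individual games can override them.

```python
# Approximate internal render resolution per DLSS preset.
# Scale factors are the commonly cited defaults; individual games may override them.
DLSS_SCALE = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple:
    """Resolution DLSS renders at internally before upscaling to the output size."""
    scale = DLSS_SCALE[mode]
    return round(out_w * scale), round(out_h * scale)

for mode in DLSS_SCALE:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"4K output, {mode}: ~{w}x{h} internal")
```

At 4K output, Quality works from roughly a 2560x1440 render while Performance works from roughly 1920x1080, which is why Performance has noticeably less data to reconstruct from. At 3440x1440 output, Performance drops to about 1720x720 internally, which is why the lower modes fall apart faster on sub-4K displays.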
 
I found that at 4K output resolution on my LG C1 48" OLED, DLSS Balanced and Performance look acceptable. Below 4K, like at 3440x1440 on my X34P, I'll just stick with DLSS Quality or DLAA, since Balanced and Performance look pretty bad at that output resolution.

If I have excess GPU performance, then DLDSR is the way.

Swapping DLSS DLLs also really helps in mitigating artifacts like ghosting and image instability.
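
For anyone wondering what swapping the DLL actually involves: it's just replacing the game's shipped nvngx_dlss.dll with a newer build and keeping a backup. Here's a minimal sketch (Python); the paths are hypothetical examples, the DLL's location varies per game, and some titles with anti-cheat will reject a replaced file.

```python
import shutil
from pathlib import Path

# Hypothetical example paths - point these at your actual game install
# and at wherever you saved the newer nvngx_dlss.dll build.
GAME_DIR = Path(r"C:\Games\SomeGame")
NEW_DLL = Path(r"C:\Downloads\nvngx_dlss.dll")

# The shipped DLL's location varies per game/engine, so search for it.
for old_dll in GAME_DIR.rglob("nvngx_dlss.dll"):
    backup = old_dll.with_name(old_dll.name + ".bak")
    if not backup.exists():
        shutil.copy2(old_dll, backup)   # keep the original so you can roll back
    shutil.copy2(NEW_DLL, old_dll)      # drop in the newer DLSS build
    print(f"Replaced {old_dll} (backup saved as {backup.name})")
```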
 
Plenty of good videos out there showing DLSS vs FSR vs off.

To summarize:
  • DLSS 1 did not look good.
  • DLSS 2 and later versions became usable with the Quality setting; anything else sacrifices too much image quality.
  • Each game looks a bit different; it works better in some games than others in terms of having fewer visually noticeable graphical issues. Call of Duty Modern Warfare 2, for example, has a lot of noise when using it, and quick aiming with red dots has some minor ghosting.
  • The major issue with DLSS 2 is with things like fences or the windows of distant skyscrapers in The Last of Us. They often have that annoying shimmering effect, much like some types of AA (or lack of AA) used to have.
  • Annoying when in motion.
  • I also found that Star-Lord's facial hair becomes thinner and semi-transparent in Guardians of the Galaxy when using DLSS, but DLSS is what let me enable ray tracing. Overall I think the ray tracing added more to the visual experience than the facial hair took away.
  • The upcoming DLSS 3.5 seems to fix that, at least in Alan Wake 2. See this video for an example of what I am referring to:

View: https://www.youtube.com/watch?v=HwGbQwoMCxM

  • DLSS 3, or frame generation (stupid naming scheme), seems to work okay, but I only used it in one game. I did not notice any issues and it is probably fine for most single-player games. The overall smoothness in Ratchet & Clank was an improvement for me. Going from 50-60 FPS, then 80 or so with DLSS 2.5(?), to a solid 120 with both DLSS upscaling and frame gen on looked a lot smoother to me.
  • Native is best; if you're getting 100+ FPS, stay at native. If your frame rates drop below that, check and see if the minor image quality hit is worth the gain in frame rates. For me it typically is. The upcoming DLSS 3.5 seems to cure the main shortcoming I had with DLSS upscaling, which was how fences and similar patterning would look blurry in the distance.
 
I appreciate all the feedback!

Right now I am playing Remnant 2, Cyberpunk (original), and Elden Ring.

I am currently using a 3080 until the 4090 comes in. Cyberpunk thrashes this poor thing at 4K.
 
I would never say DLSS is better than native. I'd rather have native 4K output on a higher-PPI display than DLSS, TBH.

The fact is that DLSS Quality mode is close to being as good as native, but it will never be 100% there. If it were, why would Nvidia offer DLAA?

That being said, I use DLAA in every game that offers it where I don't need the performance improvement of DLSS. It gives you the benefit of native + the amazing free AA of the machine learning.
 
DLSS is better than native now, that is the fact of the matter...

no...you can't make a blanket statement like that...in some games DLSS is better than native but the majority are not...even if DLSS has closed the gap it's still not there...the problem now as we've seen with recent games like Remnant 2 is that developers are using DLSS as a crutch for poor optimization...I see that only getting worse
 
no...you can't make a blanket statement like that...in some games DLSS is better than native but the majority are not...even if DLSS has closed the gap it's still not there...the problem now as we've seen with recent games like Remnant 2 is that developers are using DLSS as a crutch for poor optimization...I see that only getting worse
All of this is discussed in the DF video posted. DLSS is better than native in most modern games (happier now?), even ones that have no DLSS support out of the box, look at Starfield. The image quality is temporally more stable when using DLSS at all resolutions when compared to native and other upscalers (FSR in ref to Starfield). DF also has a video demonstrating this.

People might not like it, but that is the truth, and it benefits gamers all over who run RTX cards. AMD should be pushed to put greater focus on adopting AI upscaling as they have been lagging behind and are now at least 2 generations behind, they can't even get their latest FSR version in the very AAA game that they sponsor. AMD are also very cagey when it comes to answering questions from the tech world about all this, whereas Nvidia seem to be fully transparent and open to answering any question as shown by the DF roundtable video.
 
All of this is discussed in the DF video posted. DLSS is better than native in most modern games (happier now?), even ones that have no DLSS support out of the box, look at Starfield. The image quality is temporally more stable when using DLSS at all resolutions when compared to native and other upscalers (FSR in ref to Starfield). DF also has a video demonstrating this.

People might not like it, but that is the truth, and it benefits gamers all over who run RTX cards. AMD should be pushed to put greater focus on adopting AI upscaling as they have been lagging behind and are now at least 2 generations behind, they can't even get their latest FSR version in the very AAA game that they sponsor.

no one is debating that Starfield looks best with DLSS...but to say that all or the majority of games look better with DLSS versus native is incorrect...how many games did DF test?
 
no one is debating that Starfield looks best with DLSS...but to say that all or the majority of games look better with DLSS versus native is incorrect...how many games did DF test?
It is not incorrect, it is absolutely correct. DF test all games that come out. Just look through their catalogue of tech reviews when new games launch every time. That and the countless other channels that do side by side comparisons of DLSS vs FSR vs Native tests in their videos.

Edit*
It's also worth pointing out that replacing the DLSS dll file with the latest build in older games that ship an old dll version often gets you better sharpening, image quality, and ghosting fixes. That's the benefit of DLSS; with FSR you cannot do that, as AMD don't have a drop-in dll replacement ability in their upscaler. It can be done with XeSS, however.

And now with DLSS 3.5, there is no debate to be had; it is simply impossible for ray-traced games to look better at native than with DLSS 3.5. The first few games supporting it are out within the next few weeks as well.
 
Last edited:
It is not incorrect, it is absolutely correct. DF test all games that come out. Just look through their catalogue of tech reviews when new games launch every time. That and the countless other channels that do side by side comparisons of DLSS vs FSR vs Native tests in their videos.

Is DLSS Really "Better Than Native"? - 24 Game Comparison, DLSS 2 vs FSR 2 vs Native


View: https://www.youtube.com/watch?v=O5B_dqi_Syc
 

Attachments: DLSS.png (736 KB)
DLDSR + DLSS or DLAA is the truth. Beats everything else, every time.

Native res + TAA is dead to me on PC.
Issue is many games don't let you turn off the TAA anymore. I agree native + TAA is awful.

That being said, most times I far prefer just raw native with a higher PPI display (27'' 4K as an example) where the aliasing at that point is pretty much a non-issue.

But yes - If I had to choose ONLY between TAA or DLSS I'd choose DLSS.
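
On the PPI point: pixel density is just the diagonal pixel count divided by the panel's diagonal size, so it's easy to compare the displays people keep mentioning in this thread. A quick sketch (Python, standard formula; panel sizes are the ones quoted in the thread):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: diagonal resolution divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Displays mentioned in the thread.
print(f'27" 4K:          {ppi(3840, 2160, 27):.0f} PPI')   # ~163 PPI
print(f'34" 3440x1440:   {ppi(3440, 1440, 34):.0f} PPI')   # ~110 PPI
print(f'48" 4K (LG C1):  {ppi(3840, 2160, 48):.0f} PPI')   # ~92 PPI
print(f'65" 4K:          {ppi(3840, 2160, 65):.0f} PPI')   # ~68 PPI
```

The higher the PPI (and the further you sit), the less visible aliasing and upscaling artifacts tend to be, though as noted further down it doesn't eliminate aliasing entirely.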
 
Issue is many games don't let you turn off the TAA anymore. I agree native + TAA is awful.

That being said, most times I far prefer just raw native with a higher PPI display (27'' 4K as an example) where the aliasing at that point is pretty much a non-issue.

But yes - If I had to choose ONLY between TAA or DLSS I'd choose DLSS.

DLDSR + TAA is completely fine too IMO.
 
The fact is that DLSS Quality mode is close to being as good as native, but it will never be 100% there. If it were, why would Nvidia offer DLAA?
DLAA could be even better....

DLSS could be significantly better than native at many things: if it understands that something is a power line, hair, a staircase, or an alphanumeric character, you can end up with something better than the native render (the higher the original signal the better, hence DLAA being useful). Those models are trained on much better datasets than a real-time game render (16K-resolution renders, for example, and offline movie-style rendering).

I am not sure it will ever be the case that DLSS is clearly always worse than, better than, or equal to native; it's more case by case, depending on how good it is at specific things, whether artifacts show up, and how engine rendering tech evolves outside those black-box upscalers.

For example:
[Image: nvidia-dlss-3-5-ai-ray-tracing-pr-2.jpg]


In some aspects the DLSS version looks way better than native (in the sense of being closer to what the devs intended, at least, which is always subjective). It could be promo cherry-picking, but the Cyberpunk guy seems quite honest when he says the game now looks way better with DLSS on than at native, without a doubt.

Game engines that use purely real-time neural-network generation with zero conventional rendering (they just tell the AI "a red sedan down a road") are starting to look good and could one day look better; compared to that, starting from an actual game render that is simply at a lower resolution will be a piece of cake to make look better than native.
 
Last edited:
I've been long gaming on a 77" OLED and a 65" before that, and I must say DLSS is a Godsend. Results vary, but I'm usually pleased to see it and usually turn it on.
 
That being said, most times I far prefer just raw native with a higher PPI display (27'' 4K as an example) where the aliasing at that point is pretty much a non-issue.
As someone who's used 27-28" 4K panels since 2014, I still notice a lot of aliasing. It's a stretch to claim sheer resolution cures all aliasing.
 
I've been long gaming on a 77" OLED and a 65" before that, and I must say DLSS is a Godsend. Results vary, but I'm usually pleased to see it and usually turn it on.
As someone who games on a 65" OLED i too can attest to how fucking great DLSS is for 4k gaming. FSR2 is decent, but DLSS is just vastly superior compared to FSR2.

The only thing right now FSR2 has over DLSS is that any video card can use it, even Nvidia GPUs that don't support DLSS.
 
As someone who games on a 65" OLED i too can attest to how fucking great DLSS is for 4k gaming. FSR2 is decent, but DLSS is just vastly superior compared to FSR2.

The only thing right now FSR2 has over DLSS is that any video card can use it, even Nvidia GPUs that don't support DLSS.
I still don’t quite understand why nvidia hasn’t just put DLSS 2 as something that can be injected into any game via the NVCP just like AO, etc. Especially if it’s the same game engines. Every UE4 game for instance should be able to have it forced via the panel.
 
I still don’t quite understand why nvidia hasn’t just put DLSS 2 as something that can be injected into any game via the NVCP just like AO, etc. Especially if it’s the same game engines. Every UE4 game for instance should be able to have it forced via the panel.
Same reason they didn't let AMD use PhysX, or tried to sell you G-Sync modules over FreeSync back in the day. They want those people to buy into the Nvidia ecosystem. Nvidia is not consumer friendly. But neither is AMD, in a lot of ways.
 
I still don’t quite understand why nvidia hasn’t just put DLSS 2 as something that can be injected into any game via the NVCP just like AO, etc
Probably for the same reason as FSR 2: it needs things like motion vectors to be provided by the game, and ideally the upscale has to happen before post-process effects like depth of field, bloom, and blur are applied, etc...
 
I still don’t quite understand why nvidia hasn’t just put DLSS 2 as something that can be injected into any game via the NVCP just like AO, etc. Especially if it’s the same game engines. Every UE4 game for instance should be able to have it forced via the panel.

Because it's impossible. They don't have the data they need, and if they want it, it'd take insane heuristics to effectively hook very particular parts of the renderer that may or may not break the second they patch the engine - it's impossibly fragile and a Sisyphean amount of work.

Getting anything wrong is severely corrupted graphics at absolute best. At worst you've corrupted memory if not crashed. If you're lucky, I'm sure you could stomp something in just the right way to crash the driver.

Which is why the mods proxy the other team's upscaler DLL and masquerade as it. It's the only vaguely sane way to extract the information you need.

SSAO is just a post process. Same as why they could jam FXAA in. With the giant caveat that it affected even things like UI elements because the driver has no clue where and when 2D starts because... it's still just triangles all the same. There's no state to monitor to know when it happens for sure. All they can do is take the pessimistic choice of "at the end of the frame."
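
To make that concrete, here's a rough sketch of the kind of per-frame data a temporal upscaler integration has to be handed by the engine itself. The struct and function names are made up purely for illustration (this is not the actual NGX/Streamline API), but the listed inputs are the ones DLSS-style upscalers document needing, and none of them are visible to the driver from the outside.

```python
from dataclasses import dataclass
from typing import Any, Tuple

# Illustrative only: the per-frame inputs a temporal upscaler (DLSS/FSR 2/XeSS-style)
# needs from the engine. The driver can't see or guess any of this, which is why it
# can't be force-enabled from a control panel the way a pure post-process like FXAA can.
@dataclass
class UpscalerFrameInputs:
    color: Any            # low-res, jittered scene color, taken BEFORE post-processing (DoF, bloom, grain) and UI
    depth: Any            # matching depth buffer
    motion_vectors: Any   # per-pixel screen-space motion, in the convention the upscaler expects
    jitter: Tuple[float, float]   # sub-pixel camera jitter applied this frame
    render_size: Tuple[int, int]  # internal resolution, e.g. (2560, 1440)
    output_size: Tuple[int, int]  # target resolution, e.g. (3840, 2160)
    reset_history: bool           # set on camera cuts so stale history isn't smeared across the cut

def evaluate_upscaler(inputs: UpscalerFrameInputs) -> Any:
    """Placeholder: a real integration calls the vendor SDK here, then runs the rest
    of the post-processing chain and the UI at full output resolution."""
    raise NotImplementedError
```

Mods that swap FSR 2 for DLSS (or vice versa) work precisely because the game is already collecting and handing this data to one upscaler's DLL, so a proxy DLL can forward it to another.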
 
I still don’t quite understand why nvidia hasn’t just put DLSS 2 as something that can be injected into any game via the NVCP just like AO, etc. Especially if it’s the same game engines. Every UE4 game for instance should be able to have it forced via the panel.
This is kind of what Nvidia Streamline is supposed to help with. A developer could just call Streamline and the video card you have would be able to insert its own tech.
 
This is kind of what Nvidia Streamline is supposed to help with. A developer could just call Streamline and the video card you have would be able to insert its own tech.
No. Streamline is a plugin platform for game engines, making it potentially easier for the developer to organize and add tech. Assuming, of course, the engine you're using doesn't already have that.

It still requires the dev to set up the game to feed a given plugin the data it needs to function, along with QAing to ensure it's all working as intended.
 
I play with DLSS Quality on and everything maxed on my 165 Hz monitor if available, and then adjust from there if I need to (and if DLSS is available) - even if I'm only playing at 60 FPS because it's some single-player offline story game, not a hack and slash or whatever, so I'm not going for a high framerate. Oh no, now my GPU is only using 100 watts of power because of DLSS for however long I'm playing, and it still looks damn good - the horror, I should use native res and pay for more electricity 👻💀🦇

DLSS is better than native now, that is the fact of the matter.


View: https://youtu.be/Qv9SLtojkTU



So what I took away from this is that ray reconstruction only works for fully path-traced/RTX titles - otherwise it's just the new, regular IQ-improved DLSS 3.5 for all other DLSS titles (even ones with regular ray tracing but no path tracing)

And expect the exclusive DLSS feature of the 50-series generation to be AI-accelerated particle/volumetrics rendering/reconstruction, like ray reconstruction is for path tracing

Edit: So even in Cyberpunk, just FYI - I don't have the game, but only Overdrive Mode is path tracing or something? You will only get ray reconstruction in that mode
 
Last edited by a moderator:
I play with DLSS Quality on and everything maxed on my 165 Hz monitor if available, and then adjust from there if I need to (and if DLSS is available) - even if I'm only playing at 60 FPS because it's some single-player offline story game, not a hack and slash or whatever, so I'm not going for a high framerate. Oh no, now my GPU is only using 100 watts of power because of DLSS for however long I'm playing, and it still looks damn good - the horror, I should use native res and pay for more electricity 👻💀🦇




So what I took away from this is that ray reconstruction only works for fully path-traced/RTX titles - otherwise it's just the new, regular IQ-improved DLSS 3.5 for all other DLSS titles (even ones with regular ray tracing but no path tracing)

And expect the exclusive DLSS feature of the 50-series generation to be AI-accelerated particle/volumetrics rendering/reconstruction, like ray reconstruction is for path tracing

Edit: So even in Cyberpunk, just FYI - I don't have the game, but only Overdrive Mode is path tracing or something? You will only get ray reconstruction in that mode
No it works on normal RT too, the benefits are better realised on path tracing though due to its severe performance hit, which Ray reconstruction works around at the high end.

RT Overdrive is path tracing, yes, if using the preset; otherwise it's a manual toggle at the bottom called Path Tracing, which the RT Overdrive preset enables.

Alan Wake 2 next month supports it too.
 
Edit: So even in Cyberpunk, just FYI - I don't have the game, but only Overdrive Mode is path tracing or something? You will only get ray reconstruction in that mode

Yes Ray Reconstruction is only enabled with Overdrive Mode (path tracing)

No it works on normal RT too, the benefits are better realised on path tracing though due to its severe performance hit, which Ray reconstruction works around at the high end.

No that is incorrect...currently Ray Reconstruction only works with path tracing...it doesn't work with standard RT...in the future I'm sure it will but as of today no
 
Yeah I thought I heard them say it explicitly in the video at some time just never went back to check
 
Yeah I thought I heard them say it explicitly in the video at some time just never went back to check

from the Nvidia website:

Q: Why is Ray Reconstruction just available in the RT Overdrive mode of Cyberpunk 2077 2.0 Update and Phantom Liberty?

We focused our efforts to make RT Overdrive look great in Cyberpunk 2077, and we're working with CD Projekt to add support for Ray Reconstruction for other RT modes. Stay tuned

https://www.nvidia.com/en-us/geforc...-35-ray-reconstruction-faq-updated-september/
 
and we're working with CD Projekt to add support for Ray Reconstruction for other RT modes. Stay tuned

Nice, how I thought it would be when first announced

Makes me move the 'AI volumetrics' question a little closer to 'maybe that will come to all RTX cards then too' 👍
 
Nice, how I thought it would be when first announced

Ray Reconstruction will definitely be made available in standard RT mode soon otherwise it makes no sense as currently there are only a handful of path traced games out there
 
Yes Ray Reconstruction is only enabled with Overdrive Mode (path tracing)



No that is incorrect...currently Ray Reconstruction only works with path tracing...it doesn't work with standard RT...in the future I'm sure it will but as of today no
This is incorrect, go see today's Hardware Unboxed video. They quizzed Nvidia about it and NV said they are still training it on normal RT, hence why for now in CP2077 it is intentional that DLSS 3.5 is only enabled when path tracing is also enabled. It's still early days for it, but there's no reason why, once the AI model is trained, it won't work on all RT variants.
 
you're just repeating what I just said- that RR currently only works in path traced titles...when you earlier said that it works on normal RT in CP2077...it's OK to just admit that you were wrong
The info given by Nvidia before was that it works on ALL ray tracing; nobody knew until now, after the HUB video, that this would not be the case "for now" until Nvidia finishes training the AI model so that it then works on normal RT.

Ray Reconstruction /will/ work on normal RT, it just needs training on normal RT, it's just only trained on path tracing in Cyberpunk for now. I fully suspect next month when Alan Wake 2 launches that it will work on ray tracing and path tracing too.

Just go watch the hardware unboxed video from today to see this very same explanation. It is only for Pt intentionally whilst NV train the AI for normal ray tracing.
 
Ray Reconstruction will definitely be made available in standard RT mode soon otherwise it makes no sense as currently there are only a handful of path traced games out there

Yeah as soon as they can though they're gonna push from ray tracing > path tracing just because of the simpler overall pipeline and involvement needed

Same as how ray tracing is an overall simpler pipeline and less hacks and tricks needed vs rasterized lighting techniques and points

I thought they might have been 'forward excluding it' for that reason or something. I know we're nowhere close to it any time soon, but it's more of a technical demo than a gaming feature, as these RTX titles like Quake 2 RTX and Portal RTX and Cyberpunk - a kinda unofficial RTX title - are half game, half tech demo

Path tracing itself has been pretty impressively improving for what it is if you think back to the RTX 2K unveil and Quake 2 RTX
 
The info given by Nvidia before was that it works on ALL ray tracing; nobody knew until now, after the HUB video, that this would not be the case "for now" until Nvidia finishes training the AI model so that it then works on normal RT.

Ray Reconstruction /will/ work on normal RT, it just needs training on normal RT, it's just only trained on path tracing in Cyberpunk for now. I fully suspect next month when Alan Wake 2 launches that it will work on ray tracing and path tracing too.

Just go watch the hardware unboxed video from today to see this very same explanation. It is only for Pt intentionally whilst NV train the AI for normal ray tracing.

people have known since DLSS 3.5 was announced that RR would initially only work in path traced titles...the Nvidia link I posted a few posts back was from 1 month ago...it has nothing to do with today's Hardware Unboxed DLSS 3.5 Ray Reconstruction video

if you weren't aware then that's a different story but the info has been out there
 
Last edited:
people have known since DLSS 3.5 was announced that RR would initially only work in path traced titles...the Nvidia link I posted a few posts back was from 1 month ago...it has nothing to do with today's Hardware Unboxed DLSS 3.5 Ray Reconstruction video

if you weren't aware then that's a different story but the info has been out there
The link you posted says it was updated on September 19th.


I"m not sure the it was known that RR would only be available for Path Tracing mode, at launch.

Nvidia's marketing from a couple of weeks ago, did not say anything about that. And the long, super detailed Digital Foundry video from a day or two ago, where they had a major AI Dev from Nvidia: that video did not say anything about RR only being available for path tracing.

IMO, it was a bait and switch to try and get people to buy.
 
The link you posted says it was updated on September 19th.


I"m not sure the it was known that RR would only be available for Path Tracing mode, at launch.

Nvidia's marketing from a couple of weeks ago, did not say anything about that. And the long, super detailed Digital Foundry video from a day or two ago, where they had a major AI Dev from Nvidia: that video did not say anything about RR only being available for path tracing.

IMO, it was a bait and switch to try and get people to buy.

I don't remember when exactly I heard about it but it wasn't today...the info was out there...was it announced the same day as the official DLSS 3.5 announcement- probably not...but the info has been out there...bait and switch to get people to buy a 40 series card?...I'm not sure that would be something that would totally flip the narrative and get people that haven't already bought a 40 GPU to all of a sudden buy one
 