Upscaling or reconstructing frames for video games in real time is a fairly controversial practice. Purists balk at the idea, but players on “weak” or mid-range gaming systems appreciate the extra fluidity it brings. NVIDIA does it, AMD does it, and so does Intel. But when NVIDIA announced the next version of its supersampling technology, all hell broke loose, largely because of the overly AI-generated look of the images, human faces in particular.
It’s been a wild few weeks in the tech world, and if you’ve been following the DLSS 5 (Deep Learning Super Sampling) saga, you know it’s been a roller coaster ride of “wow”, “wait, what?” and “Take that thing out of my game.” Here’s the breakdown of the DLSS 5 drama, from the leather jacket hype to the current “2D filter” reality.
The story so far: The “GPT moment” that wasn’t
It all started when Jensen Huang took the stage at NVIDIA’s GTC 2026 and dropped the bombshell: DLSS 5. NVIDIA wasn’t just upscaling pixels anymore; it was reinventing them generatively. Jensen called it the “GPT moment for graphics,” promising that AI would now handle the heavy lifting of visual realism: things like skin texture, fabric sheen, and complex lighting. Unfortunately, the hype didn’t even last 24 hours.
Within hours, the internet was flooded with side-by-side comparisons of Resident Evil Requiem and Starfield. The community’s reaction? “AI slop.” Instead of making games look “better,” DLSS 5 “Jassified” characters by smoothing out grainy skin textures, adding unintentional makeup, and making everyone look like a 2022 Instagram influencer.
Then came the “betrayal.” As reported by Insider Gaming, big game developers were caught off guard. Artists at Ubisoft and Capcom reportedly found out about the DLSS 5 demos at the same time as the rest of us. NVIDIA tried to limit the damage and promised a “Full Creative Control” SDK with intensity controls. But the final blow came just a few days ago: an email interview between YouTuber Daniel Owen and NVIDIA’s Jacob Freeman revealed that DLSS 5 doesn’t actually take advantage of a game’s 3D geometry. It’s essentially a high-end 2D post-processing filter laid over the finished frame. The “Neural Revolution” turned out to be a very expensive paint job.
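To make that distinction concrete: a screen-space filter only ever sees the final flattened frame, while an engine-integrated pass could also read the depth, normal, and motion-vector buffers the game already produces for things like TAA and upscaling. The C++ sketch below is purely illustrative of that difference in inputs; none of these types or function names come from NVIDIA’s actual SDK.

```cpp
// Purely hypothetical sketch -- none of these types or functions are from
// NVIDIA's SDK. It only illustrates what data each approach gets to see.
#include <vector>

struct Image        { std::vector<float> rgb;   int w = 0, h = 0; }; // final tone-mapped frame
struct DepthBuffer  { std::vector<float> depth; int w = 0, h = 0; }; // per-pixel scene depth
struct NormalBuffer { std::vector<float> xyz;   int w = 0, h = 0; }; // per-pixel surface normals
struct MotionBuffer { std::vector<float> uv;    int w = 0, h = 0; }; // per-pixel motion vectors

// What DLSS 5 reportedly is today: the model only sees the flattened frame,
// so it has to guess what is skin, fog, or deliberate film grain.
Image enhance_screen_space(const Image& frame) {
    Image out = frame;
    // ... run the generative model on colour data alone ...
    return out;
}

// What an engine-integrated pass could look like: the model also receives the
// geometry and motion buffers the engine already generates for TAA/upscaling,
// so "this dark corner is intentional" becomes recoverable information.
Image enhance_geometry_aware(const Image& frame,
                             const DepthBuffer& depth,
                             const NormalBuffer& normals,
                             const MotionBuffer& motion) {
    Image out = frame;
    // ... run the model conditioned on depth, normals and motion as well ...
    (void)depth; (void)normals; (void)motion;
    return out;
}

int main() {
    Image frame{std::vector<float>(1920 * 1080 * 3, 0.5f), 1920, 1080};
    Image a = enhance_screen_space(frame);               // post-process filter
    Image b = enhance_geometry_aware(frame, {}, {}, {}); // engine-integrated pass
    return (a.w == b.w) ? 0 : 1;
}
```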
Why “better” isn’t always better
On paper, DLSS 5 sounds like magic. And in a way, it is. Looking at a landscape or a static environment, the AI-powered shadows and highlights objectively look “cleaner.” But here’s the problem: “cleaner” isn’t always what the mood calls for.
Video games are art, and art is about intention.
If a developer spends three years perfecting a blurry, gloomy, claustrophobic hallway in a horror game, they don’t want an AI to come in and “fix” it.
DLSS 5 has a habit of brightening dark corners and wiping away atmospheric fog because it treats them as “mistakes” that need correcting. The fact that developers were blindsided by the demo is the biggest red flag. It’s classic corporate hierarchy: the people at the top say “yes” to NVIDIA for the marketing buzz, while the actual creative teams are left in the dark. If NVIDIA had actually worked with the artists, it could have fed the AI the games’ 3D data, models, and art-direction blueprints.
Imagine if the AI knew exactly where a character’s scar should be or how a certain fabric should catch the light. In fact, as Veedrac recently demonstrated on Reddit, games running DLSS 5 with proper tone mapping can look stunning. The technology has been shown to work, but only when a human is piloting the ship. By shipping it as a “black box” filter, NVIDIA has essentially bypassed the very people who make games worth playing.
On the other hand, there’s still the elephant in the room: data sovereignty. If I’m a creative designer, why would I be okay with handing my raw character designs and lighting maps to an AI model? We’ve seen how this plays out. The AI uses that data to “learn” and eventually builds things on top of your hard work without you ever being in the loop. It’s a legitimate fear that NVIDIA is building a master engine that could one day make the “Artist” part of “Game Artist” optional.
The future awaits
Is DLSS 5 dead on arrival? Probably not. If history tells us anything, this is just NVIDIA’s standard operating procedure: break it first, fix it later. Look back to 2018: ray tracing launched, our frame rates tanked, and things looked “good” at best. Today? It’s the gold standard. In 2022 they gave us Frame Generation and we all laughed at the “fake frames.” Now? It’s practically the only way to get playable 4K.
Don’t get me wrong: I’d honestly take raw, native rasterization over this AI mess any day. I want my games to look real, without digital makeup. But that’s just not the world we live in. NVIDIA owns 95% of the market, as reported by Jon Peddie Research, meaning that whatever it introduces, be it good, bad, or ugly, eventually becomes the industry’s blueprint.
DLSS 5 is currently stuck in its “uncanny valley” phase. It’s clunky, over-aggressive, and dismissed as a glorified 2D filter. But at some point NVIDIA has to realize that it can’t treat a game like a flat video file. The promised SDK needs to be more than a slider; it must be a bridge that allows developers to express their artistic soul. Once DLSS 5 learns to respect “mood” as much as “pixels,” it will change gaming forever.
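What “more than a slider” could mean in practice: instead of one global intensity knob, an artist-facing SDK would let developers mark which parts of the image are off-limits and tune the effect per asset. The sketch below is entirely hypothetical; every field name here is my own invention for illustration, not anything NVIDIA has announced.

```cpp
// Entirely hypothetical configuration sketch -- the kind of per-intent
// controls an artist-facing SDK could expose, as opposed to one global
// "intensity" slider. Nothing here is NVIDIA's announced API.
#include <string>
#include <unordered_map>

struct NeuralEnhanceConfig {
    float global_intensity = 0.5f;     // the "slider" everyone expects

    // Artistic-intent guards: things the model must leave alone.
    bool preserve_fog        = true;   // don't "clean up" volumetric fog
    bool preserve_film_grain = true;   // grain is a choice, not noise
    bool preserve_exposure   = true;   // don't brighten deliberately dark corners

    // Per-material overrides keyed by the engine's material IDs, so a
    // character artist can dial skin smoothing to zero for one asset.
    std::unordered_map<std::string, float> material_intensity;
};

int main() {
    NeuralEnhanceConfig cfg;
    cfg.preserve_fog = true;                      // the horror hallway stays gloomy
    cfg.material_intensity["hero_skin"] = 0.0f;   // keep the scar, keep the pores
    cfg.material_intensity["env_stone"] = 0.8f;   // fine to enhance background rock
    return static_cast<int>(cfg.material_intensity.size());
}
```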
And we know how this ends: the industry follows NVIDIA like clockwork. We can complain all we want today, but in two years we’ll probably be debating whether AMD’s “FSR 5” repaints characters as well as Team Green’s tech does. The technology is inevitable. We just have to make sure the art doesn’t get lost in the gloss.