March 19, 2026 · Bogdan Sevriukov
When I moved to a new apartment, there was a large modern TV there. I'm a film person, and I decided to finally watch Taxi Driver properly — on a big screen rather than a monitor.
After a few minutes I was watching through gritted teeth and couldn't understand why. My first thought was uncomfortable: maybe I just remembered the film wrong? Maybe it was shot worse than I thought, and the big screen had finally exposed the flaws?
But this is Scorsese and Michael Chapman.
And then I understood what it reminded me of. When I was a child in the early nineties, Soviet TV often broadcast theatre performances: stage productions shot on video, with flat studio lighting. Something alive, but not cinema. Taxi Driver on the new television looked exactly like that.
Americans call this the soap opera effect. For me it was a televised theatre play. Different cultures, the same video texture, and the same feeling: cinema made to look cheaper than it is.
In the mid-2000s, with the arrival of HD televisions, manufacturers faced a problem: LCD panels (unlike the old CRTs and plasmas) suffered from motion blur — smearing during fast movement. The solution was motion interpolation — a technology that inserts synthetic intermediate frames between the original ones, artificially raising the frame rate from 24 fps to 60 or 120 fps.
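The mechanics can be sketched in a few lines. Real TV processors estimate per-block motion (optical flow) and warp pixels along it; the plain linear blend below is only a minimal illustration of inserting frames that were never shot:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate=1):
    """Insert n synthetic frames between two originals by linear blending.

    Real interpolation estimates motion and warps pixels; this crossfade
    is only a toy stand-in for "inventing" in-between frames.
    """
    frames = [frame_a]
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)              # blend weight, 0..1
        frames.append((1 - t) * frame_a + t * frame_b)
    frames.append(frame_b)
    return frames

# 24 fps -> 120 fps requires 4 synthetic frames per original pair (24 * 5 = 120)
a, b = np.zeros((2, 2)), np.ones((2, 2))
seq = interpolate_frames(a, b, n_intermediate=4)   # 6 frames: 2 real, 4 synthetic
```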
Every manufacturer gave it their own name:
| Brand | Name |
|---|---|
| Samsung | Auto Motion Plus |
| LG | TruMotion |
| Sony | MotionFlow |
| TCL | Action Smoothing |
| Vizio | Smooth Motion Effect |
The term "soap opera effect" isn't a metaphor — it's a literal reference. Soap operas got their name from advertisers — soap manufacturers who sponsored daytime shows for housewives in America (the first is considered to be Painted Dreams in 1930; the longest-running was Guiding Light — on radio from 1937, on TV from 1952). These shows were shot cheaply: on videotape at 30 fps (60 interlaced fields per second), with flat studio lighting. Cinema, by contrast, was shot on expensive film at 24 fps.
Over decades, audiences developed a subconscious association: the 24 fps film look, with its grain and motion blur, means cinema; the smooth 60 fps video look, with its flat lighting, means cheap daytime television.
When a television artificially raises a film's frame rate from 24 to 60, it reproduces exactly the second visual language. The Lord of the Rings starts to look like a talk show.
Motion blur is a tool, not a defect. At 24 fps, each frame is exposed for approximately 1/48th of a second (the 180° shutter angle rule). During this time, moving objects are slightly blurred. This is intentional: that blur is how motion at 24 fps reads as natural to the eye, and cinematographers compose shots around it.
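The 180° shutter arithmetic can be checked directly:

```python
def exposure_time(fps, shutter_angle_deg=180.0):
    """Exposure per frame: the fraction of the frame interval the shutter
    is open. At 24 fps with a 180-degree shutter, each frame is exposed
    for (180/360) * (1/24) = 1/48 s, the classic cinematic blur amount."""
    return (shutter_angle_deg / 360.0) * (1.0 / fps)

t24 = exposure_time(24)   # 1/48 s
t60 = exposure_time(60)   # 1/120 s: far less blur per frame
```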
Flickering light is calibrated to a specific frame rate. Candles, neon signs, strobes — the cinematographer sees their behaviour at 24 fps. On synthetic intermediate frames, the algorithm creates non-existent brightness transitions — the flicker may disappear or become chaotic.
Lens flares. As the camera moves, a flare travels across the frame. Interpolation "multiplies" it — ghosting and phantom copies appear.
Chiaroscuro lighting. Fincher, Deakins, Lubezki build their frames on sharp light/shadow transitions. When an actor moves, the rhythm of those transitions is designed for 24 frames. Synthetic intermediate frames create shadow positions that never existed in reality.
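The flicker problem above can be shown numerically. In this sketch, a light that alternates on/off in step with the 24 fps frames acquires half-bright frames that were never photographed once blended intermediates are inserted:

```python
# Per-frame brightness of a light flickering in step with 24 fps frames
frames = [1.0, 0.0, 1.0, 0.0]

# Inserting one blended intermediate per pair invents brightness levels
# (0.5) that never existed on set: the flicker pattern is rewritten.
interpolated = []
for a, b in zip(frames, frames[1:]):
    interpolated += [a, 0.5 * a + 0.5 * b]
interpolated.append(frames[-1])
# interpolated == [1.0, 0.5, 0.0, 0.5, 1.0, 0.5, 0.0]
```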
The interpolation algorithm doesn't understand scene depth; it works with 2D pixels. When one object passes in front of another, it can only guess what lies behind the occlusion, producing warping and halo artifacts around moving edges.
An editor cuts a film with frame-level precision. At 24 fps, one frame = ~42 ms. A fast cut = tension. A long take = contemplation. When the TV inserts intermediate frames, a hybrid frame appears at the cut point — containing elements of both the preceding and following shots. The sharpness of the edit is blurred.
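A toy calculation makes both points concrete (one-pixel "frames" stand in for whole images):

```python
# The editor's unit of precision: one frame at 24 fps
frame_ms = 1000 / 24          # ≈ 41.7 ms

# At a hard cut, an interpolator blends the last frame of shot A with the
# first frame of shot B. The synthetic frame belongs to neither shot.
shot_a_last, shot_b_first = 0.0, 255.0            # toy one-pixel frames
hybrid = 0.5 * shot_a_last + 0.5 * shot_b_first   # 127.5: a ghost of both
```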
Christopher Nolan and Martin Scorsese spoke publicly against motion interpolation. Tom Cruise recorded a video message urging people to turn off the setting. In response, the UHD Alliance created Filmmaker Mode — a mode that automatically disables all post-processing, including motion smoothing, to display content as the author intended.
| Version | Year | What It Does |
|---|---|---|
| DLSS 1 | 2019 | Upscaling — render fewer pixels, AI fills in up to the target resolution |
| DLSS 2 | 2020 | Improved upscaling with temporal data |
| DLSS 3 | 2022 | + Frame Generation — synthetic intermediate frames (a direct analogue of motion interpolation) |
| DLSS 3.5 | 2023 | + Ray Reconstruction — AI replaces the denoiser for ray tracing |
| DLSS 4 | 2025 | Multi Frame Generation — up to 3 synthetic frames per 1 real frame |
| DLSS 5 | Autumn 2026 | Neural Rendering — AI redraws the entire image with photorealistic lighting |
Each version handed more of the final image to the neural network, taking control away from the traditional rendering pipeline (and from the artist).
Frame Generation in DLSS 3 and Multi Frame Generation in DLSS 4 are a direct analogue of TV interpolation, but with an important advantage: access to motion vectors (the GPU knows exactly where each object is moving) and a depth buffer (real depth information). The TV guesses from 2D pixels — DLSS knows the 3D structure of the scene.
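The difference can be sketched as a forward reprojection that uses engine-supplied motion vectors. This is a minimal illustration, not NVIDIA's algorithm: nearest-pixel scatter, with brightest-pixel-wins standing in for the real depth-buffer occlusion test:

```python
import numpy as np

def reproject_half_step(frame, motion):
    """Scatter each pixel halfway along its per-pixel motion vector to
    synthesise an in-between frame. Because motion comes from the engine,
    nothing is guessed from 2D pixels. Sketch only: brightest-wins replaces
    a real depth test, and the vacated pixel leaves a disocclusion hole
    that real implementations must fill."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y, x]
            ny, nx = y + int(round(dy / 2)), x + int(round(dx / 2))
            if 0 <= ny < h and 0 <= nx < w:
                out[ny, nx] = max(out[ny, nx], frame[y, x])
    return out

# A bright pixel at (1, 1) moving 2 pixels right per frame lands at (1, 2)
# in the generated half-step frame, leaving a hole where it used to be.
frame = np.zeros((4, 4)); frame[1, 1] = 1.0
motion = np.zeros((4, 4, 2)); motion[1, 1] = (0, 2)   # (dy, dx)
mid = reproject_half_step(frame, motion)
```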
But problems remain: generated frames add input latency, because a real frame must be held back while the intermediates are synthesised, and UI/HUD elements, which carry no motion vectors, can ghost and smear.
NVIDIA presented DLSS 5 at GTC on March 16, 2026. Jensen Huang called it "the GPT moment for graphics." Release: autumn 2026, exclusive to the RTX 50-series.
DLSS 5 is not upscaling and not frame generation. It is real-time neural rendering: the AI model takes the rasterised frame as input and recalculates how light behaves in it.
The geometry and textures formally remain original. But the light, shadows, reflections, and material behaviour — all of it is recalculated by the neural network.
Tom's Guide conducted a frame-by-frame analysis: approximately 60% of changes to character faces are explained by improved lighting and material depth. The remaining 40% is neural rendering that adds "fuller lips and sharper jaw lines." According to Remio AI, the complete DLSS pipeline (upscaling + frame generation + neural rendering) replaces up to 95% of pixels from the original frame.
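The "up to 95%" figure is consistent with simple pipeline arithmetic. The 4x upscale factor below is an assumption for illustration, not a number from the source:

```python
# Assumed pipeline: render at 1080p, display at 4K (4x the pixels), and
# generate 3 frames per rendered frame (Multi Frame Generation).
upscale_factor = 4
generated_per_real = 3

# Share of displayed pixels the GPU actually rendered
rendered_share = (1 / upscale_factor) * (1 / (1 + generated_per_real))
synthetic_share = 1 - rendered_share   # 0.9375, in the ballpark of "up to 95%"
```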
The GTC demo required two RTX 5090s — one rendering the game, the second entirely occupied with neural rendering. NVIDIA promises to optimise down to a single card by launch, but journalists and analysts express serious doubts.
NVIDIA provides controls through the Streamline SDK: region masks to exclude parts of the frame, intensity sliders, and color grading controls.
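As an illustration only (these field names are invented for this sketch, not the actual Streamline SDK API), the kinds of controls described in the text might look like:

```python
# Hypothetical sketch of the control surface described in the article
# (masking, intensity, color grading). Field names are NOT the real
# Streamline SDK API; they only illustrate the shape of such controls.
neural_render_settings = {
    "enabled": True,
    "intensity": 0.5,                # 0.0 = off, 1.0 = full neural relighting
    "exclusion_masks": ["character_faces", "ui_hud"],  # regions left untouched
    "preserve_color_grade": True,    # keep the artists' grading pass intact
}
```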
Bethesda in an official statement following the wave of criticism: "This will all be under our artists' control, and totally optional for players."
Having sliders and masks doesn't solve the fundamental problem. Critics point out:
Steve Karolewics, rendering engineer at Respawn: "DLSS 5 looks like an overbearing contrast, sharpness, and airbrush filter. Remarkably different frames with the rationale of photo-real lighting? Nah, I think I'll stick with the original artistic intent."
Jeff Talbot, concept artist: "This is NOT the direction games should be going in. In every shot, the art direction was taken away for the senseless addition of 'details'. This is just a garbage AI filter."
Danny O'Dwyer, documentarian: described the result as "yassified, looks-maxed freaks."
PC Gamer collected developer reactions under the headline "Bad ending: now every game is slop." Engadget: "Gamers are right to be disgusted by NVIDIA's DLSS 5." YouTube comments under the announcement were almost 100% negative.
The specific examples from the demo that drew the most criticism were the reworked character faces described above.
Not everyone is opposed. Georgian Avasilcutei, an industry veteran (Remember Me, Dishonored 2, Hogwarts Legacy), defended DLSS 5, posting the Dunning-Kruger chart and claiming that critics are at the "peak of ignorance." His argument: a person's face looks radically different depending on angle and lighting — every photographer knows this. DLSS 5 simply gives real-time rendering the quality of light that was previously only available in offline rendering. The algorithm doesn't hallucinate new objects — it reconstructs lighting based on existing geometry.
Digital Foundry was also divided: founder Richard Leadbetter called the demo "one of the most striking in a long time," particularly the material processing — metal, fabric, fruit skin, the behaviour of light in foliage. But opinions within the editorial team diverged.
Jensen Huang in a press interview with Tom's Hardware at GTC: "Well, first of all, they're completely wrong." He stated that DLSS 5 combines control over geometry and textures with generative AI, and that all of it is under "direct developer control."
| | TV Motion Interpolation | DLSS 3/4 Frame Gen | DLSS 5 Neural Rendering |
|---|---|---|---|
| What it generates | Intermediate frames | Intermediate frames | The entire image |
| Input data | 2D pixels (optical flow) | Motion vectors + depth buffer | Colour + motion vectors + scene semantics |
| What it breaks | Rhythm, blur, editing | Latency, UI/HUD | Lighting, materials, faces, style |
| Scale of intervention | Adds frames | Adds frames + pixels | Rewrites ~95% of pixels |
| Author's control | None (Filmmaker Mode = turn it off) | Minimal | Masking, intensity, color grading |
| Industry response | Nolan, Scorsese, Tom Cruise against | Debate about latency | Mass backlash, polarised reaction |
| Manufacturer's position | "Improves the picture" | "More fps" | "The GPT moment for graphics" |
The trajectory is consistent: an increasingly aggressive AI intermediary stands between author and viewer. The TV added frames; it was uncomfortable, but reversible (turn off the setting). DLSS 5 rewrites the image itself, and although it too can be disabled, market pressure (NVIDIA pushing it, publishers supporting it, RTX 50 exclusivity) creates an environment in which turning it off becomes an ever less default choice.
The meme featuring Dorothea Lange's photograph "Migrant Mother" — a desperate woman transformed by AI into a smiling, made-up one, captioned "Nvidia presents DLSS 5" — is an exact metaphor: the technology doesn't improve — it reinterprets, substituting the author's intent with its own idea of what is "better."
As PCWorld put it: "Games are art, and art has purpose. If the GPU simply generates AI-generated content that neither the user nor developer asked for, doesn't that detract from the experience?"
When I finally found and turned off that setting, Taxi Driver came back to where it belonged. Artists received constraints — frame rate, film format — and mastered them. They knew what the viewer would see. Reinterpreting someone else's work can be art — Tarantino reinterprets and creates a second, independent work. But only a handful of people are capable of that, and the result is always a different work, not an improved version of the original. You cannot mass-produce that. DLSS 5 is trying to. To call it an improvement is to not understand what an original is.
Author: Bogdan Sevriukov