
From Soap Opera Effect to DLSS 5: How Image Enhancement Technologies Destroy Artistic Intent

March 19, 2026  ·  Bogdan Sevriukov

Meme by @realmaddox that became a symbol of DLSS 5 criticism: classic Pac-Man (DLSS 5 OFF) vs a "photorealistic" Pac-Man with stubble and a tongue (DLSS 5 ON)

When I moved into a new apartment, it came with a large, modern TV. I'm a film person, and I decided to finally watch Taxi Driver properly: on a big screen rather than a monitor.

After a few minutes I was watching through gritted teeth and couldn't understand why. My first thought was uncomfortable: maybe I just remembered the film wrong? Maybe it was shot worse than I thought, and the big screen had finally exposed the flaws?

But this is Scorsese and Michael Chapman.

And then I understood what it reminded me of. As a child in the early nineties, Soviet TV often broadcast theatre performances — stage productions shot on video, with flat studio lighting. Something alive, but not cinema. Taxi Driver on the new television looked exactly like that.

Americans call this the soap opera effect. For me it was televised theatre. Different cultures, the same video texture, and the same feeling: the film had been made to look cheaper than it is.


Part 1. The Soap Opera Effect — How TVs Turn Cinema Into Cheap Television

What It Is

In the mid-2000s, with the arrival of HD televisions, manufacturers faced a problem: LCD panels (unlike the old CRTs and plasmas) suffered from motion blur — smearing during fast movement. The solution was motion interpolation — a technology that inserts synthetic intermediate frames between the original ones, artificially raising the frame rate from 24 fps to 60 or 120 fps.

Motion interpolation: the algorithm inserts synthetic intermediate frames between originals
How motion interpolation works: top row — original frames with gaps, bottom row — reconstructed intermediate frames. Source: Wikimedia Commons
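The frame-insertion idea can be sketched in a few lines. A real TV uses motion-compensated interpolation (block matching or optical flow estimation), not the naive linear blend below, but the blend makes the core point: every synthetic frame contains pixel values the camera never captured. The function name is mine, purely illustrative.

```python
import numpy as np

def interpolate_naive(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two frames linearly at time t in [0, 1].

    Real TVs use motion-compensated interpolation rather than a plain
    blend, but the principle is identical: synthesize a frame that
    never existed between two that did."""
    return (1.0 - t) * frame_a + t * frame_b

# 24 fps source -> 48 fps output: one synthetic frame between each original pair
frame_a = np.zeros((4, 4), dtype=np.float32)       # stand-in for frame N
frame_b = np.full((4, 4), 200, dtype=np.float32)   # stand-in for frame N+1
mid = interpolate_naive(frame_a, frame_b, 0.5)
print(mid[0, 0])  # 100.0 — a brightness level the camera never recorded
```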

Every manufacturer gave it their own name:

Brand     Marketing name
Samsung   Auto Motion Plus
LG        TruMotion
Sony      MotionFlow
TCL       Action Smoothing
Vizio     Smooth Motion Effect

Why It Looks Like a Soap Opera

The term "soap opera effect" isn't a metaphor — it's a literal reference. Soap operas got their name from advertisers — soap manufacturers who sponsored daytime shows for housewives in America (the first is considered to be Painted Dreams in 1930; the longest-running was Guiding Light — on radio from 1937, on TV from 1952). These shows were shot cheaply: on videotape, at 30/60 fps, with flat studio lighting. Cinema, by contrast, was shot on expensive film at 24 fps.

Over decades, audiences developed a subconscious association:

  • 24 fps + motion blur + high-contrast lighting = cinema, expensive, artistic
  • 60 fps + sharp motion + flat lighting = TV show, cheap, everyday

When a television artificially raises a film's frame rate from 24 to 60, it reproduces exactly the second visual language. The Lord of the Rings starts to look like a talk show.

How It Breaks the Lighting

Motion blur is a tool, not a defect. At 24 fps, each frame is exposed for approximately 1/48th of a second (the 180° shutter angle rule). During this time, moving objects are slightly blurred. This is intentional:

  • Directing attention. The cinematographer uses blur to separate the hero from the blurred background during a pan.
  • Artistic effect. Spielberg shot the battle scenes in Saving Private Ryan with a narrow shutter angle — hence the jagged, jerky image that creates the feeling of documentary footage. Interpolation "smooths" this effect, killing the director's intent.
  • Rhythm and weight of movement. A slow camera pan with a slight blur creates a sense of weight. Without it, movement looks "plastic."
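The 180° rule above is simple arithmetic: per-frame exposure = (shutter angle / 360°) × (1 / fps). A quick sketch of what a narrower angle does to exposure time; the function is illustrative, and the 90° value below is just an example of a narrow angle, not a claim about the exact setting used on any particular film.

```python
def exposure_time_ms(fps: float, shutter_angle_deg: float) -> float:
    """Per-frame exposure from the shutter-angle rule:
    exposure = (angle / 360) * (1 / fps), returned in milliseconds."""
    return (shutter_angle_deg / 360.0) / fps * 1000.0

print(exposure_time_ms(24, 180))  # ≈ 20.83 ms, i.e. 1/48 s — the classic film look
print(exposure_time_ms(24, 90))   # ≈ 10.42 ms — less blur per frame, crisper, jerkier motion
```

Halving the shutter angle halves the blur trail on every moving object, which is exactly the staccato, documentary texture interpolation smooths away.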

Flickering light is calibrated to a specific frame rate. Candles, neon signs, strobes — the cinematographer sees their behaviour at 24 fps. On synthetic intermediate frames, the algorithm creates non-existent brightness transitions — the flicker may disappear or become chaotic.

Lens flares. As the camera moves, a flare travels across the frame. Interpolation "multiplies" it — ghosting and phantom copies appear.

Chiaroscuro lighting. Fincher, Deakins, Lubezki build their frames on sharp light/shadow transitions. When an actor moves, the rhythm of those transitions is designed for 24 frames. Synthetic intermediate frames create shadow positions that never existed in reality.

Artefacts

The interpolation algorithm doesn't understand scene depth — it works with 2D pixels:

  • Halo effect — a ghostly outline around moving objects
  • Morphing — on synthetic frames, faces deform during fast movement
  • Occlusion errors — when an object passes in front of another, the algorithm "invents" the pixels behind it

Destroying Editing Rhythm

An editor cuts a film with frame-level precision. At 24 fps, one frame = ~42 ms. A fast cut = tension. A long take = contemplation. When the TV inserts intermediate frames, a hybrid frame appears at the cut point — containing elements of both the preceding and following shots. The sharpness of the edit is blurred.

The Industry's Response

Christopher Nolan and Martin Scorsese spoke publicly against motion interpolation. Tom Cruise recorded a video message urging people to turn off the setting. In response, the UHD Alliance created Filmmaker Mode — a mode that automatically disables all post-processing, including motion smoothing, to display content as the author intended.


Part 2. DLSS — The Evolution from "Assistant" to "Co-Author"

A Brief History

Version    Year         What It Does
DLSS 1     2019         Upscaling — render fewer pixels, AI fills in up to the target resolution
DLSS 2     2020         Improved upscaling with temporal data
DLSS 3     2022         + Frame Generation — synthetic intermediate frames (a direct analogue of motion interpolation)
DLSS 3.5   2023         + Ray Reconstruction — AI replaces the denoiser for ray tracing
DLSS 4     2025         Multi Frame Generation — up to 3 synthetic frames per 1 real frame
DLSS 5     Autumn 2026  Neural Rendering — AI redraws the entire image with photorealistic lighting

Each version took more control away from the GPU (and from the artist):

  • First — individual pixels
  • Then — entire frames
  • Then — lighting calculations
  • Now — the entire image

DLSS 3/4: Frame Generation — Motion Interpolation for Games

Frame Generation in DLSS 3 and Multi Frame Generation in DLSS 4 are a direct analogue of TV interpolation, but with an important advantage: access to motion vectors (the GPU knows exactly where each object is moving) and a depth buffer (real depth information). The TV guesses from 2D pixels — DLSS knows the 3D structure of the scene.

But problems remain:

  • Latency. A synthetic frame is based on past data. In fast-paced shooters, controls feel mushy and disconnected: you see 120 fps, but input lag corresponds to 30–60 fps. NVIDIA partially compensates with Reflex, but Frame Generation still adds ~6 ms of latency, which is perceptible in competitive play.
  • Artefacts on UI/HUD, particles, and transparencies — the same problems TVs have with subtitles.
  • Ghosting — phantom outlines on sharp camera turns.
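A rough back-of-the-envelope model of why displayed fps and felt latency diverge. It assumes frame generation must buffer roughly one full render interval (it needs the newest real frame before it can interpolate toward it) and reuses the ~6 ms overhead figure quoted above; real numbers vary by implementation, so treat this as a sketch, not a measurement.

```python
def effective_added_latency_ms(render_fps: float, gen_overhead_ms: float = 6.0) -> float:
    """Input latency tracks the *rendered* frame rate, not the displayed one.

    Simplified model: frame generation holds back the newest real frame
    for one render interval to interpolate toward it, plus the
    algorithm's own overhead. Both figures are rough assumptions."""
    render_interval_ms = 1000.0 / render_fps
    return render_interval_ms + gen_overhead_ms

# 30 fps rendered, displayed as "120 fps" via 3 generated frames per real one:
print(effective_added_latency_ms(30))  # ≈ 39.3 ms added — looks like 120 fps, feels closer to 30
```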

Part 3. DLSS 5 — Generative AI Rewrites the Image

What Was Announced

NVIDIA presented DLSS 5 at GTC on March 16, 2026. Jensen Huang called it "the GPT moment for graphics." Release: autumn 2026, exclusive to the RTX 50-series.

DLSS 5 is not upscaling and not frame generation. It is real-time neural rendering. The AI model:

  1. Takes the colour and motion vectors of each frame
  2. Understands the semantics of the scene — recognises skin, hair, fabric, metal, water, foliage
  3. Applies its own photorealistic lighting model to each material type (subsurface scattering for skin, realistic refraction for water, etc.)
  4. Outputs a rewritten frame with "improved" light and materials

The geometry and textures formally remain original. But the light, shadows, reflections, and material behaviour — all of it is recalculated by the neural network.
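The four announced stages can be sketched as a toy pipeline. NVIDIA has published no DLSS 5 internals, so every function and data-structure name below is a hypothetical stand-in, and the "models" are trivial stubs in place of what would be large neural networks.

```python
# Hypothetical sketch of the four announced stages; all names are illustrative.

def segment_materials(colour):
    # Stage 2: scene semantics. Stub: tag each region with its material kind
    # (skin, hair, fabric, metal, water, foliage, ...).
    return {region: meta["material"] for region, meta in colour.items()}

def relight(pixels, material):
    # Stage 3: per-material lighting model (subsurface scattering for skin,
    # refraction for water, ...). Stub: just record which model was applied.
    return {"pixels": pixels, "relit_as": material}

def neural_render(frame):
    colour = frame["colour"]                 # Stage 1: colour input
    motion = frame["motion_vectors"]         # Stage 1: motion vectors (unused by this stub)
    materials = segment_materials(colour)    # Stage 2
    return {region: relight(colour[region]["pixels"], materials[region])
            for region in colour}            # Stages 3–4: rewritten frame

frame = {
    "colour": {"face": {"pixels": [0.8, 0.7], "material": "skin"},
               "lake": {"pixels": [0.2, 0.3], "material": "water"}},
    "motion_vectors": {},
}
out = neural_render(frame)
print(out["face"]["relit_as"])  # skin
```

The structural point survives the simplification: the output is assembled from the model's per-material decisions, not from the renderer's original light transport.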

The Scale of the Changes

Tom's Guide conducted a frame-by-frame analysis: approximately 60% of changes to character faces are explained by improved lighting and material depth. The remaining 40% is neural rendering that adds "fuller lips and sharper jaw lines." According to Remio AI, the complete DLSS pipeline (upscaling + frame generation + neural rendering) replaces up to 95% of pixels from the original frame.

The GTC demo required two RTX 5090s — one rendering the game, the second entirely occupied with neural rendering. NVIDIA promises to optimise down to a single card by launch, but journalists and analysts express serious doubts.

Tools for Developers

NVIDIA provides controls through the Streamline SDK:

  • Masking — excluding specific objects or areas from processing
  • Intensity — adjusting the strength of the effect
  • Color grading — blending, contrast, saturation, gamma
  • Per-scene settings — different parameters for different scenes
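As a sketch of how such controls might be expressed, here is a hypothetical configuration fragment. None of these key names come from NVIDIA's Streamline documentation, which has not publicly specified this interface; they simply mirror the four controls listed above.

```python
# Purely illustrative — key names are assumptions, not the actual Streamline SDK API.
dlss5_config = {
    "masking": {"exclude_tags": ["ui", "hero_face"]},      # opt objects/areas out of processing
    "intensity": 0.4,                                      # strength of the effect, 0.0 = off
    "color_grading": {"blend": 0.5, "contrast": 1.0,
                      "saturation": 1.0, "gamma": 2.2},    # blending, contrast, saturation, gamma
    "per_scene": {"noir_flashback": {"intensity": 0.0}},   # different parameters per scene
}
print(dlss5_config["per_scene"]["noir_flashback"]["intensity"])  # 0.0
```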

Bethesda in an official statement following the wave of criticism: "This will all be under our artists' control, and totally optional for players."

Why the Controls Aren't Enough: The Critics' Position

Having sliders and masks doesn't solve the fundamental problem. Critics point out:

Steve Karolewics, rendering engineer at Respawn: "DLSS 5 looks like an overbearing contrast, sharpness, and airbrush filter. Remarkably different frames with the rationale of photo-real lighting? Nah, I think I'll stick with the original artistic intent."

Jeff Talbot, concept artist: "This is NOT the direction games should be going in. In every shot, the art direction was taken away for the senseless addition of 'details'. This is just a garbage AI filter."

Danny O'Dwyer, documentarian: described the result as "yassified, looks-maxed freaks."

PC Gamer collected developer reactions under the headline "Bad ending: now every game is slop." Engadget: "Gamers are right to be disgusted by NVIDIA's DLSS 5." YouTube comments under the announcement were almost 100% negative.

Specific examples from the demo:

Hogwarts Legacy — DLSS 5 OFF vs ON. With DLSS 5 enabled, the face of a 15-year-old student looks significantly older. Source: NVIDIA
Resident Evil Requiem, Grace Ashcroft — DLSS 5 OFF vs ON. Character features altered by the AI. Source: NVIDIA

The Defenders' Position

Not everyone is opposed. Georgian Avasilcutei, an industry veteran (Remember Me, Dishonored 2, Hogwarts Legacy), defended DLSS 5, posting the Dunning-Kruger chart and claiming that critics are at the "peak of ignorance." His argument: a person's face looks radically different depending on angle and lighting — every photographer knows this. DLSS 5 simply gives real-time rendering the quality of light that was previously only available in offline rendering. The algorithm doesn't hallucinate new objects — it reconstructs lighting based on existing geometry.

Digital Foundry was also divided: founder Richard Leadbetter called the demo "one of the most striking in a long time," particularly the material processing — metal, fabric, fruit skin, the behaviour of light in foliage. But opinions within the editorial team diverged.

NVIDIA's Response

Jensen Huang in a press interview with Tom's Hardware at GTC: "Well, first of all, they're completely wrong." He stated that DLSS 5 combines control over geometry and textures with generative AI, and that all of it is under "direct developer control."


Part 4. The Artistic Process: What Exists and What Doesn't

What Exists

  1. SDK controls from NVIDIA: masking, intensity, color grading, per-scene settings. These are real tools that a developer can use.
  2. DLSS preview in Unreal Engine viewport — available via the official NVIDIA plugin. An artist can enable DLSS directly in the editor and see the result.
  3. Player optionality — DLSS 5 can be disabled. Bethesda and NVIDIA emphasise that it's an option, not a mandate.

What Still Doesn't Exist

  1. Predictability. The AI model is a black box. An artist cannot guarantee how the neural network will process a specific material or scene. The model gets updated — behaviour changes. Controls like "intensity" and "masking" are blunt instruments (on/off, more/less), not a precise calibration of exactly how the AI interprets light on a specific surface.
  2. Stylistic neutrality. DLSS 5 is trained on photorealistic data. It pulls everything towards photorealism. Stylised games (cel-shading, noir, deliberately dark palettes) risk being "corrected" towards a uniform realistic render. Masking can exclude an object from processing, but it cannot tell the AI "process this, but preserve my noir style."
  3. A guarantee of consistency across versions. There is no specification guaranteeing that specific patterns will be processed identically after a model update.
  4. A complete artistic workflow. There is technical integration (SDK, viewport preview, QA testing). But there is no methodology — a systematic approach for "how to design a visual knowing it will pass through neural rendering." Artists still create assets for a traditional pipeline and then check whether AI has broken the result.

Part 5. The Parallel: From Motion Interpolation to DLSS 5

The most discussed DLSS 5 comparison: Grace Ashcroft from Resident Evil Requiem. Left — original, right — after neural rendering. Source: NVIDIA / Know Your Meme
                         TV Motion Interpolation            DLSS 3/4 Frame Gen              DLSS 5 Neural Rendering
What it generates        Intermediate frames                Intermediate frames             The entire image
Input data               2D pixels (optical flow)           Motion vectors + depth buffer   Colour + motion vectors + scene semantics
What it breaks           Rhythm, blur, editing              Latency, UI/HUD                 Lighting, materials, faces, style
Scale of intervention    Adds frames                        Adds frames + pixels            Rewrites ~95% of pixels
Author's control         None (Filmmaker Mode = turn off)   Minimal                         Masking, intensity, color grading
Industry response        Nolan, Scorsese, Cruise against    Debate about latency            Mass backlash, polarised reaction
Manufacturer's position  "Improves the picture"             "More fps"                      "The GPT moment for graphics"

The trajectory is consistent: an increasingly aggressive AI intermediary stands between author and viewer. The TV added frames; that was uncomfortable, but reversible (turn off the setting). DLSS 5 rewrites the image itself, and although it too can be disabled, market pressure (NVIDIA pushing it, publishers supporting it, RTX 50 exclusivity) makes "turning it off" an ever less default choice.

DLSS 5 OFF — Gollum, DLSS 5 ON — Andy Serkis. Source: Know Your Meme
"DLSS 5 ON": characters looking like they've been run through a "yassify filter". Source: Know Your Meme / PC Gamer

The meme featuring Dorothea Lange's photograph "Migrant Mother" — a desperate woman transformed by AI into a smiling, made-up one, captioned "Nvidia presents DLSS 5" — is an exact metaphor: the technology doesn't improve — it reinterprets, substituting the author's intent with its own idea of what is "better."

As PCWorld put it: "Games are art, and art has purpose. If the GPU simply generates AI-generated content that neither the user nor developer asked for, doesn't that detract from the experience?"

When I finally found and turned off that setting, Taxi Driver came back to where it belonged. Artists received constraints — frame rate, film format — and mastered them. They knew what the viewer would see. Reinterpreting someone else's work can be art — Tarantino reinterprets and creates a second, independent work. But only a handful of people are capable of that, and the result is always a different work, not an improved version of the original. You cannot mass-produce that. DLSS 5 is trying to. To call it an improvement is to not understand what an original is.
