February 6, 2026

Pixels, politics, and PS1 magic

How virtual textures work

Crash’s texture magic blows minds — and ignites an art vs. tech smackdown

TLDR: A dev explains how Crash Bandicoot streamed only the parts of textures players could see, a trick that still powers giant worlds today. Comments erupted as engineers warned about modern overhead, artists said the real cost is content creation, and others pushed to retire the old Lenna demo image.

A retro dev tale just crashed the internet’s chill. In a new deep dive, Crash Bandicoot’s Andy Gavin reveals how the team squeezed huge, detailed worlds onto the tiny original PlayStation by loading only what the player could see — like streaming the stage while you’re on it. The tech crowd cheered the wizardry, saying this “only stream what’s visible” trick still powers today’s massive worlds and even bioimaging. But then the comments went full arena.

One camp shouted that modern versions come with hidden bills. As one put it, you end up feeding the graphics chip (GPU) more than you actually render. Another line that had folks spitting coffee: “Is this AI?” Meanwhile, a surprise subplot erupted when a commenter demanded we retire the classic “Lenna” test image — the famous photo long used in imaging demos — sparking a culture-war side thread that rivaled the tech talk.

And then the artists showed up. They argued the real “cost” wasn’t the graphics card or video memory (VRAM), but humans painting endless unique details once the tech removed limits. Cue memes about the PS1’s “2MB RAM being smaller than a selfie” and quips like “virtual textures, virtual patience.” Verdict: genius tech, real-world drama — and Crash is still causing chaos decades later.

Key Points

  • Crash Bandicoot’s developers decomposed levels into fixed-size pages and used a precomputed visibility layout to stream only needed data on the original PlayStation.
  • This approach shifted constraints from total RAM capacity to page size, making effective level size tied to disk space, not main memory.
  • Modern GPUs remain limited by how much data they can efficiently access at any moment, a constraint echoed in scientific visualization workflows.
  • Virtual texturing addresses large textures by converting them into mip chains split into pages and streaming only those pages that project onto the screen (a rough sketch of this page-selection step follows this list).
  • 2D systems like Google Maps and Microsoft Deep Zoom have straightforward paging runtimes, while 3D rendering must sample multiple mip levels simultaneously and select level of detail (LOD) per pixel.
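
For the technically curious, here is a minimal Python sketch of that page-selection idea. The texture size, page size, and sample format are assumptions made up for illustration, not Gavin’s actual implementation; real engines run this on the GPU with a feedback pass, but the shape of the work is the same: pick a mip level per sample from its screen-space footprint, find the page it lands on, and stream only that set of pages.

import math

# Assumed numbers for illustration: a 16384x16384 virtual texture cut into 128x128 pages.
VIRTUAL_SIZE = 16384
PAGE_SIZE = 128
MAX_MIP = int(math.log2(VIRTUAL_SIZE // PAGE_SIZE))  # coarsest mip that still fills one page

def select_mip(duv_dx, duv_dy):
    """Per-pixel LOD: pick a mip level from screen-space UV derivatives,
    the same rule ordinary mipmapping uses."""
    # Footprint of one screen pixel, measured in mip-0 texels.
    fx = math.hypot(duv_dx[0], duv_dy[0]) * VIRTUAL_SIZE
    fy = math.hypot(duv_dx[1], duv_dy[1]) * VIRTUAL_SIZE
    lod = math.log2(max(fx, fy, 1.0))
    return min(MAX_MIP, max(0, int(lod)))

def page_for(uv, mip):
    """Map a UV sample at a given mip level to the fixed-size page containing it."""
    mip_size = VIRTUAL_SIZE >> mip                  # texture resolution at this mip
    tx = min(mip_size - 1, int(uv[0] * mip_size))   # texel coordinates
    ty = min(mip_size - 1, int(uv[1] * mip_size))
    return (mip, tx // PAGE_SIZE, ty // PAGE_SIZE)

def pages_to_stream(samples):
    """Collect the unique (mip, page_x, page_y) pages that visible samples touch;
    only these pages need to move from disk into GPU memory."""
    requests = set()
    for uv, duv_dx, duv_dy in samples:
        requests.add(page_for(uv, select_mip(duv_dx, duv_dy)))
    return requests

# Toy usage: two close-up samples share one fine-mip page; a distant sample wants a coarse mip.
samples = [
    ((0.10, 0.20), (1 / 2048, 0.0), (0.0, 1 / 2048)),
    ((0.11, 0.21), (1 / 2048, 0.0), (0.0, 1 / 2048)),
    ((0.80, 0.90), (1 / 64, 0.0), (0.0, 1 / 64)),
]
print(pages_to_stream(samples))   # e.g. {(3, 1, 3), (7, 0, 0)}

The request set is tiny next to the full texture, which is the whole point; the catch the commenters raised is that building it and keeping those pages resident is work spent feeding the GPU before a single pixel is shaded.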

Hottest takes

"You spend more time feeding the GPU than rendering" — direwolf20
"Retire the Lenna image" — JayGuerette
"Artists had to fill an endless world" — socalgal2
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.