January 7, 2026

Pixels, paths, and petty burns

Vector graphics on GPU

Is this fresh magic or 2011 déjà vu? Commenters roast, reminisce, and demand quality

TLDR: A developer argues vector graphics should be drawn on the GPU with a simple per‑pixel method. Commenters clap back that it’s outdated, low‑quality for fonts, and already solved by past tech, while others demand affordable “Slug‑level” tools—making this a spicy split between nostalgia and practicality.

A dev says our screens should draw shapes and letters using the graphics chip (the GPU) instead of the main brain (the CPU), pitching a per‑pixel strategy and an analogy of “14,000 tiny people reading a book.” The crowd? Absolutely not quiet. One camp is roasting the idea as old news: badlibrarian calls it “the worst of both worlds” and points to NV Path Rendering from 2011, while masswerk wants “(2022)” slapped on the title like a warning label. Meanwhile, larodi just wants something with “Slug‑level” power that isn’t pricey, sparking jokes about mythical vector unicorns.

Quality alarms ring loud: virtualritz says this is basically “box filtering”: fine for quick shapes, not for crisp fonts or high-end visuals. Folks reminisce about Apple’s Quartz 2D dream and drop a 13‑year‑old thread for context. The vibe: half the commenters yell we’ve done this, the other half plead do it right and make it affordable. There’s snark about “Greek letter alpha as the accumulator” and distrust of graphics drivers, plus a small chorus asking: if GPUs are so fast, why is high-quality text still so hard? Drama level: high. Consensus: contested.

Key Points

  • The article advocates moving vector shape and text rasterization from CPU to GPU.
  • It explains a basic rasterization method: for each pixel, cast a horizontal ray along its row and count the signed crossings with the path’s segments to get a winding number.
  • Under the non-zero fill rule, pixels with non-zero winding numbers are filled, regardless of path direction.
  • The method generalizes to complex vector paths, including Bézier paths, though anti-aliasing is not covered.
  • To leverage GPU parallelism, the article proposes per-pixel rasterization, structuring the work as many small, independent tasks (the Apple M1 GPU’s thread count is used as the illustration); see the sketch after this list.
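
To make the per-pixel idea concrete, here is a minimal CUDA sketch of the technique the key points describe. It is not the article’s code (the article talks about Apple’s M1 GPU), and the names Segment, windingDelta, and fillPath are made up for illustration. Each GPU thread owns one pixel, casts a horizontal ray from the pixel center through a list of pre-flattened line segments, accumulates a signed winding number, and fills under the non-zero rule; Bézier flattening and anti-aliasing are left out, as in the summary above.

    // Hypothetical sketch, not the article's code: one thread per pixel,
    // non-zero winding via a horizontal ray from the pixel center.
    #include <cstdio>
    #include <cuda_runtime.h>

    struct Segment { float x0, y0, x1, y1; };  // directed edge of a flattened path

    // Signed contribution of one segment to a rightward ray from (px, py).
    __device__ int windingDelta(Segment s, float px, float py) {
        bool up   = s.y0 <= py && s.y1 > py;   // crosses the scanline going up
        bool down = s.y1 <= py && s.y0 > py;   // crosses the scanline going down
        if (!up && !down) return 0;
        float t  = (py - s.y0) / (s.y1 - s.y0);
        float xi = s.x0 + t * (s.x1 - s.x0);   // x where the segment meets y = py
        if (xi <= px) return 0;                // crossing is left of the pixel
        return up ? 1 : -1;
    }

    // One thread per pixel: non-zero winding number means "inside the path".
    __global__ void fillPath(unsigned char* image, int width, int height,
                             const Segment* segs, int numSegs) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        float px = x + 0.5f, py = y + 0.5f;    // sample at the pixel center
        int winding = 0;
        for (int i = 0; i < numSegs; ++i)
            winding += windingDelta(segs[i], px, py);
        image[y * width + x] = (winding != 0) ? 255 : 0;  // no anti-aliasing
    }

    int main() {
        const int W = 32, H = 16;
        // A closed triangle, already flattened into three directed segments.
        Segment tri[3] = {{4, 2, 28, 8}, {28, 8, 4, 14}, {4, 14, 4, 2}};
        Segment* dSegs;  unsigned char* dImg;
        cudaMalloc(&dSegs, sizeof(tri));
        cudaMalloc(&dImg, W * H);
        cudaMemcpy(dSegs, tri, sizeof(tri), cudaMemcpyHostToDevice);
        dim3 block(8, 8), grid((W + 7) / 8, (H + 7) / 8);
        fillPath<<<grid, block>>>(dImg, W, H, dSegs, 3);
        unsigned char img[W * H];
        cudaMemcpy(img, dImg, W * H, cudaMemcpyDeviceToHost);
        for (int y = 0; y < H; ++y) {          // crude ASCII dump of the coverage
            for (int x = 0; x < W; ++x) putchar(img[y * W + x] ? '#' : '.');
            putchar('\n');
        }
        cudaFree(dSegs); cudaFree(dImg);
        return 0;
    }

The draw the article is banking on is that every pixel’s test is independent, so thousands of GPU threads (the “14,000 tiny people”) can each run this small loop at once; the quality complaints in the thread are about what this single-sample coverage test does to font edges.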

Hottest takes

"Really, inst there anything which comes Slug-level of capabilities and is not super expensive?" — larodi
"Author uses a lot of odd, confusing terminology and brings CPU baggage to the GPU creating the worst of both worlds." — badlibrarian
"Unless I miss something I think that this describes box filtering." — virtualritz
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.