About memory pressure, lock contention, and Data-oriented Design

From frozen chats to lightning speed—now it’s a “cache it vs. design it” brawl

TLDR: A Matrix dev says a frozen chat list got dramatically faster with new data layout and smarter updates. Commenters cheer the speedup but argue the real hero is caching work on updates instead of during every sort—an important lesson for anyone who wants snappier apps without deep rewrites.

A Rust dev working on the Matrix chat app’s Room List says they crushed a long‑running “frozen list” bug and made it scream—think 98.7% faster and a wild 7,718% throughput boost. In the post, they credit data‑oriented design—basically laying out info so computers can grab it faster—and untangling lock contention (aka too many parts grabbing the same lock at once). Cue the comment section turning into an after‑party debate.

The loudest take? “It’s not magic, it’s moving the heavy work off the hot path.” One top‑liked commenter argues the real win is caching the stuff you sort and filter, so the slow, fussy steps happen when data changes—not every time you compare two items. Translation: do the chores once, not every second. Fans of the “just cache it, bro” school threw confetti, while data‑oriented devotees clapped back that reorganizing the data is what makes that caching actually work. Jokes flew about “sorting in the middle of a stampede,” and someone christened the bug “Locktopus Prime.” Despite the drama, everyone agrees on one thing: the Room List went from molasses to microwave. The real fight is over the headline credit—data‑oriented design or smart caching—and the internet wants receipts.
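The commenter’s “do the chores once” idea can be sketched in a few lines. This is a hypothetical illustration, not the Matrix SDK’s actual code: the `Room`, `RoomList`, `recency_ts`, and `on_room_update` names are made up for the example. The point is that the expensive work (computing a sort key) happens once on the cold path when data changes, so the hot path (sorting) only does cheap reads:

```rust
#[derive(Clone)]
struct Room {
    name: String,
    recency_ts: u64, // cached sort key: written on update, only read while sorting
}

struct RoomList {
    rooms: Vec<Room>,
}

impl RoomList {
    // Hot path: the comparator touches nothing but the cached key,
    // so each of the O(n log n) comparisons stays cheap.
    fn sort_by_recency(&mut self) {
        self.rooms.sort_by(|a, b| b.recency_ts.cmp(&a.recency_ts));
    }

    // Cold path: the "expensive" step (here, just stamping a timestamp)
    // runs once per change event, not once per comparison.
    fn on_room_update(&mut self, name: &str, new_ts: u64) {
        if let Some(room) = self.rooms.iter_mut().find(|r| r.name == name) {
            room.recency_ts = new_ts;
        }
    }
}

fn main() {
    let mut list = RoomList {
        rooms: vec![
            Room { name: "rust".into(), recency_ts: 10 },
            Room { name: "matrix".into(), recency_ts: 30 },
            Room { name: "hn".into(), recency_ts: 20 },
        ],
    };
    list.on_room_update("rust", 40); // cold path: cache the new key
    list.sort_by_recency();          // hot path: cheap comparisons only
    let names: Vec<&str> = list.rooms.iter().map(|r| r.name.as_str()).collect();
    println!("{:?}", names); // ["rust", "matrix", "hn"]
}
```

The standard library even has a built-in shortcut for the same trick, `sort_by_cached_key`, which computes each key exactly once before sorting.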

Key Points

  • The article addresses a performance issue in the Matrix Rust SDK’s Room List and introduces Data-oriented Design as part of the solution.
  • The Room List emits a reactive stream of diffs (Stream<Item = Vec<VectorDiff<Room>>>) rather than storing rooms directly.
  • VectorDiff, from the eyeball-im crate, represents granular changes (e.g., Set, Remove, PushFront) to an ObservableVector.
  • A room’s preview update and recency change trigger a sequence of diffs to reposition it at the top of the list.
  • The author reports performance gains: 98.7% reduction in execution time and a 7718.5% increase in throughput, with VectorDiff<Room> sized at 72 bytes.
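To make the diff-stream idea concrete, here is a simplified stand-in, not the real `eyeball-im` types, showing how a consumer might apply the `Set` / `Remove` / `PushFront` variants named above to a local copy of the list. The enum and `apply` function are assumptions made for illustration:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Room {
    id: String,
}

// Mirrors a subset of the VectorDiff variants mentioned in the Key Points.
enum VectorDiff {
    Set { index: usize, value: Room },
    Remove { index: usize },
    PushFront { value: Room },
}

// Replay one granular change against a locally held Vec<Room>.
fn apply(list: &mut Vec<Room>, diff: VectorDiff) {
    match diff {
        VectorDiff::Set { index, value } => list[index] = value,
        VectorDiff::Remove { index } => {
            list.remove(index);
        }
        VectorDiff::PushFront { value } => list.insert(0, value),
    }
}

fn main() {
    let mut rooms = vec![Room { id: "general".into() }, Room { id: "rust".into() }];
    // A "room moved to the top" update, as described in the article, arrives
    // as a short sequence of diffs rather than a full re-send of the list:
    apply(&mut rooms, VectorDiff::Remove { index: 1 });
    apply(&mut rooms, VectorDiff::PushFront { value: Room { id: "rust".into() } });
    println!("{:?}", rooms); // "rust" is now first
}
```

Because each item in the stream is a small diff, UI consumers can patch their view in place instead of re-diffing or re-rendering the whole room list on every update.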

Hottest takes

“moving expensive work off the hot path … cache the sort/filter inputs” — pastescreenshot
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.