What an unprocessed photo looks like

A raw Christmas tree photo triggered a “real vs fake” showdown

TLDR: A blogger showed how a camera’s dull raw data becomes a bright, colorful Christmas tree photo through color and brightness tweaks. Commenters argued over the example choice, debated screen limitations and gamma curves, and warned about AI “hallucinations”—raising big questions about what counts as a real photo and why that trust matters.

A blogger on Maurycy’s site dropped a step‑by‑step reveal of what a camera truly “sees” before your holiday pic gets pretty: flat gray sensor data, color guessed from a filter grid, and a brightness curve to stop the image from looking way too dark.

The community? Instant fireworks. One camp cheered the nerdy deep‑dive; another slammed the choice of a dim, multicolored Christmas tree as “a bad example,” arguing it muddied the “ground truth” of what the final photo should look like. Cue the Gamma Gang vs the Monitor Mafia: skeptics asked whether bright, fancy screens with more shades could just show the image “as‑is,” no brightness tricks at all.

Meanwhile, the party took a sharp turn when someone warned that camera makers are sneaking in AI that hallucinates what photos “should” look like, turning “processing” into “pretending.” The meme machine kicked in: commenters joked about the “green Grinch” cast thanks to extra green pixels, while others declared modern photography is “marketing with math.” Love it or loathe it, the thread’s vibe is clear: people want to know where reality stops and software starts—and whether your festive tree is a photo or a filter fever dream.

Key Points

  • Raw camera data from a 14-bit ADC appears gray and low-contrast; effective black and white points must be read off the histogram and the data remapped to the full output range.
  • Color information is reconstructed from a Bayer filter using demosaicing; a simple neighbor-averaging method yields a usable color image.
  • Linear sensor data looks too dark on displays due to limited dynamic range and nonlinear human brightness perception; sRGB encodes more detail in darker tones.
  • Applying a nonlinear curve can introduce a green cast because sensors and RGGB color filter arrays (CFAs) emphasize green; naive demosaicing can exacerbate it.
  • Proper white balance must be performed on linear data by scaling each channel, after which a nonlinear curve can be applied for perceptual brightness.
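The black/white point remap from the first key point can be sketched in a few lines. This is a minimal illustration, not the article’s code; the 14-bit range and the sample black/white points here are made up for the example:

```python
def remap_levels(x, black, white):
    """Linearly remap a raw value so [black, white] maps to [0.0, 1.0], clipped."""
    y = (x - black) / (white - black)
    return min(1.0, max(0.0, y))

# A 14-bit ADC yields values 0..16383; these black/white points are hypothetical.
BLACK, WHITE = 800, 15000
print(remap_levels(800, BLACK, WHITE))    # fully black
print(remap_levels(15000, BLACK, WHITE))  # fully white
```

Anything at or below the black point clips to 0, anything at or above the white point clips to 1, which is what stretching the histogram to full contrast amounts to.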
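The “simple neighbor-averaging” demosaic from the second key point might look like the sketch below. The 3×3 window and the exact RGGB tiling are assumptions for illustration, not a transcription of the author’s method:

```python
def demosaic_rggb(mosaic):
    """Naive demosaic: for each pixel, average the same-color sites
    in a 3x3 window of an RGGB Bayer mosaic (list of lists of values)."""
    h, w = len(mosaic), len(mosaic[0])

    def color_at(r, c):  # RGGB tiling: R G / G B, repeating
        if r % 2 == 0:
            return 'R' if c % 2 == 0 else 'G'
        return 'G' if c % 2 == 0 else 'B'

    out = []
    for r in range(h):
        row = []
        for c in range(w):
            px = {}
            for ch in 'RGB':
                vals = [mosaic[rr][cc]
                        for rr in range(max(0, r - 1), min(h, r + 2))
                        for cc in range(max(0, c - 1), min(w, c + 2))
                        if color_at(rr, cc) == ch]
                px[ch] = sum(vals) / len(vals)
            row.append((px['R'], px['G'], px['B']))
        out.append(row)
    return out
```

Because each 2×2 tile carries two green sites against one red and one blue, this kind of naive averaging is exactly where the thread’s “green Grinch” cast can creep in.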
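The nonlinear curve in the third key point is, for sRGB, a standardized transfer function: linear near black, then a power segment. This is a direct transcription of the standard encoding, applied to linear values in [0, 1]:

```python
def srgb_encode(linear):
    """sRGB transfer function: linear light in [0, 1] -> display-encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear          # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

# Linear mid-gray (~18% reflectance) lands near the middle of the encoded range,
# which is why linear data shown without a curve looks far too dark.
print(srgb_encode(0.18))  # ≈ 0.46
```

The curve spends most of its output codes on dark tones, matching the point that sRGB “encodes more detail in darker tones” where human brightness perception is most sensitive.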
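White balancing on linear data, per the last key point, is just a per-channel gain applied before the curve. Deriving the gains from a patch that should be neutral gray is one common approach; it is an assumption here, not necessarily what the article does:

```python
def gains_from_gray(gray_rgb):
    """Per-channel gains that make a supposedly-neutral linear patch come out equal,
    normalizing to the green channel (a common convention)."""
    g = gray_rgb[1]
    return tuple(g / v for v in gray_rgb)

def white_balance(rgb, gains):
    """Scale each linear channel by its gain, clipping to 1.0."""
    return tuple(min(1.0, v * g) for v, g in zip(rgb, gains))

# A linear gray patch that came off the sensor with a color cast:
patch = (0.40, 0.50, 0.45)
gains = gains_from_gray(patch)
print(white_balance(patch, gains))  # neutral: all three channels equal
```

Doing this on linear data first, and only then applying the sRGB curve, avoids baking the color cast into the nonlinearly compressed values.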

Hottest takes

“pity the author chose such a poor example for the explanation” — throw310822
“use AI to recognise objects and hallucinate what they ‘should’ look like” — userbinator
“modern photography is just signal processing with better marketing” — barishnamazov
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.