April 15, 2026

Let your Mac hustle while you sleep

Darkbloom – Private inference on idle Macs

Airbnb for AI: Get paid for your sleepy Mac — fans cheer, skeptics say the math is sus

TLDR: Darkbloom wants to rent idle Apple Silicon Macs as private, cheaper AI servers via an OpenAI-style API. The community is split between excitement over easy earnings and price drops, and skepticism about Mac-only lock-in, privacy guarantees, and “too-good-to-be-true” profits.

Darkbloom dropped a bomb: turn your idle Apple Silicon Mac into an AI side hustle, with an OpenAI-compatible API and “up to 70% cheaper” inference that’s end-to-end encrypted. The crowd immediately split into camps. The hype crew called it the “Airbnb for compute,” cheering the idea of everyday laptops earning cash while you sleep, and loving that devs can just plug in via a familiar OpenAI-style API.

But the skeptics brought popcorn. The hottest fight? Money. One commenter did the napkin math and cried foul: if you can pay off a Mac mini in months and bank $1–2k a month after, why isn’t Darkbloom just buying all the Macs themselves? Cue a thread of “too good to be true” debates. Privacy was the other cage match: supporters point to Apple’s secure hardware (the Secure Enclave, basically a locked box on the chip) and Darkbloom’s claim that operators can’t see your data. Critics shot back with “okay, but can the host still peek at memory?” and asked for real-world proof, not just a whitepaper.
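The commenter's objection is easy to reproduce. Here is a minimal sketch of that napkin math, using the figures floated in the thread ($1–2k/month, pennies of electricity); all numbers are illustrative, not Darkbloom's published rates:

```python
# Napkin math behind the "too good to be true" thread: hypothetical payback
# period for a Mac mini at the earnings the commenter cites.

def payback_months(hardware_cost, monthly_revenue, hours_per_day=24,
                   electricity_per_hour=0.02):
    """Months needed to recoup hardware cost, net of electricity."""
    monthly_electricity = electricity_per_hour * hours_per_day * 30
    net = monthly_revenue - monthly_electricity
    if net <= 0:
        raise ValueError("electricity cost exceeds revenue")
    return hardware_cost / net

# A $599 Mac mini earning $1,000/month at $0.02/hour electricity
# pays for itself in under a month -- which is exactly why the
# commenter asked why Darkbloom wouldn't buy the Macs itself.
print(f"payback in {payback_months(599, 1000):.1f} months")
```

If the real per-Mac revenue were that high, vertical integration would dominate; the skeptics' implicit conclusion is that actual earnings must be far lower.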

Then came the Apple drama. “Why only Macs?” sparked a chorus of open-hardware fans demanding PCs and phones. Others shrugged, “Apple wins again,” arguing the Neural Engine and tight security make Mac-first inevitable. Bonus meme: multiple users pitched better names — “Inferanet,” anyone?

Key Points

  • Eigen Labs introduced Darkbloom, a decentralized inference network that uses idle Apple Silicon Macs to serve AI inference.
  • The service offers an OpenAI‑compatible API for chat, image generation, and speech‑to‑text with end‑to‑end encryption.
  • Darkbloom claims up to 70% lower costs than centralized alternatives, positioning its inference at roughly half the price of comparable hosted APIs.
  • For hardware owners, it advertises passive earnings, estimates electricity costs at $0.01–$0.03/hour, and states operators keep 95–100% of revenue.
  • The project critiques the current GPU→hyperscaler→API supply chain and argues verifiable privacy is essential, asserting operators cannot observe inference data via four independently verifiable protections.
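For developers, "OpenAI-compatible" means the familiar `/v1/chat/completions` wire format pointed at a different base URL. The sketch below assembles such a request without sending it; the endpoint, API key format, and model name are all hypothetical, since Darkbloom has not published these details:

```python
# Sketch of an OpenAI-style chat request aimed at a hypothetical
# Darkbloom endpoint. Only the base URL changes versus a call to OpenAI.
import json

def build_chat_request(base_url, api_key, model, messages):
    """Assemble URL, headers, and JSON body for an OpenAI-style chat call."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

url, headers, body = build_chat_request(
    "https://api.example-darkbloom.invalid",  # hypothetical endpoint
    "db-key-123",                             # placeholder key format
    "llama-3-8b",                             # placeholder model name
    [{"role": "user", "content": "Hello from an idle Mac"}],
)
print(url)
```

In practice you would hand the same base URL to any OpenAI-compatible client library, which is precisely why commenters found the drop-in pitch appealing.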

Hottest takes

"Why only Macs?" — DeathArrow
"why wouldn’t their business model just be buying mac minis?" — kennywinker
"So Apple won in some strange way again?" — chaoz_
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.