Open Source Implementation of Apple's Private Compute Cloud

Apple-style private AI goes open—cheers, nitpicks, and a rival crash the party

TL;DR: OpenPCC offers an open-source way to run AI privately on your own machines, inspired by Apple's Private Cloud Compute. Commenters cheer the openness but argue over what "implementation" means, while skeptics note the model host can still see your text in plaintext; a rival's plug stokes trust-versus-verify drama.

OpenPCC just dropped an open-source take on Apple’s Private Cloud Compute, promising private AI on your own servers with encrypted streams, hardware proving it’s legit, and requests that can’t be traced back to you. Fans are hyped: kiwicopple gushes about it being properly open-source and even wonders if it could help hide sensitive wearable logs. Cue the drama.

One camp flexes the “it’s an implementation, so it just has to behave the same” angle, while another pounces on semantics. kreetx fires back that a re-implementation doesn’t need to be in the same programming language—translation: stop gatekeeping and let the code speak.

Then the skeptics show up. ryanMVP reads the whitepaper and points out a spicy detail: the company running the model can still see your text. Everyone else—routers, identity brokers—gets blocked, but the model host isn’t blind. The crowd wants a crystal-clear list of what’s truly hidden and who can peek.

And, because it's the internet, a competitor pops in: derpsteb plugs privatemode.ai and their "we do this too" stack. The thread devolves into meme law ("trust, but verify", "open-source vs source-available") with jokes about Apple asking for a pull request and privacy theater vs privacy for real.

Key Points

  • OpenPCC is an open-source framework for provably private AI inference inspired by Apple’s Private Cloud Compute.
  • Privacy is enforced through encrypted streaming, hardware attestation, and unlinkable requests, with a goal of community-governed transparency.
  • A whitepaper is available on GitHub, and a client repository provides a Go client and a C library for Python and JavaScript clients.
  • The usage example demonstrates sending OpenAI API-format inference requests and routing to specific models via request headers.
  • Development workflows use the Mage tool, with in-memory services available for local testing via runMemServices and runClient commands.
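For flavor, the OpenAI API-format request mentioned above can be sketched roughly like this. This is a minimal illustration, not OpenPCC's actual client: the endpoint URL and the `X-Model` routing header name are assumptions invented for the example (the key points only say routing happens "via request headers"), and the real clients are Go or C-backed, not raw Python:

```python
import json

def build_inference_request(model: str, prompt: str) -> dict:
    """Build an OpenAI chat-completions-shaped request routed by header.

    Hypothetical sketch: the URL and header name are illustrative only.
    """
    return {
        "url": "http://localhost:8080/v1/chat/completions",  # assumed local gateway
        "headers": {
            "Content-Type": "application/json",
            "X-Model": model,  # hypothetical model-routing header
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_inference_request("example-model", "Summarize this thread.")
print(req["headers"]["X-Model"])
print(json.loads(req["body"])["messages"][0]["role"])
```

The point of the shape: any OpenAI-compatible client can talk to the gateway, and the header (not the URL) decides which model serves the request.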

Hottest takes

"impressive work jmo - thanks for open sourcing this (and OSI-compliant)" — kiwicopple
"the inference provider still has the ability to access the prompt and response plaintext" — ryanMVP
"If someone re-implements or reverses a service then it doesn't need to be in the same language" — kreetx
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.