I run multiple $10K MRR companies on a $20/month tech stack

Bootstrapped boss says $20 runs his companies — internet splits over potato servers

TLDR: A founder claims he runs multiple $10k-per-month businesses on a $20 setup using a cheap server and a home GPU for AI. Comments explode into a brawl over “potato” frugality vs. reliability, Go vs. Python, and a swapfile PSA, with bonus requests for coding agents and a quick MRR definition.

Pitch-night drama alert: a founder bragged he runs multiple $10k‑a‑month businesses on just $20/month, got told by investors, “what do you even need funding for,” and the comments went feral. He skips big cloud bills, rents a cheap server (a VPS = a low-cost rented computer), writes backends in Go, and even runs AI at home on a used RTX 3090. He name-drops past tools like websequencediagrams.com and eh-trade.ca, then basically says: keep it simple, keep it cheap.

The crowd immediately split. One camp cheered the frugal hustle, chanting “VCs hate him” and meme-ing his “potato server” handling “10,000s of requests.” The other camp shouted “please don’t run your business on pocket change,” worried about reliability and the day the site actually goes viral. The nerdiest (and loudest) fight: Go vs. Python. One engineer barked that Python is fine and threw shade at using SQLite in production. Another went full sysadmin and declared you should “always use a swap file.” Meanwhile, the AI hype squad wanted a coding agent like Claude in the mix, and one helpful soul translated the jargon: MRR = monthly recurring revenue.

Between Ollama and vLLM, “dusty GPU prints tokens at home” became the day’s mood. Love it or hate it, the $20 stack sparked a full-on internet brawl.

Key Points

  • The author claims to run multiple $10K MRR businesses on an ultra-lean stack costing about $20/month.
  • He recommends a single low-cost VPS (e.g., Linode or DigitalOcean at $5–$10/month) over complex AWS setups that can cost ~$300/month before users.
  • For backends, he advocates Go for performance and low memory usage, deploying as a static binary via scp.
  • He suggests using local AI for batch tasks, running vLLM on a used RTX 3090 GPU to avoid recurring API fees.
  • He proposes starting local AI experimentation with Ollama (e.g., running models like qwen3:32b) to iterate quickly on prompts.
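The “static binary via scp” workflow from the list above can be sketched in a few lines of shell. This is a hedged sketch, not the author’s actual script: the module name `app`, the `deploy@your-vps` host, and the `systemd` unit are all hypothetical placeholders.

```shell
# Cross-compile a Go backend into one static Linux binary.
# CGO_ENABLED=0 drops libc dependencies so the binary runs on any bare VPS.
# (Assumes a Go module in the current directory; "app" is a placeholder name.)
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app .

# Copy the binary to the VPS and restart it -- host and service are placeholders.
scp app deploy@your-vps:/srv/app/app
ssh deploy@your-vps 'sudo systemctl restart app'
```

The appeal of this approach is that the whole “deploy pipeline” is one binary and one copy command: no containers, no registry, no orchestration bill.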
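The Ollama-first experimentation step could look like the following. A minimal sketch, assuming Ollama is installed and the machine has a GPU with enough VRAM for the 32B model mentioned in the post; the example prompt is made up.

```shell
# Pull the model named in the post, then iterate on prompts interactively.
ollama pull qwen3:32b
ollama run qwen3:32b "Summarize this support ticket in one sentence."
```

Once a prompt works, the post’s suggested next step is moving batch workloads to vLLM on the same GPU to avoid per-call API fees.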

Hottest takes

"You should always use a swap file/partition" — hackingonempty
"Python is completely fine for the backend" — codemog
"missing the Claude Code or Coding Agent part imo" — komat
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.