October 31, 2025

Laptop vs Cloud: pick your fighter

Ask HN: Who uses open LLMs and coding assistants locally? Share setup and laptop

Can your laptop pull off AI coding, or do you need a mini data center?

TLDR: Hackers debated whether open-source AI coding tools can really run well on laptops. Some insist only servers or the cloud deliver; others say a 128GB MacBook works surprisingly well, while minimalists stick to browser summaries. The split: tinkerers who want control vs pragmatists who just want to get work done.

Andrea asked the Hacker News crowd how folks run open-source AI coding helpers locally—on real laptops, not corporate cloud. Cue fireworks. The loudest chorus says: laptops just don't have the muscle. One commenter runs a home server with big graphics cards and swaps models on the fly, while another jokes that local rigs that truly "feel good" are closer to carry-on luggage than notebooks. Tools name-dropped include Ollama, LM Studio, Aider, and llama.cpp, with models like "gpt-oss-120b." If acronyms make you blink: LLM means "large language model," and a GPU is a graphics processing unit—the chip that does the heavy math for AI.
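For the curious, here's roughly what one of those local setups looks like. A minimal, hedged sketch using Ollama's Modelfile format—the model name comes from the thread, but the tag, system prompt, and parameter values below are illustrative assumptions, not anyone's actual config:

```
# Modelfile — a local coding assistant layered on a downloaded model
# (model tag assumed; check `ollama list` / the Ollama library for the real one)
FROM gpt-oss:120b

# Lower temperature for more deterministic code suggestions (illustrative value)
PARAMETER temperature 0.2

SYSTEM "You are a concise coding assistant. Prefer minimal, working examples."
```

You'd register and run it with `ollama create code-helper -f Modelfile` then `ollama run code-helper`; whether that feels "shockingly usable" depends mostly on how much RAM you can throw at it.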

Then came the twist: a 128GB RAM MacBook Pro running gpt-oss-120b was called “shockingly usable,” with one user saying they’d use it instead of OpenAI’s website. The Ollama app even adds web search so it can fetch fresh info—like giving your offline robot a quick trip to Google. Cloud loyalists pushed back hard: “no local setup touches the cloud,” declaring that quarter-million-dollar data center machines crush anything you can put on a desk.

Comic relief arrived from minimalists who just click the browser's auto-summary and move on, and from the eternal "non‑Nvidia driver" saga. The vibe? A spicy split between tinkerers who want control and pragmatists who want results—MacBook Maximalists vs Mega-Server Monarchs, with a side of power envy and backpack-data-center memes.

Key Points

  • The post seeks real-world workflows for using open-source LLMs and coding assistants locally on laptops.
  • It asks which local models/runtimes (e.g., Ollama, LM Studio) and assistant integrations (e.g., VS Code plugins) are used.
  • It requests detailed laptop hardware specifications and operating systems, plus performance observations.
  • It seeks information on tasks handled (code completion, refactoring, debugging, code review) and reliability.
  • The author is conducting an investigation and intends to share findings once complete.

Hottest takes

"I sometimes still code with a local LLM but can't imagine doing it on a laptop" — lreeves
"we set up gpt-oss-120b on a 128GB RAM Macbook pro and it is shockingly usable" — juujian
"for productive coding use no local LLM approaches cloud and it's not even close" — sho
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.