Show HN: Virtual SLURM HPC cluster in a Docker Compose

Docker spins up a 'mini supercomputer' — HN split on toy vs real deal

TL;DR: A Docker Compose setup spins up a mini HPC cluster with job scheduling (SLURM) and parallel computing (MPI) built in. The community loves it for testing and teaching, debates whether it's production-ready, and points to cloud alternatives—making it a flashy sandbox that might grow into more.

A tiny “supercomputer” you can launch with one command? That’s the promise of vHPC, a Docker-based setup that spins up a High-Performance Computing (HPC) cluster with SLURM (a job scheduler) and MPI (a message-passing standard for running tasks in parallel). The crowd went wild—then immediately split into camps. The “production or playground?” debate led the charge, with one user asking whether this is dev-only or can scale to the real world, name-dropping Open OnDemand like a referee stepping into the ring. Another cheered the end of “ten-year-old automation scripts,” dunking on the old-school pile of Ansible/Chef/Puppet as if it were a museum exhibit.

Fans loved the SSH-first workflow—exactly what many researchers use with vim, tmux, and live log tailing—and one wistful commenter wished they’d had this during their master’s thesis. Meanwhile, the “cloud folks” dropped links to Magic Castle and AWS ParallelCluster, suggesting vHPC is joining a lively ecosystem rather than inventing the wheel. Security warnings (auto-generated keys, localhost-only access) became meme fuel: “not for production, unless you like surprises.”

Verdict from the peanut gallery: great for learning, testing, and rapid prototyping. Whether it’s the next big thing or just the nicest sandbox depends on your appetite for Dockerized science and how much you trust a cluster that fits in a compose file.

Key Points

  • vHPC provides a Docker Compose–based virtual HPC cluster running SLURM with OpenMPI on Rocky Linux 9.
  • Default setup includes one login node, two worker nodes (each 4 vCPU, 2048 MB RAM), and an optional MariaDB node for full job accounting.
  • User management and SLURM configuration are shared via mounted volumes; users are synchronized from head to workers.
  • Cluster access is via SSH with auto-generated keys and localhost port bindings, with password authentication as fallback.
  • Runtime customization allows installing extra packages at container startup without rebuilding images; example SLURM/MPI commands are provided.
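The default topology in the Key Points can be sketched as a Compose file. Everything below is an illustrative assumption—service names, image tags, volume names, and port numbers are guesses at the described layout, not the project's actual file:

```yaml
# Hypothetical sketch of the vHPC topology described above.
# Service and volume names are assumptions, not copied from the project.
services:
  login:                       # login/head node: SSH entry point
    image: rockylinux:9
    hostname: login
    ports:
      - "127.0.0.1:2222:22"    # localhost-only SSH binding, per the security notes
    volumes:
      - slurm-etc:/etc/slurm   # shared SLURM configuration
      - home:/home             # users synchronized from head to workers

  worker1:                     # compute node (4 vCPU, 2048 MB RAM)
    image: rockylinux:9
    hostname: worker1
    cpus: 4
    mem_limit: 2048m
    volumes:
      - slurm-etc:/etc/slurm
      - home:/home

  worker2:                     # second compute node, same shape
    image: rockylinux:9
    hostname: worker2
    cpus: 4
    mem_limit: 2048m
    volumes:
      - slurm-etc:/etc/slurm
      - home:/home

  mariadb:                     # optional: database backing full job accounting
    image: mariadb

volumes:
  slurm-etc:
  home:
```

The mounted `slurm-etc` and `home` volumes are how a setup like this typically shares SLURM config and user accounts between the head node and workers without baking them into the images.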
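The "example SLURM/MPI commands" mentioned above generally take the shape of a batch script submitted to the scheduler. This is a generic sketch, not taken from the project—the job name, node counts, and binary path are placeholders:

```shell
#!/bin/bash
# Hypothetical SLURM batch script (hello.sbatch) — all values are
# placeholders; adjust to the cluster's actual partition and binary.
#SBATCH --job-name=mpi-hello
#SBATCH --nodes=2              # spread across both worker nodes
#SBATCH --ntasks-per-node=4    # one task per vCPU

# Launch the MPI program inside SLURM's allocation (OpenMPI)
mpirun ./mpi_hello
```

Submitted with `sbatch hello.sbatch`, monitored with `squeue`; a quick `srun -N2 hostname` is the usual sanity check that both workers answer the scheduler.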

Hottest takes

“can I really just make an arbitrarily large production HPC cluster with it?” — ZeroCool2u
“a gigantic broken pile of ansible/chef/puppet that hasn’t been touched in 10 years” — robot-wrangler
“Can you access with ssh?” — janeway
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.