April 19, 2026
3D on Mac, tea in comments
Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed
Macs get 3D magic—fans cheer, skeptics say it’s slow and “useless”
TLDR: A new port runs Microsoft’s TRELLIS.2 image-to-3D tool natively on Apple Silicon—no Nvidia required—but it’s slower and missing textures. Commenters are split: some applaud the Mac win, others call the results weak, gripe about missing demos, and argue that speed and quality still favor existing tools.
Apple fans are popping confetti while skeptics reach for the red pen. A dev just dropped a port of Microsoft’s TRELLIS.2 that turns a single photo into a 3D model—now running natively on Apple Silicon, no Nvidia needed. It reportedly spits out chunky 400K+ vertex meshes in around 3.5 minutes on an M4 Pro and saves ready-to-use files, all from a simple script. The repo is up on GitHub, but the comments? That’s where the show starts.
One camp is shrugging, with villgax reminding everyone this was always possible with Apple’s GPU backend—Hugging Face just doesn’t host it, and running on Mac can be “10x worse” speed-wise. Another camp is throwing shade at quality: gondar says the model “is not very good,” even calling TRELLIS “useless tier,” while hyping the closed service at meshy.ai. Others are nitpicking the presentation, with kennyloginz mocking the lack of demo images: “So much effort, but no examples.”
Meanwhile, boosters offered a clean “Well done,” and a few Mac loyalists celebrated ditching the Nvidia tax. The fine print—no baked textures, possible mesh holes, hefty memory needs—fueled the drama. Verdict from the crowd: cool milestone for Mac users, but the duel over speed vs. accessibility and open-source vs. quality is the real headline.
Key Points
- Port of Microsoft’s TRELLIS.2 image-to-3D model to Apple Silicon using PyTorch MPS enables native macOS inference without NVIDIA GPUs.
- Generates 400K+ vertex meshes from a single image in about 3.5 minutes on an M4 Pro, peaking at ~18GB of memory.
- CUDA-only dependencies are replaced: flex_gemm → pure-PyTorch sparse 3D conv; o_voxel._C → Python mesh extraction; flash_attn → PyTorch SDPA; cumesh and nvdiffrast stubbed.
- Setup requires macOS on Apple Silicon (M1+), Python 3.11+, ~24GB unified memory, ~15GB for weights, and gated access to facebook/dinov3 and briaai/RMBG-2.0 via Hugging Face.
- Limitations: no texture export, hole filling disabled, ~10x slower than CUDA (sparse conv bottleneck), and inference-only (no training).
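For readers curious what the flash_attn → SDPA swap in the key points actually looks like, here is a minimal sketch (not the port’s actual code) of the general technique: replacing a CUDA-only attention kernel with PyTorch’s built-in `scaled_dot_product_attention`, which dispatches to whatever backend is available—CUDA, MPS, or plain CPU. The `attention` wrapper and tensor sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """Portable stand-in for a flash_attn-style call.

    SDPA expects tensors shaped (batch, heads, seq_len, head_dim);
    note that flash_attn's own API uses (batch, seq_len, heads, head_dim),
    so a real swap also transposes the inputs accordingly.
    """
    return F.scaled_dot_product_attention(q, k, v)

# Pick Apple's Metal backend when present, otherwise fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# Illustrative sizes: 1 batch, 8 heads, 16 tokens, 64-dim heads.
q = k = v = torch.randn(1, 8, 16, 64, device=device)
out = attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```

The same pattern—querying `torch.backends.mps.is_available()` and routing through device-agnostic PyTorch ops instead of custom CUDA extensions—is what makes this kind of port possible, at the speed cost the comments are arguing about.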