February 13, 2026
Claws out, site down
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills
New DIY AI drops, site hiccups, costs worry users, and OpenClaw fans ask what's different
TLDR: Moltis is a new open-source AI assistant you can run yourself, promising memory, plugins, and self-made skills. The launch sparked “Claw Wars” with OpenClaw comparisons, site-down snark, worries about token costs and security for non‑tech users, and confusion over available models—exciting, but very much early-days alpha energy.
Meet Moltis, a do‑it‑yourself AI assistant you can run at home or in the cloud—one download, a web page, and boom: a chatty helper with memory, plugins, and even “self‑extending” skills that it can invent mid‑conversation. That’s the pitch. The comments? Pure popcorn.
The thread instantly turned into Claw Wars: fans of rival project OpenClaw asked what actually sets Moltis apart. One commenter begged for clarity—are they basically the same thing? Another praised Moltis’s “single binary” simplicity and local models (you can run it offline), while warning that for non‑tech folks, security and tool permissions are a minefield. And the budget talk got loud fast—without a flat subscription for AI usage, token bills could balloon.
Then came drama fuel: a user deadpanned, “moltis.org is down fwiw,” as if the launch literally face‑planted on cue. Meanwhile, a veteran of OpenClaw complained about chats getting messy as conversations are “compacted” to save tokens—hoping Moltis delivers better continuity so your AI doesn’t forget mid‑task. Others tripped over model lists (“Why do I only see certain models?”), stirring confusion about access versus defaults.
Despite the squabbles, the crowd loved the Rust build, the local‑first vibe, and GitHub transparency. Verdict: exciting tool, alpha vibes—bring patches, patience, and a spending cap.
Key Points
- Moltis is an open-source, Rust-based personal AI assistant distributed as a single self-contained binary.
- It supports local and cloud LLMs, with streaming-first responses, multi-channel access (Web UI, Telegram, API, PWA), and voice via multiple TTS/STT providers.
- Long-term memory uses hybrid vector and full-text search with local embeddings (GGUF), caching, and session export.
- Security features include HTTPS by default, passkeys (WebAuthn), scoped API keys, human-in-the-loop approvals, origin validation to block cross-site WebSocket hijacking (CSWSH), and Docker-based sandboxing with SSRF protections.
- Extensibility and operations include Skills/Hooks and MCP tool servers, cron scheduling, TOML config, provider metrics, Prometheus/OpenTelemetry observability, and SQLite persistence; the project is MIT-licensed and labeled alpha.
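The "hybrid vector and full-text search" point is worth unpacking: the idea is to rank memories by semantic similarity (embeddings) and by keyword match, then merge the two rankings. Moltis doesn't document how it fuses them, so here is a minimal sketch in Python using reciprocal-rank fusion, a common merging technique; the document shape, the toy keyword scorer, and the sample data are all hypothetical, not Moltis's actual code:

```python
import math

def cosine(a, b):
    # Semantic side: cosine similarity between embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Full-text side (toy stand-in for a real FTS engine):
    # fraction of query terms that appear in the text.
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0

def hybrid_search(query, query_vec, docs, k=60, top_n=3):
    """Merge a vector ranking and a keyword ranking with
    reciprocal-rank fusion: score = sum of 1/(k + rank)."""
    vec_rank = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    kw_rank = sorted(docs, key=lambda d: keyword_score(query, d["text"]), reverse=True)
    scores = {}
    for ranking in (vec_rank, kw_rank):
        for i, d in enumerate(ranking):
            scores[d["id"]] = scores.get(d["id"], 0.0) + 1.0 / (k + i + 1)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical memory entries with 2-d embeddings for illustration.
docs = [
    {"id": "a", "text": "rust binary build", "vec": [0.7, 0.7]},
    {"id": "b", "text": "token cost budget", "vec": [0.0, 1.0]},
    {"id": "c", "text": "memory and vector search", "vec": [1.0, 0.0]},
]
results = hybrid_search("vector search memory", [1.0, 0.0], docs)
```

The appeal of rank fusion over blending raw scores is that cosine similarities and full-text scores live on different scales; using ranks sidesteps any normalization step.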