January 5, 2026
Cage match: Python vs your files
Sandboxing Untrusted Python
Internet erupts over how to lock up rogue Python code
TLDR: Python can't safely run unknown code, so devs push for isolating AI agents at the system level. Comments flare: Docker isn't a real sandbox, docs are fuzzy on WebAssembly, and built‑ins can be hijacked—making isolation and least‑privilege the must‑have safety net.
Python’s own makers basically admit it: you can’t safely run strangers’ code inside regular Python. Untrusted code can peek into the interpreter’s own guts and undo whatever locks you set. The article insists the real fix is isolation: confine agents to one folder, allow only approved websites, hand out read-only keys, and run the whole thing in a sandbox.
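To make the "peek into itself" point concrete, here is a minimal sketch (not taken from the article) of the classic escape: even with `__builtins__` emptied out, untrusted code can walk Python's object graph from an innocent tuple back to `os.system`. The exact class it finds (`os._wrap_close` here) depends on what the interpreter has already imported, so treat it as illustrative.

```python
# Illustrative sketch: a "sandbox" that strips builtins is still escapable,
# because every object links back to the full object graph via __class__.
untrusted_code = """
for cls in ().__class__.__base__.__subclasses__():    # all subclasses of object
    if cls.__name__ == "_wrap_close":                 # a helper class defined in os.py
        cls.__init__.__globals__["system"]("echo escaped the sandbox")
        break
"""
exec(untrusted_code, {"__builtins__": {}})  # no builtins provided, yet os.system still runs
```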
Cue the comments cage fight. One camp shouts “Docker isn’t a sandbox!”, with petters smacking down the idea that a container is a safe jail. Another camp just wants clarity: ptspts asks what the sandbox would even be: WebAssembly (WASM) running in a browser-style virtual machine, or a Python interpreter compiled to WASM? Meanwhile, maxloh waves a demo: even if you clamp down built-ins, “you could hijack them,” complete with a cheeky map() tattletale.
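maxloh's actual demo isn't reproduced here, but the idea looks roughly like this hypothetical sketch: untrusted code quietly replaces a shared built-in, and any later "trusted" call that goes through it leaks data.

```python
import builtins

# Hypothetical sketch of the "hijacked built-in" idea: untrusted code wraps
# map() so that anything the host later maps over gets reported elsewhere.
_real_map = builtins.map

def tattletale_map(func, *iterables):
    snooped = [list(it) for it in iterables]          # copy the inputs
    print("exfiltrating:", snooped)                   # stand-in for a real call home
    return _real_map(func, *snooped)

builtins.map = tattletale_map                         # hijack the shared built-in

# Later, "trusted" host code unknowingly goes through the hijacked version:
secrets = ["API_KEY=abc123"]
list(map(str.upper, secrets))                         # the secret leaks on the way through
```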
Then come the alternatives and plugs. incognito124 drops Judge0, a hosted sandbox. bArray isn’t sold on WASM, eyeing quick‑boot virtual machines like QEMU instead. Memes fly: “Put Python in a terrarium”, “WASM wizards vs VM vikings,” and “Do not feed AI after midnight.”
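Jokes aside, the Judge0 route is the most concrete of the alternatives: hand the code to a separate service instead of your own interpreter. A rough sketch of a submission follows; the localhost URL is a placeholder, language_id 71 is Python 3 in Judge0 CE deployments, and the endpoint and field names follow Judge0's public API docs, so verify against your instance.

```python
import requests

# Rough sketch of running untrusted code through a Judge0 instance instead of
# your own interpreter. JUDGE0_URL is a placeholder; check language IDs against
# your deployment's /languages endpoint.
JUDGE0_URL = "http://localhost:2358"

submission = {
    "language_id": 71,                     # Python 3 in Judge0 CE
    "source_code": "print(sum(range(10)))",
}
resp = requests.post(
    f"{JUDGE0_URL}/submissions?base64_encoded=false&wait=true",
    json=submission,
    timeout=30,
)
result = resp.json()
print(result.get("stdout"), result.get("status"))
```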
Under the jokes is real fear: as AI agents browse the web, prompt injection (sneaky hidden instructions) can trick them into reading your secrets. The crowd’s verdict? You can’t prompt-engineer your way out; build stronger cages. Least-privilege rules are the new seatbelts.
Key Points
- Python’s introspective, mutable runtime allows bypassing language-level sandbox restrictions on untrusted code.
- Removing dangerous built-ins (e.g., eval, __import__) can be circumvented via object graphs, frames, and tracebacks.
- OS-level isolation (e.g., Docker or VMs) is more reliable than attempting to sandbox Python within the language.
- AI agents and LLMs introduce security risks like prompt injection, increasing the need for isolation and least privilege.
- A layered isolation strategy is proposed: filesystem confinement, network allowlists, credential scoping, and runtime sandboxing to protect resources (see the sketch below).
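None of these layers require exotic tooling. Here is a rough sketch of the "stronger cage" built from standard Docker hardening flags; the image, mount path, script name, env var, and read-only token are hypothetical placeholders, and a real network allowlist would need an egress proxy rather than the blanket `--network=none` used here.

```python
import subprocess

# Sketch of a layered cage for an untrusted agent script, per the key points:
# one mounted folder, no network, a scoped read-only credential, and a
# locked-down container runtime with resource ceilings.
AGENT_DIR = "/srv/agent-workspace"        # hypothetical: the one folder it may touch
READ_ONLY_TOKEN = "ro-xxxx"               # hypothetical: a scoped, read-only credential

cmd = [
    "docker", "run", "--rm",
    "--network=none",                     # strictest network layer; swap for an allowlisting proxy
    "--read-only",                        # immutable root filesystem
    "--tmpfs", "/tmp",                    # scratch space that vanishes afterwards
    "-v", f"{AGENT_DIR}:/workspace",      # the only writable host path
    "--cap-drop=ALL",                     # no Linux capabilities
    "--security-opt", "no-new-privileges:true",
    "--pids-limit", "128",                # cap fork bombs
    "--memory", "512m", "--cpus", "1",    # resource ceilings
    "-e", f"API_TOKEN={READ_ONLY_TOKEN}", # least-privilege credential only (name is hypothetical)
    "python:3.12-slim",
    "python", "/workspace/agent.py",      # hypothetical agent entry point
]
subprocess.run(cmd, check=True)
```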