Problem
When running AI agent workloads inside smolvm microVMs, there's no way to enforce network access controls that can't be bypassed from inside the VM.
The libkrun kernel ships without netfilter modules, so iptables/nftables are unavailable. Other kernel-level mechanisms (Landlock TCP connect, cgroup BPF SOCK_ADDR, seccomp destination filtering) are also unavailable or insufficient in the libkrun kernel. This leaves only userspace enforcement (LD_PRELOAD, Node.js hooks, proxy env vars), all of which can be bypassed by a sufficiently determined process inside the VM — for example, via raw syscalls through Python's ctypes.
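To make the bypass concrete: the issue cites Python's ctypes, but the same class of escape is easy in any compiled language. Here is a minimal Rust sketch (x86_64 Linux, `libc` crate assumed) that issues connect() as a raw syscall, so there is no libc symbol for an LD_PRELOAD shim to intercept:

```rust
// Bypass sketch (x86_64 Linux, `libc` crate assumed): connect() is
// issued as a raw syscall, so userspace hooks on the libc connect()
// symbol never see it.
use std::mem;
use std::net::Ipv4Addr;

fn main() {
    unsafe {
        // socket(AF_INET, SOCK_STREAM, 0) via raw syscall
        let fd = libc::syscall(libc::SYS_socket, libc::AF_INET, libc::SOCK_STREAM, 0) as i32;

        let mut addr: libc::sockaddr_in = mem::zeroed();
        addr.sin_family = libc::AF_INET as libc::sa_family_t;
        addr.sin_port = 443u16.to_be();
        addr.sin_addr.s_addr = u32::from(Ipv4Addr::new(1, 1, 1, 1)).to_be();

        // connect() via raw syscall: nothing here for userspace hooks to catch
        let rc = libc::syscall(
            libc::SYS_connect,
            fd,
            &addr as *const libc::sockaddr_in as *const libc::sockaddr,
            mem::size_of::<libc::sockaddr_in>() as libc::socklen_t,
        );
        println!("raw connect() returned {rc}");
    }
}
```

Statically linked, such a binary defeats LD_PRELOAD entirely.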
The only place to enforce network policy that is truly immune to userspace bypass is at the VMM level, where libkrun's TSI (Transparent Socket Impersonation) processes connect() syscalls on the host side.
Proposed Solution
Add a per-VM network policy configuration that filters outbound connections at the VMM/TSI layer, before the host processes the connection.
Option A: Domain allowlist (preferred)
Since TSI already resolves hostnames during connect(), it could check the destination against an allowlist before completing the connection:
```
# At VM creation time
smolvm microvm create my-vm --net \
  --allow-domain api.anthropic.com \
  --allow-domain github.com \
  --allow-domain "*.npmjs.org"

# Or via a config file
smolvm microvm create my-vm --net --network-policy ./allowed-domains.txt
```

Connections to non-allowed destinations would return EACCES or ECONNREFUSED to the guest.
Option B: IP/CIDR allowlist
A simpler alternative that filters at the IP level after DNS resolution:
```
smolvm microvm create my-vm --net \
  --allow-ip 127.0.0.0/8 \
  --allow-ip 104.18.0.0/16
```
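CIDR containment is cheap to check on the host side. A sketch using the `ipnet` crate (an assumption on my part; not something libkrun is known to depend on):

```rust
// Sketch: match a destination IP against a per-VM CIDR allowlist.
use ipnet::IpNet;
use std::net::IpAddr;

fn ip_allowed(allow: &[IpNet], dest: IpAddr) -> bool {
    allow.iter().any(|net| net.contains(&dest))
}

fn main() {
    let allow: Vec<IpNet> = ["127.0.0.0/8", "104.18.0.0/16"]
        .iter()
        .map(|s| s.parse().expect("valid CIDR"))
        .collect();

    let dest: IpAddr = "104.18.32.7".parse().unwrap();
    assert!(ip_allowed(&allow, dest)); // inside 104.18.0.0/16

    let blocked: IpAddr = "142.250.80.14".parse().unwrap();
    assert!(!ip_allowed(&allow, blocked)); // would surface as ECONNREFUSED in the guest
}
```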
Option C: Block-all with localhost-only exception

The simplest version — just a flag to restrict all outbound to localhost only, forcing traffic through a proxy running inside the VM:
```
smolvm microvm create my-vm --net --outbound-localhost-only
```

This would be sufficient for the proxy-based allowlist pattern, where a proxy on 127.0.0.1:8888 inside the VM is the sole authorized exit point.
Why This Matters
smolvm is well-suited for running AI coding agents (Claude Code, etc.) in sandboxed environments. A key safety requirement is restricting which external services the agent can reach. Today this requires fragile userspace workarounds:
- LD_PRELOAD hooks — bypassed by static binaries or raw syscalls
- Proxy env vars — bypassed by unsetting them
- Node.js `--require` hooks — only cover Node.js processes
- DNS filtering — impossible in libkrun (no UDP; DNS is transparent via TSI)
VMM-level enforcement would make network policy as robust as filesystem isolation — a foundational security boundary rather than a best-effort filter.
Kernel-Level Alternatives Tested (All Failed)
For context, here's what we tested inside the VM before concluding that VMM-level enforcement is the only viable path:
| Mechanism | Result |
|---|---|
| `iptables` / netfilter | No kernel modules available |
| Landlock (v4 TCP connect) | ENOSYS — not compiled in |
| BPF `CGROUP_SOCK_ADDR` | Not available |
| BPF `CGROUP_SKB` | Loads but can't attach to cgroups |
| seccomp-bpf | Available but can't dereference sockaddr pointers to filter by destination |
| Network namespaces | TSI bypasses namespace isolation |
| `/etc/ld.so.preload` | musl doesn't support it |
Implementation Notes
The natural place for this filtering is in libkrun's TSI layer, specifically where it handles connect() syscalls from the guest. The TSI already sees the destination address/hostname at this point — the filter would be an additional check before proceeding with the host-side connection.
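As a rough sketch of that check, covering the Option A and Option C semantics (all types and names here are illustrative, not actual libkrun internals):

```rust
// Sketch only: hypothetical shape of a policy check in the TSI
// connect() path. NetworkPolicy, Verdict, and check_connect are
// illustrative names, not libkrun APIs.
use std::net::IpAddr;

/// Per-VM outbound policy, loaded at VM creation time.
pub struct NetworkPolicy {
    /// Allowed hostnames; a leading "*." matches any subdomain.
    allowed_domains: Vec<String>,
    /// Option C: allow loopback destinations only.
    localhost_only: bool,
}

pub enum Verdict {
    Allow,
    /// Surfaced to the guest as EACCES or ECONNREFUSED.
    Deny,
}

impl NetworkPolicy {
    fn domain_allowed(&self, host: &str) -> bool {
        self.allowed_domains.iter().any(|pat| {
            if let Some(suffix) = pat.strip_prefix("*.") {
                // "*.npmjs.org" matches "registry.npmjs.org" but not "npmjs.org"
                host.len() > suffix.len()
                    && host.ends_with(suffix)
                    && host.as_bytes()[host.len() - suffix.len() - 1] == b'.'
            } else {
                host == pat.as_str()
            }
        })
    }

    /// Called with the destination TSI already has in hand, before the
    /// host-side connect() is performed.
    pub fn check_connect(&self, host: Option<&str>, dest: IpAddr) -> Verdict {
        if self.localhost_only {
            return if dest.is_loopback() { Verdict::Allow } else { Verdict::Deny };
        }
        match host {
            Some(h) if self.domain_allowed(h) => Verdict::Allow,
            _ => Verdict::Deny,
        }
    }
}
```

Checking the hostname TSI has already resolved, rather than re-resolving it at policy time, keeps the decision consistent with the address the connection will actually use.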
A configuration file or CLI flags on `smolvm microvm create` could pass the policy down to libkrun at VM creation time.