1. Introduction to srv
srv is a self-hosted control-plane service for creating Firecracker microVMs over SSH on a Tailscale tailnet.
It exposes an SSH command surface on one Linux host and manages Firecracker microVMs behind it. You create, inspect, stop, start, resize, back up, and delete VMs the same way you would run a remote command — through `ssh srv <command>`.
```
ssh srv new demo
# demo created — state: provisioning
# inspect: ssh srv inspect demo
# connect: ssh root@demo
```
1.1 Where srv fits
srv is useful for both short-lived sandboxes and persistent isolated services.
- Throwaway debug VMs — spin up an isolated environment, break things, and delete it without affecting the host
- Sandboxed agent VMs — give AI coding agents their own cgroup-limited VM with per-instance Tailscale identity and a scoped Zen API proxy
- Dev/test environments — fast reflink-based clones from a single base image, with backup/restore for instant reset
- Isolated workloads — run services in separate microVMs with per-VM networking, auth, and resource limits
All VMs get a full Linux system with systemd, a real kernel, and their own Tailscale identity. Where containers are not enough — package managers, services that need root, bespoke networking configurations, predictable boot and teardown — srv gives you real isolated machines in seconds.
1.2 Conceptual architecture
```
                Tailscale tailnet
                       │
                 ┌─────┴─────┐
                 │   tsnet   │  (joins tailnet as "srv")
                 │  :22/tcp  │  (SSH API surface)
                 └─────┬─────┘
                       │
        ┌──────────────┼──────────────┐
        │              │              │
 ┌──────┴──────┐ ┌─────┴───────┐ ┌────┴────────┐
 │  VM: demo   │ │   VM: ci    │ │  VM: test   │
 │   /30 net   │ │   /30 net   │ │   /30 net   │
 │   cgroup    │ │   cgroup    │ │   cgroup    │
 │  TAP + NAT  │ │  TAP + NAT  │ │  TAP + NAT  │
 └─────────────┘ └─────────────┘ └─────────────┘
```
Key components:
- tsnet — joins the tailnet as `srv` and exposes the control API on tailnet TCP port 22
- gliderlabs/ssh — handles `exec` requests and rejects shell sessions
- SQLite — stores instances, events, command audits, and authorization decisions
- Reflinks — clone the base rootfs for fast per-instance writable disks
- Network helper — root-only process owns TAP creation, iptables MASQUERADE, and FORWARD rules
- VM runner — root-owned process invokes Firecracker through the official jailer, drops to `srv-vm:srv`, and places each VM into its own cgroup v2 leaf
- MMDS — one-off Tailscale auth keys are injected through Firecracker metadata so guests self-bootstrap
- Zen gateway — per-instance HTTP proxy on the guest's gateway IP forwards to OpenCode Zen with the host key
- HTTP integrations — admin-defined host-side HTTP proxies inject headers or auth for selected guests without storing raw secrets inside the VM
1.3 Where you can run it
srv runs on Linux with the following requirements:
- Linux host with cgroup v2 and `/dev/kvm`
- IPv4 forwarding enabled (`net.ipv4.ip_forward=1`)
- `SRV_DATA_DIR` on a reflink-capable filesystem (`btrfs` or reflink-enabled `xfs`)
- Tailscale installed and working on the host
- Tailscale OAuth client credentials with permission to mint auth keys for guest tags
- `ip`, `iptables`, `cp`, and `resize2fs` available on the host
- Official static Firecracker and jailer release pair
1.4 Next steps
2. Installation
2.1 System requirements
- Linux host with cgroup v2 and `/dev/kvm`
- IPv4 forwarding enabled: `net.ipv4.ip_forward=1`
- `SRV_DATA_DIR` on a reflink-capable filesystem (`btrfs` or reflink-enabled `xfs`), with `SRV_BASE_ROOTFS` on the same filesystem
- Tailscale installed and working on the host
- Tailscale OAuth client credentials (`TS_CLIENT_ID` and `TS_CLIENT_SECRET`) with permission to mint auth keys for the configured guest tags
- `ip`, `iptables`, `cp`, and `resize2fs` available on the host
- Official static Firecracker and jailer release pair (the installer can download these for you)
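These checks can be scripted before you start installing. A minimal sketch (not shipped with srv) that prints one line per requirement instead of failing hard — reflink capability of the target filesystem still needs a manual check:

```shell
# Host prerequisite check (illustrative sketch; not part of srv).
check() {
  if eval "$2" >/dev/null 2>&1; then
    echo "ok      $1"
  else
    echo "MISSING $1"
  fi
}
check "/dev/kvm"       'test -c /dev/kvm'
check "cgroup v2"      'test -f /sys/fs/cgroup/cgroup.controllers'
check "ip_forward=1"   '[ "$(sysctl -n net.ipv4.ip_forward)" = 1 ]'
check "ip"             'command -v ip'
check "iptables"       'command -v iptables'
check "cp"             'command -v cp'
check "resize2fs"      'command -v resize2fs'
```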
srv is intentionally a single-host control plane. Clustering, multi-host scheduling, and high availability are out of scope for the current phase.
2.2 Build
Build the three binaries from source:
```
go build ./cmd/srv
go build ./cmd/srv-net-helper
go build ./cmd/srv-vm-runner
```
Or install them to a standard location with the provided installer (see below).
2.3 Build the guest image
The Arch base image builder produces the vmlinux kernel and rootfs-base.img rootfs that srv provisions from.
On an Arch Linux host:
sudo OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh
On a non-Arch host, use the podman workflow:
```
sudo podman run --rm --privileged --network host \
  -v "$PWD":/work \
  -v /var/lib/srv/images/arch-base:/var/lib/srv/images/arch-base \
  -w /work \
  docker.io/library/archlinux:latest \
  bash -lc '
    set -euo pipefail
    pacman -Sy --noconfirm archlinux-keyring
    pacman -Syu --noconfirm arch-install-scripts base-devel bc e2fsprogs rsync curl systemd
    OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh
  '
```
See Building a custom guest image and the guest image reference for details on what the image includes and how to customize it.
2.4 Install and configure
The systemd installer handles binary installation, unit setup, and downloading the official static Firecracker/jailer pair:
sudo ./contrib/systemd/install.sh
Enable IPv4 forwarding:
```
sudo tee /etc/sysctl.d/90-srv-ip-forward.conf >/dev/null <<'EOF'
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```
Write the environment file. At minimum you need Tailscale credentials and guest artifact paths:
sudoedit /etc/srv/srv.env
Key entries to fill in:
```
TS_AUTHKEY=tskey-auth-xxxxxxxxxxxxxxxx
TS_CLIENT_ID=your-oauth-client-id
TS_CLIENT_SECRET=your-oauth-client-secret
TS_TAILNET=your-tailnet.ts.net
SRV_BASE_KERNEL=/var/lib/srv/images/arch-base/vmlinux
SRV_BASE_ROOTFS=/var/lib/srv/images/arch-base/rootfs-base.img
SRV_GUEST_AUTH_TAGS=tag:microvm
```
See the full configuration reference for every variable.
Then start the services:
sudo ./contrib/systemd/install.sh --enable-now
2.5 Validate
Run the end-to-end smoke test to confirm everything works:
sudo ./contrib/smoke/host-smoke.sh
The smoke test verifies systemd units, SSH reachability, guest creation, readiness polling, backup/restore, cgroup limits, and cleanup. It is the supported validation gate after install, restore, and upgrade.
See the smoke test reference for details on overrides and failure artifacts.
2.6 Next steps
- Walkthrough — create your first VM
- Running as a daemon — systemd details
3. Walkthrough
This walkthrough assumes you have completed the installation and the smoke test passes.
3.1 Create a VM
ssh srv new demo
```
demo created — state: provisioning
inspect: ssh srv inspect demo
connect: ssh root@demo
```
The control plane creates a reflink clone of the base rootfs, allocates a /30 network, mints a one-off Tailscale auth key, injects it through MMDS, and boots the VM through the jailer.
With custom sizing:
ssh srv new demo --cpus 4 --ram 8G --rootfs-size 20G
3.2 Check status
ssh srv list
```
NAME  STATE  CPUS  MEMORY   DISK      TAILSCALE
demo  ready  1     1.0 GiB  10.0 GiB  100.64.0.2
```
For detailed info:
ssh srv inspect demo
Machine-readable output for scripting:
```
ssh srv -- --json list
ssh srv -- --json inspect demo
```
3.3 Connect to the VM
Once the VM reports state: ready and shows a Tailscale IP, connect over the tailnet:
ssh root@demo
Because the guest image bootstraps tailscale up --ssh, Tailscale SSH handles authentication without needing per-user OpenSSH keys in the guest.
You can also get the IP from inspect:
```
ssh srv inspect demo
# look for tailscale-ip and tailscale-name fields
```
3.4 View logs
```
ssh srv logs demo
ssh srv logs demo serial
ssh srv logs demo firecracker
ssh srv logs -f demo serial
```
The serial log shows guest boot, bootstrap, and tailscaled output. The Firecracker log shows VMM lifecycle events.
3.5 Stop, restart, and delete
```
ssh srv stop demo
ssh srv start demo
ssh srv restart demo
ssh srv delete demo
```
Stopping a VM does a graceful shutdown. The guest rootfs persists across stop/start cycles.
3.6 Resize a VM
Resize requires the VM to be stopped. CPU and memory can be increased or decreased within limits, while rootfs can only grow:
```
ssh srv stop demo
ssh srv resize demo --cpus 4 --ram 8G --rootfs-size 20G
ssh srv start demo
```
See Resize a VM for details.
3.7 Back up and restore
```
ssh srv stop demo
ssh srv backup create demo
ssh srv backup list demo
ssh srv restore demo
```
See Backup and restore for the full workflow.
3.8 Move a VM between hosts
ssh srv-a export demo | ssh srv-b import
See Export and import for the semantics.
3.9 Next steps
- Instance lifecycle — full command reference
- Networking overview — how VM networking works
- Configuration — all environment variables
4. Architecture
srv is a single-host control plane that manages Firecracker microVMs through an SSH command surface on a Tailscale tailnet.
4.1 Control plane
```
┌────────────────────────────────────────────────────────┐
│ srv process                                            │
│                                                        │
│ ┌──────────┐  ┌──────────────┐  ┌──────────────────┐   │
│ │ tsnet    │──│ gliderlabs/  │──│ authorization    │   │
│ │ :22/tcp  │  │ ssh          │  │ (Tailscale       │   │
│ └──────────┘  └──────────────┘  │  WhoIs)          │   │
│                                 └──────────────────┘   │
│                                                        │
│ ┌──────────┐  ┌──────────────┐  ┌──────────────────┐   │
│ │ SQLite   │  │ reflink      │  │ Tailscale API    │   │
│ │ store    │  │ cloner       │  │ (auth key        │   │
│ └──────────┘  └──────────────┘  │  minting)        │   │
│                                 └──────────────────┘   │
│                                                        │
│ ┌─────────────┐  ┌─────────────────────────────────┐   │
│ │ Zen gateway │  │ per-VM HTTP proxy               │   │
│ │ manager     │  │ (injects SRV_ZEN_API_KEY)       │   │
│ └─────────────┘  └─────────────────────────────────┘   │
└────────────────────────────────────────────────────────┘
        │             │             │
   ┌────┴────┐   ┌────┴────┐   ┌────┴─────┐
   │ net-    │   │ vm-     │   │ Tailscale│
   │ helper  │   │ runner  │   │ coord.   │
   │ (root)  │   │ (root)  │   │ server   │
   └────┬────┘   └────┬────┘   └──────────┘
        │             │
   ┌────┴────┐   ┌────┴─────────────┐
   │ TAP +   │   │ Firecracker      │
   │ iptables│   │ + jailer         │
   │ + NAT   │   │ → srv-vm:srv     │
   └─────────┘   │ → cgroup v2      │
                 └──────────────────┘
```
4.2 Key components
4.2.1 tsnet + gliderlabs/ssh
srv joins the tailnet as the hostname configured in SRV_HOSTNAME (default srv) and listens on the tailnet TCP port configured in SRV_LISTEN_ADDR (default :22). The gliderlabs/ssh library handles SSH exec requests and rejects shell sessions.
Caller identity comes from Tailscale WhoIs data resolved from the incoming tailnet connection — not from the SSH username.
4.2.2 SQLite store
All instance state, event history, command audits, and authorization decisions are stored in SQLite under SRV_DATA_DIR/state/app.db. Migrations run during startup and are additive.
4.2.3 Reflink cloning
When a new VM is created, the base rootfs is cloned using filesystem reflinks (on btrfs or reflink-enabled xfs). This gives each VM its own writable copy without actually copying the data — the copy is instantaneous on a reflink-capable filesystem.
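The clone step amounts to a copy-on-write `cp`. A runnable sketch on throwaway files — note it uses `--reflink=auto`, which falls back to a plain copy on non-reflink filesystems, whereas srv itself needs real reflink support:

```shell
# Demonstrate a reflink-style clone on throwaway files (sketch).
# On btrfs or reflink-enabled xfs, --reflink shares extents instead of
# copying data; auto degrades to a normal copy elsewhere so this runs
# anywhere. Paths are illustrative, not srv's real layout.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/rootfs-base.img" bs=1M count=4 status=none
cp --reflink=auto "$tmp/rootfs-base.img" "$tmp/rootfs.img"
ls -l "$tmp"
```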
4.2.4 Network helper
A root-only helper process owns all TAP device creation, iptables MASQUERADE rules, and FORWARD rules. The main srv process communicates with it over a unix socket.
4.2.5 VM runner
A root-owned process invokes Firecracker through the official jailer binary. After the jailer sets up the chroot and drops privileges, the microVM process runs as srv-vm:srv. Each VM is placed into its own cgroup v2 leaf under firecracker-vms/<name> with enforced limits.
4.2.6 Tailscale integration
For each new VM, the control plane:
- Mints a one-off Tailscale auth key using the configured OAuth credentials and tags
- Injects the key into the VM's MMDS payload
- The guest bootstrap service reads the key and runs `tailscale up --auth-key=... --ssh`
On warm reboots (after a stop + start or restart), the guest reuses its persisted tailscaled state instead of minting a new key.
4.2.7 Zen gateway
When SRV_ZEN_API_KEY is set, srv binds a per-instance HTTP proxy on each VM's gateway IP and SRV_ZEN_GATEWAY_PORT. The proxy forwards guest requests to the upstream Zen API while injecting the host key. See Zen gateway for details.
4.3 Data paths
```
SRV_DATA_DIR/
├── state/
│   ├── app.db              # SQLite store
│   ├── tsnet/              # Tailscale persistent state
│   └── host_key            # SSH host key
├── images/
│   └── arch-base/
│       ├── vmlinux         # Guest kernel
│       └── rootfs-base.img # Base rootfs (reflink source)
├── instances/
│   └── <name>/
│       ├── rootfs.img      # Writable reflink clone
│       ├── serial.log      # Serial console output
│       └── firecracker.log # VMM log
├── backups/
│   └── <name>/
│       └── <backup-id>/
│           └── rootfs.img  # Backup copy
├── jailer/                 # Jailer workspaces
└── .snapshots/             # btrfs host snapshots
    └── <timestamp>/
```
4.4 Service architecture
| Service | User | Role |
|---|---|---|
| `srv.service` | `srv` | Main control plane (tsnet, SSH, SQLite, API) |
| `srv-net-helper.service` | `root` | TAP, iptables, NAT management |
| `srv-vm-runner.service` | `root → srv-vm:srv` | Firecracker invocation, cgroup management |
The srv-vm-runner process starts as root, and the jailer drops the Firecracker process to srv-vm:srv within each VM's cgroup v2 leaf.
5. SSH command reference
All srv commands are invoked through the SSH surface. The service treats SSH as command transport only — there is no shell session.
5.1 Syntax
ssh srv <command> [args]
For machine-readable output, use --json with non-streaming instance and backup commands. With OpenSSH, terminate local option parsing first:
```
ssh srv -- --json list
ssh srv -- --json inspect demo
ssh srv -- --json status
```
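For longer scripts, a small local wrapper keeps the `--` quirk in one place. A convenience sketch, not part of srv — it assumes `srv` resolves over your tailnet as in the examples above:

```shell
# Local wrapper: call srv like a regular CLI from scripts.
srv() { ssh srv -- "$@"; }

# usage (commented out so the definition can be sourced anywhere):
#   srv --json list
#   srv inspect demo
```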
5.2 Instance commands
| Command | Description |
|---|---|
| `new <name>` | Create a new VM |
| `new <name> --integration <name>` | Create and enable one or more existing integrations (admin only) |
| `new <name> --cpus <n>` | Create with custom vCPU count |
| `new <name> --ram <size>` | Create with custom memory (2G, 512M, or MiB integer) |
| `new <name> --rootfs-size <size>` | Create with custom rootfs size |
| `list` | Show visible VMs (all for admins, own for regular users) |
| `inspect <name>` | Show VM details and event history |
| `logs <name>` | View serial log (default) |
| `logs <name> serial` | View serial log |
| `logs <name> firecracker` | View Firecracker VMM log |
| `logs -f <name> [serial\|firecracker]` | Follow log output |
| `top [--interval <duration>]` | Watch live per-VM CPU, memory, disk, and network usage; run with `ssh -t` and press `q` to exit |
5.3 Lifecycle commands
| Command | Description |
|---|---|
| `start <name>` | Start a stopped VM |
| `stop <name>` | Stop a running VM (graceful shutdown) |
| `restart <name>` | Stop and start a VM |
| `delete <name>` | Delete a VM and all its resources |
5.4 Resize command
| Command | Description |
|---|---|
| `resize <name> --cpus <n>` | Resize vCPUs on a stopped VM |
| `resize <name> --ram <size>` | Resize memory on a stopped VM |
| `resize <name> --rootfs-size <size>` | Resize rootfs (stopped VM, grow-only) |
All flags can be combined. Omitted flags keep the current value.
5.5 Backup commands
| Command | Description |
|---|---|
| `backup create <name>` | Create a backup from a stopped VM |
| `backup list <name>` | List backups for a VM |
| `restore <name> <backup-id>` | Restore a stopped VM from a backup |
5.6 Transfer commands
| Command | Description |
|---|---|
| `export <name>` | Stream a stopped VM as a tar artifact to stdout |
| `import` | Read a tar artifact from stdin and recreate the VM |
Usage: `ssh srv-a export demo | ssh srv-b import`
5.7 Host commands
| Command | Description |
|---|---|
| `status` | Admin-only host capacity and allocation summary |
| `snapshot create` | Admin-only host-local btrfs snapshot of `SRV_DATA_DIR` |
5.8 Integration commands
All integration commands are admin-only.
| Command | Description |
|---|---|
| `integration list` | List configured integrations |
| `integration inspect <name>` | Show integration target, auth mode, header references, and timestamps |
| `integration add http <name> --target <url>` | Create an HTTP integration |
| `integration add http <name> --target <url> --header NAME:VALUE` | Add a static upstream header |
| `integration add http <name> --target <url> --header-env NAME:SRV_SECRET_FOO` | Add an env-backed upstream header |
| `integration add http <name> --target <url> --bearer-env SRV_SECRET_FOO` | Inject bearer auth from a host env var |
| `integration add http <name> --target <url> --basic-user USER --basic-password-env SRV_SECRET_BAR` | Inject basic auth from host-managed credentials |
| `integration delete <name>` | Delete an integration that is no longer enabled on any VM |
| `integration enable <vm> <name>` | Enable an integration for a VM |
| `integration disable <vm> <name>` | Disable an integration for a VM |
| `integration list-enabled <vm>` | List integrations currently enabled for a VM |
5.9 Notes
- `new` accepts `--cpus`, `--ram`, and `--rootfs-size` in any combination
- `new` also accepts repeated `--integration <name>` flags; the create request fails if any requested integration cannot be enabled
- `resize` requires the VM to be stopped; CPU and RAM may increase or decrease within limits, while rootfs is grow-only
- `resize`, `backup`, and `restore` all require the VM to be stopped
- Backups are tied to the original VM record — they cannot be restored onto a different VM
- Export requires the source VM to be stopped
- Import recreates the VM under the same name and leaves it stopped
- `top` refreshes continuously by default; use `ssh -t srv top --interval 2s` or similar to slow the redraw rate
- Integration targets are intentionally narrow in v1: HTTP only, operator-managed, no guest-supplied raw secrets, and no automatic outbound interception
6. HTTP integrations
srv can expose selected upstream HTTP APIs to a VM through a host-side integration gateway without storing the real upstream credentials inside the guest.
This feature is intentionally narrow in v1:
- Admin or operator managed only
- HTTP integrations only
- No guest-side secret submission over SSH
- No discovery, tagging, or attachment system
- No transparent interception of arbitrary outbound HTTPS
6.1 How it works
```
┌──────────────────────────────────────────────────────┐
│ Host                                                 │
│                                                      │
│ srv control plane                                    │
│   │                                                  │
│   │ integration add/enable                           │
│   ▼                                                  │
│ SQLite metadata + /etc/srv/srv.env                   │
│   │                                                  │
│   ▼                                                  │
│ Integration gateway :11435 on VM gateway IP          │
│   │                                                  │
│   │ /integrations/openai/...                         │
│   │ injects headers/auth from SRV_SECRET_*           │
│   ▼                                                  │
│ upstream HTTP API                                    │
└──────────────────────────────────────────────────────┘
         │
         │ /30 network
         │
┌────────┴────────┐
│    Guest VM     │
│                 │
│ curl http://    │
│   :11435        │
│ /integrations/  │
│   openai/...    │
└─────────────────┘
```
Each enabled VM gets access to a host-side HTTP listener on its gateway IP and SRV_INTEGRATION_GATEWAY_PORT (default 11435). Requests are routed by path prefix:
http://<gateway-ip>:11435/integrations/<name>/...
The host looks up the named integration, rewrites the request to the configured upstream target, injects the configured auth or headers, and forwards the request.
6.2 Supported auth and header modes
An integration can use any combination of:
- Static headers via
--header NAME:VALUE - Env-backed headers via
--header-env NAME:SRV_SECRET_FOO - Bearer auth via
--bearer-env SRV_SECRET_FOO - Basic auth via
--basic-user USER --basic-password-env SRV_SECRET_BAR
Secrets are referenced by env var name only. The raw values are expected to live on the host, typically in /etc/srv/srv.env.
For --header-env, the env var value becomes the exact header value sent upstream.
6.3 Security behavior
The gateway is designed so the guest gets access to the upstream capability, not the underlying secret material.
- The gateway only accepts requests from the owning guest IP
- Guest-supplied
AuthorizationandProxy-Authorizationheaders are stripped on the host - Env-backed secrets are read on the host at request time
- If a referenced
SRV_SECRET_*env var is missing, the gateway returns502 Bad Gateway - Request paths are normalized before routing so traversal attempts cannot escape the configured integration prefix
6.4 Example setup
Add host secrets to /etc/srv/srv.env:
```
SRV_INTEGRATION_GATEWAY_PORT=11435
SRV_SECRET_OPENAI_PROD=sk-live-redacted
SRV_SECRET_VENDOR_API_KEY=vendor-key-redacted
```
Create and enable integrations from the control plane:
```
ssh srv integration add http openai --target https://api.openai.com/v1 --bearer-env SRV_SECRET_OPENAI_PROD
ssh srv integration add http vendor --target https://vendor.example/api --header-env X-API-Key:SRV_SECRET_VENDOR_API_KEY
ssh srv integration enable demo openai
ssh srv integration enable demo vendor
ssh srv inspect demo
```
The VM's inspect output will include URLs such as:
```
integrations:
- openai: http://172.28.0.1:11435/integrations/openai
- vendor: http://172.28.0.1:11435/integrations/vendor
```
From inside the guest, requests go to the gateway IP:
```
curl http://$(ip route show default | awk '{print $3}'):11435/integrations/openai/models
curl http://$(ip route show default | awk '{print $3}'):11435/integrations/vendor/ping
```
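The `$(ip route …)` substitution can be factored into a helper so the parsing is testable outside a guest. The route line fed in below is a made-up sample; inside a real guest you would pipe `ip -4 route show default` instead:

```shell
# Extract the gateway address (third field of the default-route line).
gateway_ip() { awk '/^default/ {print $3; exit}'; }

# Inside a guest:
#   gw=$(ip -4 route show default | gateway_ip)
#   curl "http://$gw:11435/integrations/openai/models"

# Sample input for illustration:
echo "default via 172.28.0.1 dev eth0" | gateway_ip   # prints 172.28.0.1
```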
6.5 Related references
7. Running as a daemon
srv is designed to run as a set of systemd services on a prepared host. The installer sets up three units that must all be active for the control plane to function.
7.1 Systemd units
| Unit | Purpose |
|---|---|
| `srv.service` | The main control-plane process. Joins the tailnet via tsnet and exposes the SSH API on port 22. |
| `srv-net-helper.service` | Root-only helper that owns TAP device creation, iptables MASQUERADE, and FORWARD rules for guest NAT. |
| `srv-vm-runner.service` | Root-owned process that invokes Firecracker through the jailer, drops to `srv-vm:srv`, and manages per-VM cgroup v2 leaves. |
7.2 Common systemd commands
```
# Check status
sudo systemctl status srv srv-net-helper srv-vm-runner

# View logs
sudo journalctl -u srv -f
sudo journalctl -u srv-vm-runner -f

# Restart all services
sudo systemctl stop srv srv-net-helper srv-vm-runner
sleep 5
sudo systemctl start srv-vm-runner srv-net-helper srv
```
srv-vm-runner.service must keep User=root, Group=srv, Delegate=cpu memory pids, DelegateSubgroup=supervisor, and a group-accessible socket under /run/srv-vm-runner/. Do not add NoNewPrivileges=yes — the jailer must drop privileges and exec Firecracker on real hosts.
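The constraints above map to unit directives roughly like this excerpt. It is illustrative, not the shipped unit — in particular the `RuntimeDirectory*` lines are one assumed way to get a group-accessible socket directory under `/run/srv-vm-runner/`; check the installed unit for the authoritative version:

```
[Service]
User=root
Group=srv
Delegate=cpu memory pids
DelegateSubgroup=supervisor
# Assumption: RuntimeDirectory* as one way to provide the
# group-accessible /run/srv-vm-runner/ socket directory.
RuntimeDirectory=srv-vm-runner
RuntimeDirectoryMode=0750
```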
7.3 Host reboot recovery
Same-host reboot recovery is built into the control plane. When srv comes back under systemd, previously active instances are restarted automatically.
You do not need to manually restart VMs after a host reboot.
7.4 Environment file
Configuration lives in /etc/srv/srv.env. See the configuration reference for every variable.
If /etc/srv/srv.env already exists before an upgrade, the installer keeps it by default. After upgrade, verify that SRV_FIRECRACKER_BIN and SRV_JAILER_BIN still point at the intended static binaries.
7.5 Upgrading
1. Take a quiesced backup or a host snapshot.
2. Build and install the new `srv`, `srv-net-helper`, and `srv-vm-runner` binaries.
3. If you also refreshed Firecracker, verify `/etc/srv/srv.env` still points at the correct binaries.
4. Restart:
   ```
   sudo systemctl stop srv srv-net-helper srv-vm-runner
   sleep 5
   sudo systemctl start srv-vm-runner srv-net-helper srv
   ```
5. Run the smoke test:
   ```
   sudo ./contrib/smoke/host-smoke.sh
   ```
See the operations reference for full upgrade and rollback procedures.
7.6 Smoke test as a validation gate
The smoke test is part of the supported workflow after install, restore, and upgrade — not an optional extra:
sudo ./contrib/smoke/host-smoke.sh
Overrides:
- `ENV_PATH=/etc/srv/srv.env.alt` — alternate environment file
- `SMOKE_SSH_HOST=srv-test` — alternate control-plane hostname
- `INSTANCE_NAME=smoke-manual` — force a predictable instance name
- `KEEP_FAILED=1` — leave a failed instance intact for debugging
- `READY_TIMEOUT_SECONDS=300` — override guest-ready timeout
9. Configuration reference
srv is configured through environment variables, read from /etc/srv/srv.env by the systemd units.
9.1 Tailscale credentials
| Variable | Required | Description |
|---|---|---|
| `TS_AUTHKEY` | Yes* | Tailscale auth key for the control-plane node. After first start, tsnet state usually persists, but keeping this configured is the simplest setup. |
| `TS_CLIENT_ID` | Yes | Tailscale OAuth client ID for minting guest auth keys |
| `TS_CLIENT_SECRET` | Yes | Tailscale OAuth client secret for minting guest auth keys |
| `TS_TAILNET` | Yes | Tailnet name used for API operations |
Use either TS_AUTHKEY or TS_CLIENT_ID/TS_CLIENT_SECRET for the control-plane node. The OAuth flow is preferred for guest auth key minting.
9.2 Core settings
| Variable | Default | Description |
|---|---|---|
| `SRV_HOSTNAME` | `srv` | Tailscale hostname for the control plane |
| `SRV_LISTEN_ADDR` | `:22` | Tailnet TCP listen address for the SSH API |
| `SRV_DATA_DIR` | `/var/lib/srv` | State directory. Must be on the same reflink-capable filesystem as `SRV_BASE_ROOTFS`. |
| `SRV_NET_HELPER_SOCKET` | `/run/srv/net-helper.sock` | Unix socket for the privileged network helper |
| `SRV_VM_RUNNER_SOCKET` | `/run/srv-vm-runner/vm-runner.sock` | Unix socket for the Firecracker VM runner |
| `SRV_FIRECRACKER_BIN` | `/usr/bin/firecracker` | Path to the Firecracker binary |
| `SRV_JAILER_BIN` | `/usr/bin/jailer` | Path to the jailer binary |
9.3 Guest artifacts
| Variable | Default | Description |
|---|---|---|
| `SRV_BASE_KERNEL` | (required) | Path to the Firecracker guest kernel image |
| `SRV_BASE_ROOTFS` | (required) | Path to the base rootfs image. Must be on the same reflink-capable filesystem as `SRV_DATA_DIR`. |
| `SRV_BASE_INITRD` | (empty) | Optional initrd image path |
9.4 Guest defaults
| Variable | Default | Description |
|---|---|---|
| `SRV_VM_VCPUS` | `1` | Default vCPU count for new VMs |
| `SRV_VM_MEMORY_MIB` | `1024` | Default memory in MiB for new VMs |
| `SRV_VM_PIDS_MAX` | `512` | Maximum tasks in each VM cgroup |
| `SRV_GUEST_AUTH_TAGS` | (required) | Comma-separated tags applied to guest auth keys |
| `SRV_GUEST_AUTH_EXPIRY` | `15m` | TTL for one-off guest auth keys |
| `SRV_GUEST_READY_TIMEOUT` | `2m` | Time to wait for a guest to join the tailnet |
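Any of these defaults can be overridden in `/etc/srv/srv.env`. For example, to make new VMs larger by default and tolerate slower tailnet joins (values purely illustrative):

```
SRV_VM_VCPUS=2
SRV_VM_MEMORY_MIB=2048
SRV_GUEST_READY_TIMEOUT=5m
```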
9.5 Networking
| Variable | Default | Description |
|---|---|---|
| `SRV_VM_NETWORK_CIDR` | `172.28.0.0/16` | IPv4 network reserved for VM /30 allocations |
| `SRV_VM_DNS` | `1.1.1.1,1.0.0.1` | Comma-separated guest nameservers |
| `SRV_OUTBOUND_IFACE` | auto-detected | Optional override for the host interface used for NAT |
9.6 Authorization
| Variable | Default | Description |
|---|---|---|
| `SRV_ALLOWED_USERS` | (empty) | Comma-separated Tailscale login allowlist. Empty means allow all tailnet users. |
| `SRV_ADMIN_USERS` | (empty) | Comma-separated Tailscale logins with cross-instance visibility and management rights |
9.7 Zen gateway
| Variable | Default | Description |
|---|---|---|
| `SRV_ZEN_API_KEY` | (empty) | OpenCode Zen API key. When set, enables per-VM Zen gateways. |
| `SRV_ZEN_BASE_URL` | `https://opencode.ai/zen` | Upstream Zen API base URL |
| `SRV_ZEN_GATEWAY_PORT` | `11434` | TCP port for each VM's gateway proxy |
9.8 HTTP integrations
| Variable | Default | Description |
|---|---|---|
| `SRV_INTEGRATION_GATEWAY_PORT` | `11435` | TCP port for each VM's host-side integration gateway |
| `SRV_SECRET_*` | (empty) | Host-managed secret env vars referenced by integration definitions for bearer auth, basic auth passwords, or env-backed headers |
The SRV_SECRET_* entries are a naming convention rather than a fixed list. They are meant to live in /etc/srv/srv.env so the SSH control surface only ever receives secret names such as SRV_SECRET_OPENAI_PROD, not the raw credential values.
9.9 Alternate Tailscale endpoints
| Variable | Default | Description |
|---|---|---|
| `TS_CONTROL_URL` | (Tailscale default) | Alternate Tailscale coordination server (e.g. Headscale) |
| `TS_API_BASE_URL` | `https://api.tailscale.com` | Alternate Tailscale API base URL |
| `SRV_GUEST_TAILSCALE_CONTROL_URL` | (same as `TS_CONTROL_URL`) | Alternate control URL injected into guest bootstrap |
9.10 Misc
| Variable | Default | Description |
|---|---|---|
| `SRV_LOG_LEVEL` | `info` | Log level |
| `SRV_EXTRA_KERNEL_ARGS` | (empty) | Additional kernel arguments appended to the guest boot line |
| `SRV_JAILER_BASE_DIR` | `SRV_DATA_DIR/jailer` | Base directory for jailer workspaces. Must be on the same filesystem as `SRV_DATA_DIR`. |
9.11 Path constraints
Several paths must share the same reflink-capable filesystem (typically btrfs or reflink-enabled xfs):
- `SRV_DATA_DIR` — host state directory
- `SRV_BASE_ROOTFS` — base guest image
- `SRV_JAILER_BASE_DIR` — jailer workspaces (hard-links log files into the jail)
Cross-filesystem hard-links fail, so keeping these on the same filesystem is required.
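The failure mode is easy to reproduce. This sketch attempts a hard-link between two mounts — `/tmp` and `/dev/shm` stand in for two different filesystems here, since on most hosts they are distinct mounts:

```shell
# Hard-links cannot cross filesystem boundaries; the kernel returns
# EXDEV. This is why SRV_JAILER_BASE_DIR must share a filesystem with
# SRV_DATA_DIR.
src=$(mktemp /tmp/srv-link-demo.XXXXXX)
if ln "$src" /dev/shm/srv-link-demo 2>/dev/null; then
  msg="same filesystem: hard-link succeeded"
  rm -f /dev/shm/srv-link-demo
else
  msg="cross-filesystem hard-link failed (EXDEV)"
fi
echo "$msg"
rm -f "$src"
```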
10. Guest image reference
The default guest image is an Arch Linux rootfs designed for srv. It is built by images/arch-base/build.sh and contains a full Linux userspace with developer tooling preinstalled.
10.1 Artifacts
The builder produces:
| File | Description |
|---|---|
| `vmlinux` | x86_64 Firecracker-compatible kernel, built from 6.12 LTS with Firecracker's microvm config as baseline |
| `rootfs-base.img` | Sparse ext4 image populated via pacstrap |
| `manifest.txt` | Build manifest with version info |
10.2 Included packages
The image is intentionally not minimal — it includes tooling for development and AI agent workflows:
- Docker, docker-compose
- Go, gopls
- Odin, odinfmt, OLS
- Neovim with prewarmed LazyVim (BMW heritage amber theme)
- OpenCode and Pi CLIs with per-VM Zen gateway bootstrap
- Git, fd, ripgrep, tree-sitter-cli, gcc, perf, valgrind
- iptables-nft with IPv4/IPv6 nftables support
- Kernel module tree matching the custom kernel (overlay, br_netfilter, Docker-related modules)
10.3 Bootstrap service
The guest includes srv-bootstrap.service, which runs on every boot:
- Discovers the primary virtio interface from the kernel-provided default route
- Adds a route to Firecracker MMDS at `169.254.169.254/32`
- Reads the MMDS payload from `http://169.254.169.254/` with `Accept: application/json`
- Sets the hostname from `srv.hostname`
- Starts `tailscaled`
- Runs `tailscale up --auth-key=... --hostname=... --ssh` on the first authenticated boot (relies on persisted state on later boots)
- Writes `/root/.config/opencode/opencode.json` plus Pi config under `/root/.pi/agent/` pointed at the per-VM host Zen gateway when `SRV_ZEN_API_KEY` is configured, or removes those managed defaults when the gateway is disabled
- Writes `/var/lib/srv/bootstrap.done` with the latest successful bootstrap timestamp
The --ssh flag on tailscale up is intentional — it enables Tailscale SSH so the control plane's connect: ssh root@<name> output works through the tailnet without per-user OpenSSH keys.
10.4 Kernel details
- Starts from Firecracker's `microvm-kernel-ci-x86_64-6.1.config` and runs `olddefconfig` against the selected source tree
- Enables `CONFIG_PCI=y` for ACPI initialization (required by current Firecracker x86 builds)
- Disables `CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES` so the kernel prefers ACPI discovery
- Enables Landlock and adds it to `CONFIG_LSM` (keeps pacman's download sandbox working)
- Builds real `.ko` module files for Docker, overlay, br_netfilter, and nftables
10.5 DNS
/etc/resolv.conf is symlinked to /proc/net/pnp so the kernel ip= boot parameter provides working DNS before tailscale up runs.
10.6 Logging
journald is configured to forward logs to ttyS0, making the guest bootstrap flow visible in each instance's serial log (ssh srv logs <name> serial).
11. Operations runbook
This runbook is for the supported prepared-host path: systemd-managed srv, srv-net-helper, and srv-vm-runner on a Linux host with cgroup v2, /dev/kvm, SRV_DATA_DIR on a reflink-capable filesystem such as btrfs or reflink-enabled xfs, SRV_BASE_ROOTFS on the same filesystem, and the official static Firecracker/jailer release pair.
Same-host reboot recovery is already built into the control plane: when srv comes back under systemd, previously active instances are restarted automatically. The steps below cover the larger operator workflows that were previously only implied.
11.1 Supported upgrade lanes
- Guest-local maintenance is still guest-local. Running `pacman -Syu` inside a guest is supported when you want that guest to drift independently, but it is not the control plane's golden-image rollout path.
- Kernel roll-forward is a host-managed lane. Existing stopped guests pick up the current `SRV_BASE_KERNEL` and optional `SRV_BASE_INITRD` on their next `start` or `restart`.
- Rootfs golden-image rollout is a new-guest lane. Updating `SRV_BASE_ROOTFS` changes what future `new` clones from, but it does not rewrite existing guests' writable disks.
- Schema rollout is tied to the control-plane binary. SQLite migrations run during `srv` startup and are currently additive; rollback means restoring the pre-upgrade backup together with the previous binary set.
11.2 Backup
Take backups from a quiesced host so SRV_DATA_DIR and the SQLite WAL state are self-consistent.
If you need a fast host-local point-in-time copy without shutting the control plane down first, use the built-in snapshot barrier instead:
ssh srv snapshot create
That command is admin-only. It briefly rejects every other SSH command, waits for already admitted commands to finish, checkpoints SQLite, flushes the filesystem, and then creates a read-only btrfs snapshot of `SRV_DATA_DIR` under `SRV_DATA_DIR/.snapshots/<timestamp>`.
Snapshot semantics are intentionally limited and explicit:
- control-plane consistent
- stopped guests fully safe
- running guests crash-consistent
Important caveats for this path:
- `SRV_DATA_DIR` itself must be a btrfs subvolume root. A plain directory on btrfs is not enough.
- The app snapshots `SRV_DATA_DIR` only. `/etc/srv`, environment files, and unit overrides still need the existing operator-managed backup flow below.
- Remote `btrfs send/receive` replication is intentionally out of the barrier path. If you use it for DR or warm standby, run it after the local snapshot already exists.
- Stop the services.

  ```
  sudo systemctl stop srv srv-net-helper srv-vm-runner
  ```

- Capture the state directory, environment file, and any unit overrides.

  ```
  sudo tar --xattrs --acls --numeric-owner \
    --ignore-failed-read \
    -C / \
    -czf /var/tmp/srv-backup-$(date -u +%Y%m%dT%H%M%SZ).tar.gz \
    etc/srv \
    var/lib/srv \
    etc/systemd/system/srv.service.d \
    etc/systemd/system/srv-net-helper.service.d \
    etc/systemd/system/srv-vm-runner.service.d \
    etc/systemd/system.control
  ```

- Start the services again if this was only a backup window.

  ```
  sudo systemctl start srv-vm-runner srv-net-helper srv
  ```
Notes:
- Preserve the configured paths in `/etc/srv/srv.env`. Instance rows store absolute runtime paths such as `SRV_DATA_DIR/instances/<name>/rootfs.img`, so changing `SRV_DATA_DIR`, `SRV_JAILER_BASE_DIR`, or the base artifact paths during restore is not a supported relocation workflow.
- Keep `SRV_JAILER_BASE_DIR` on the same filesystem as `SRV_DATA_DIR`; the runner hard-links log files into the jail and cross-filesystem links fail.
11.3 Move One Stopped VM Between Hosts
For single-VM cutover, use the portable stopped-VM stream instead of copying SQLite rows or the whole host state directory:
ssh srv-a export demo | ssh srv-b import
Operational notes:
- The source VM must be stopped before export.
- Import recreates the VM under the same name and leaves it stopped. Start it explicitly on the destination after the stream completes.
- The artifact preserves portable metadata such as name, creator, machine shape, rootfs size, and last-known Tailscale name or IP.
- Serial and Firecracker logs are included when present, but each is capped to the newest 256 MiB during export.
- Import regenerates destination-local runtime state such as absolute file paths, TAP device wiring, guest MAC, and VM subnet allocation.
- The copied rootfs carries the guest's durable Tailscale identity, so do not boot the source and destination copies at the same time.
- The destination host uses its currently configured `SRV_BASE_KERNEL` and optional `SRV_BASE_INITRD` on the first later `start`; only the writable guest disk and optional logs come from the streamed artifact.
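The 256 MiB log cap is a keep-the-newest truncation. A minimal sketch of the idea using `tail -c`, with a tiny file and a 4-byte cap standing in for the real sizes (the actual export logic lives in srv itself):

```shell
# Keep only the newest N bytes of a log; tail -c keeps the end of the file.
# A tiny file and a 4-byte cap stand in for the real 256 MiB cap.
printf 'AAAABBBB' > serial.log
tail -c 4 serial.log > serial.capped.log
cat serial.capped.log   # -> BBBB
rm -f serial.log serial.capped.log
```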
11.4 Restore Or Rebuild A Host
- Prepare a fresh host with the normal prerequisites: Tailscale, cgroup v2, `/dev/kvm`, reflink-capable storage such as `btrfs` or reflink-enabled `xfs` shared by `SRV_DATA_DIR` and `SRV_BASE_ROOTFS`, and the repo checkout.
- Reinstall the managed assets.

  ```
  sudo ./contrib/systemd/install.sh
  ```

- Re-enable IPv4 forwarding for guest NAT on the rebuilt host.

  ```
  sudo tee /etc/sysctl.d/90-srv-ip-forward.conf >/dev/null <<'EOF'
  net.ipv4.ip_forward = 1
  EOF
  sudo sysctl --system
  ```

- Restore `/etc/srv/srv.env` from backup and verify that `SRV_FIRECRACKER_BIN`, `SRV_JAILER_BIN`, `SRV_DATA_DIR`, `SRV_BASE_KERNEL`, `SRV_BASE_ROOTFS`, and any optional `SRV_BASE_INITRD` still point at the intended paths.
- Restore the saved `SRV_DATA_DIR` tree to the same path.
- Reload systemd and start the services.

  ```
  sudo systemctl daemon-reload
  sudo systemctl enable --now srv-vm-runner srv-net-helper srv
  ```

- Run the prepared-host validation gate before handing the host back to users.

  ```
  sudo ./contrib/smoke/host-smoke.sh
  ```
That smoke pass is part of the supported restore workflow now, not an optional extra.
11.5 Upgrade And Rollback
11.5.1 Control Plane And Schema
- Take the quiesced backup above before starting the upgraded `srv` binary for the first time.
- Build and install the new `srv`, `srv-net-helper`, and `srv-vm-runner` binaries.
- If you also refreshed Firecracker, use the matching official static `firecracker` and `jailer` pair and verify `/etc/srv/srv.env` still points at those paths.
- Restart the services.

  ```
  sudo systemctl stop srv srv-net-helper srv-vm-runner
  sleep 5
  sudo systemctl start srv-vm-runner srv-net-helper srv
  ```

- Run the host smoke test.

  ```
  sudo ./contrib/smoke/host-smoke.sh
  ```
Rollback for control-plane or schema regressions is restore-based:
- Stop the services.
- Reinstall the previous binaries and previous static Firecracker/jailer pair if those changed.
- Restore the pre-upgrade backup of `/etc/srv` and `SRV_DATA_DIR`.
- Restart the services.
- Run the same host smoke test again.
11.5.2 Kernel Rollout For Existing Guests
- Rebuild the kernel artifact under `images/arch-base/`.
- Update `SRV_BASE_KERNEL` and optional `SRV_BASE_INITRD` in `/etc/srv/srv.env`.
- Restart the units if needed so the runner sees the new base paths.
- Stop and start guests one at a time, or let already stopped guests pick up the new boot artifacts on their next `start`.
- Use a canary guest first, then roll wider once it passes the workload check you care about.
Rollback is just pointing `SRV_BASE_KERNEL` or `SRV_BASE_INITRD` back to the previous artifact and restarting the affected guests again.
11.5.3 Golden Rootfs Rollout
- Rebuild `rootfs-base.img` under `images/arch-base/`.
- Point `SRV_BASE_ROOTFS` at the new image.
- Create a canary guest with `ssh srv new <name>` and validate it.
- After the canary passes, new guests will clone from the new base image.
Rollback for the golden rootfs lane is also path-based: point `SRV_BASE_ROOTFS` back to the previous image before creating more guests.
Important caveat: existing guests keep their own writable rootfs.img. There is no host-driven in-place existing-rootfs conversion workflow yet. For an existing guest that needs OS updates today, the supported choices are:
- manage that guest locally with `pacman -Syu`, accepting guest-local drift
- create a replacement guest from the refreshed golden image and migrate the workload or data to it
11.6 Host Hardening And Caveats
- cgroup v2 is required. The runner now depends on a delegated cgroup v2 subtree to place each VM into its own `firecracker-vms/<name>` leaf with enforced `cpu.max`, `memory.max`, `memory.swap.max`, and `pids.max`.
- IPv4 forwarding must stay enabled on the host. Guest egress depends on forwarding packets from each TAP device through the host's outbound interface after the helper installs MASQUERADE and `FORWARD` rules.
- `srv-vm-runner.service` must keep `User=root`, `Group=srv`, `Delegate=cpu memory pids`, `DelegateSubgroup=supervisor`, and a group-accessible socket under `/run/srv-vm-runner/`.
- Do not add `NoNewPrivileges=yes` to `srv-vm-runner.service`; the jailer must drop privileges and `exec` Firecracker on real hosts.
- Keep using the official static Firecracker and jailer release pairing. Distro-provided dynamically linked binaries can fail after chroot before the API socket appears.
- Preserve `/etc/srv/srv.env` across reinstall or upgrade unless you are intentionally changing configuration and have accounted for the stored absolute paths.
- Keep `SRV_DATA_DIR` and `SRV_BASE_ROOTFS` on the same reflink-capable filesystem, such as `btrfs` or reflink-enabled `xfs`. Fast per-instance provisioning still depends on reflink cloning the configured base rootfs.
- Run `sudo ./contrib/smoke/host-smoke.sh` after install, restore, control-plane upgrade, and base-image changes.
12. Troubleshooting
12.1 VM is stuck in provisioning or failed
Check the control-plane view first:
ssh srv inspect <name>
Look for the last_error field and state. Then check the guest and VMM logs:
```
ssh srv logs <name> serial
ssh srv logs <name> firecracker
```
Common causes:
- Missing or misconfigured base rootfs/kernel
- Tailscale OAuth credentials expired or invalid
- Network helper not running (`systemctl status srv-net-helper`)
- VM runner not running (`systemctl status srv-vm-runner`)
- `/dev/kvm` not accessible
12.2 VM boots but Tailscale SSH doesn't work
If ssh srv inspect <name> shows state: ready and a Tailscale IP but you can't SSH in:
- Check the serial log for bootstrap errors: `ssh srv logs <name> serial`
- Look for `srv-bootstrap.service` failures
- Verify that `SRV_GUEST_AUTH_TAGS` matches a tag your OAuth client can mint keys for
- Verify that the guest can reach the Tailscale coordination server (check DNS and outbound connectivity from inside the VM if possible)
12.3 Guest can't reach the internet
Guest egress depends on IPv4 forwarding and MASQUERADE rules:
```
# Check forwarding
sysctl net.ipv4.ip_forward

# Check iptables rules
sudo iptables -t nat -L -n
sudo iptables -L FORWARD -n

# Check the network helper
sudo systemctl status srv-net-helper
```
If net.ipv4.ip_forward is 0, re-enable it:
sudo sysctl -w net.ipv4.ip_forward=1
12.4 VM runs out of disk space
Check from inside the VM:
df -h
If the rootfs is full, you can resize the VM:
```
ssh srv stop <name>
ssh srv resize <name> --rootfs-size 20G
ssh srv start <name>
```
On the host, check SRV_DATA_DIR usage:
df -h /var/lib/srv
12.5 Check cgroup limits
If a VM is being throttled or OOM-killed, check its cgroup v2 limits:
```
cat /sys/fs/cgroup/firecracker-vms/<name>/cpu.max
cat /sys/fs/cgroup/firecracker-vms/<name>/memory.max
cat /sys/fs/cgroup/firecracker-vms/<name>/memory.swap.max
cat /sys/fs/cgroup/firecracker-vms/<name>/pids.max
```
12.6 Firecracker or jailer errors
The VM runner logs to journald:
sudo journalctl -u srv-vm-runner -f
Common issues:
- Jailer chroot setup failures — check that `SRV_JAILER_BASE_DIR` is on the same filesystem as `SRV_DATA_DIR`
- Permission errors — verify `srv-vm-runner.service` keeps `User=root`, `Group=srv`, `Delegate=cpu memory pids`, `DelegateSubgroup=supervisor`, and no `NoNewPrivileges=yes`
- `/dev/kvm` not available — check permissions and that the KVM module is loaded
12.7 Smoke test
The end-to-end smoke test validates the full host setup:
sudo ./contrib/smoke/host-smoke.sh
Overrides:
| Variable | Default | Description |
|---|---|---|
| `ENV_PATH` | `/etc/srv/srv.env` | Alternate environment file |
| `SMOKE_SSH_HOST` | `srv` | Alternate control-plane hostname |
| `INSTANCE_NAME` | `smoke-<random>` | Force a predictable instance name |
| `ARTIFACT_ROOT` | `/var/tmp/srv-smoke` | Artifact storage root |
| `KEEP_FAILED` | (unset) | Leave a failed instance intact for debugging |
| `READY_TIMEOUT_SECONDS` | derived from config | Override guest-ready timeout |
| `GUEST_SSH_READY_TIMEOUT` | `45` | Seconds to wait for guest SSH after ready |
On failure, artifacts are written to /var/tmp/srv-smoke/<instance>/ including inspect, logs, systemctl status, and journalctl output.
13. Backup and restore
srv provides in-place backup and restore for stopped VMs. This is useful for checkpointing VMs before risky changes and rolling them back when something goes wrong.
13.1 Create a backup
```
ssh srv stop demo
ssh srv backup create demo
```
This copies the current rootfs image into the backup store under SRV_DATA_DIR/backups/demo/<backup-id>/.
13.2 List backups
ssh srv backup list demo
Each backup has a unique ID and timestamp.
13.3 Restore from a backup
ssh srv restore demo <backup-id>
This replaces the current rootfs with the backup's rootfs. The VM must be stopped.
13.4 Full workflow
```
# Create and configure a VM
ssh srv new demo
ssh root@demo   # install packages, configure services

# Checkpoint before risky changes
ssh srv stop demo
ssh srv backup create demo
ssh srv start demo

# ... make changes that break things ...

# Reset to the checkpoint
ssh srv stop demo
ssh srv backup list demo
ssh srv restore demo <backup-id>
ssh srv start demo

# Verify the restore worked — the VM should be back to the checkpoint state
```
13.5 Constraints
- Stopped only: both backup and restore require the VM to be stopped
- In-place only: backups are tied to the original VM record. They cannot be restored onto a newly created VM that reuses the same name
- Single host: backups live on the same host. For cross-host migration, use export/import
- Rootfs only: backups capture the writable rootfs. The kernel and initrd come from the host's current configuration on the next `start`
13.6 Backup storage
Backups are stored under:
SRV_DATA_DIR/backups/<name>/<backup-id>/
Monitor disk usage if you create many backups — each one is a full copy of the rootfs at that point in time.
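A quick way to see what the store holds is to count the per-backup directories and sum their disk usage. A sketch, simulated here under a temporary directory that mirrors the `backups/<name>/<backup-id>/` layout so it runs anywhere (the `demo` and `b-001` names are illustrative):

```shell
# Simulate the backup layout under a temp dir standing in for SRV_DATA_DIR.
data_dir=$(mktemp -d)
mkdir -p "$data_dir/backups/demo/b-001" "$data_dir/backups/demo/b-002"
truncate -s 1M "$data_dir/backups/demo/b-001/rootfs.img"

# One directory per backup: count them, then sum the store's usage.
find "$data_dir/backups" -mindepth 2 -maxdepth 2 -type d | wc -l   # -> 2
du -sk "$data_dir/backups"

rm -rf "$data_dir"
```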
14. Build a custom guest image
The default guest image is an Arch Linux rootfs with Docker, Go, Neovim, OpenCode, Pi, and common development tools. You can customize it by modifying the overlay or building a completely different rootfs.
14.1 Default image builder
The images/arch-base/ directory contains the official builder:
sudo OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh
This produces two artifacts:
- `vmlinux` — an x86_64 Firecracker-compatible kernel built from the 6.12 LTS source
- `rootfs-base.img` — a sparse ext4 image populated via `pacstrap`
14.1.1 What the image includes
- Docker, docker-compose
- Go, gopls, Odin, odinfmt, OLS
- Neovim with a prewarmed LazyVim config (BMW heritage amber theme)
- OpenCode and Pi CLIs with per-VM Zen gateway bootstrap
- Git, fd, ripgrep, tree-sitter-cli, gcc, perf, valgrind
- iptables-nft with IPv4/IPv6 nftables support
- `srv-bootstrap.service` for Tailscale and MMDS setup
14.1.2 Build overrides
```
# Change kernel version
sudo KERNEL_VERSION=6.1.167 OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh

# Change rootfs size (default 10G)
sudo ROOTFS_SIZE=20G OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh

# Reduce kernel build parallelism
sudo KERNEL_JOBS=2 OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh
```
14.2 Podman build on non-Arch hosts
If your host is not Arch Linux and does not provide pacstrap:
```
sudo podman run --rm --privileged --network host \
  -v "$PWD":/work \
  -v /var/lib/srv/images/arch-base:/var/lib/srv/images/arch-base \
  -w /work \
  docker.io/library/archlinux:latest \
  bash -lc '
    set -euo pipefail
    pacman -Sy --noconfirm archlinux-keyring
    pacman -Syu --noconfirm arch-install-scripts base-devel bc e2fsprogs rsync curl systemd
    OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh
  '
```
- `--privileged` is required because the builder uses `losetup`, `mkfs.ext4`, and `mount`
- `--network host` keeps mirror and kernel downloads simple
14.3 Customizing the overlay
The images/arch-base/overlay/ directory contains files that are copied into the guest rootfs during the build. Changes here only affect new guests after you rebuild rootfs-base.img and refresh SRV_BASE_ROOTFS.
After modifying the overlay:
```
sudo OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh
# Then update SRV_BASE_ROOTFS in /etc/srv/srv.env to point at the new image
```
14.4 Rolling out a new image
Rootfs changes only affect newly created guests. Existing guests keep their own writable rootfs.img. There is no host-driven in-place rootfs update.
To migrate an existing guest to a new base image:
- Rebuild and update `SRV_BASE_ROOTFS`
- Create a new guest with `ssh srv new <name>`
- Migrate workload data to the new guest
- Delete the old guest
Or manage the existing guest locally with pacman -Syu, accepting guest-local drift.
14.5 Kernel rollout
Kernel roll-forward is simpler. Existing stopped guests pick up the currently configured SRV_BASE_KERNEL and optional SRV_BASE_INITRD on their next start or restart:
- Rebuild the kernel artifact
- Update `SRV_BASE_KERNEL` (and `SRV_BASE_INITRD` if applicable) in `/etc/srv/srv.env`
- Restart the units if needed
- Stop and start guests, or let stopped guests pick up the new kernel on their next start
Rollback: point the path back to the previous artifact and restart guests again.
15. Export and import
The export and import commands stream a portable VM artifact between hosts. This is the supported way to move a VM from one srv host to another.
15.1 Export
On the source host:
```
ssh srv stop demo
ssh srv export demo
```
The command writes a tar stream to stdout. The artifact contains:
- A versioned manifest
- The writable `rootfs.img`
- Serial and Firecracker logs when present (each capped to the newest 256 MiB)
15.2 Import
Pipe the export stream directly into the destination host:
ssh srv-a export demo | ssh srv-b import
Import creates the VM under the same name on the destination host, allocates new runtime paths and network state, and leaves the VM stopped.
15.3 Start after import
ssh srv-b start demo
The destination host uses its currently configured SRV_BASE_KERNEL and optional SRV_BASE_INITRD — only the writable disk and optional logs come from the streamed artifact.
15.4 Important semantics
Because the guest's durable Tailscale identity lives in the copied rootfs:
- Do not boot the source and destination VMs at the same time — this would cause a Tailscale key conflict
- Treat export/import as cutover or move semantics, not cloning semantics
The destination host regenerates:
- Absolute file paths (runtime directories)
- TAP device wiring
- Guest MAC address
- VM `/30` subnet allocation
The artifact preserves:
- VM name
- Creator identity
- Machine shape (vCPUs, memory, rootfs size)
- Last-known Tailscale name and IP (as cached state)
15.5 Save to a file
You can also save the export to a file for later or offline transfer:
ssh srv export demo > demo-backup.tar
Then on the destination:
cat demo-backup.tar | ssh srv import
16. Host snapshots
The snapshot create command is a host-level disaster-recovery primitive that creates a consistent point-in-time copy of SRV_DATA_DIR.
16.1 How it works
ssh srv snapshot create
This is an admin-only command that:
- Briefly rejects all other SSH commands
- Waits for already admitted commands to finish
- Checkpoints SQLite
- Flushes the filesystem
- Creates a read-only btrfs snapshot of `SRV_DATA_DIR` under `SRV_DATA_DIR/.snapshots/<timestamp>`
16.2 Snapshot semantics
The snapshot is intentionally simple:
- Control-plane consistent: SQLite state is checkpointed
- Stopped guests fully safe: rootfs data is on disk and consistent
- Running guests crash-consistent only: like pulling the power on a running VM — the filesystem may need journal replay on restore
This is not a substitute for per-VM backups if you need application-consistent snapshots of running guests. Stop guests first or use ssh srv backup create for VM-level consistency.
16.3 Prerequisites
- `SRV_DATA_DIR` must be a btrfs subvolume root — a plain directory on btrfs is not enough
- The snapshot covers `SRV_DATA_DIR` only. `/etc/srv`, environment files, and unit overrides still need your existing operator-managed backup flow
16.4 Use with btrfs send/receive
You can combine snapshots with btrfs send/receive for off-host replication:
```
# After creating a snapshot
sudo btrfs send SRV_DATA_DIR/.snapshots/<timestamp> | \
  ssh backup-host btrfs receive /backup/srv/
```
Run btrfs send/receive after the local snapshot already exists — the snapshot barrier is not involved in the replication step.
16.5 Not included
- `/etc/srv/srv.env` — back this up separately
- Systemd unit overrides in `/etc/systemd/system/srv*.service.d/`
- Firecracker and jailer binaries
See the operations reference for the full host backup and restore workflow.
17. Instance lifecycle
This page covers the full lifecycle of a srv VM from creation to deletion.
17.1 Create
ssh srv new <name>
With custom sizing:
ssh srv new <name> --cpus <n> --ram <size> --rootfs-size <size>
`--ram` and `--rootfs-size` accept units like `2G`, `512M`, or plain MiB integers. `--cpus` must be 1 or an even number, up to 32.
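These rules are easy to pre-check in a wrapper script before calling `new`. A hedged sketch (the validation below mirrors the documented constraints; it is not srv's own parser):

```shell
# Convert a --ram / --rootfs-size value like 2G, 512M, or a plain MiB
# integer to MiB. Mirrors the documented units; not srv's own parser.
to_mib() {
  case "$1" in
    *G) echo $(( ${1%G} * 1024 )) ;;
    *M) echo "${1%M}" ;;
    *)  echo "$1" ;;
  esac
}

# --cpus must be 1 or an even number, up to 32.
valid_cpus() {
  [ "$1" -eq 1 ] || { [ $(( $1 % 2 )) -eq 0 ] && [ "$1" -ge 2 ] && [ "$1" -le 32 ]; }
}

to_mib 2G     # -> 2048
to_mib 512M   # -> 512
valid_cpus 3 || echo "invalid vCPU count"
```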
The control plane:
- Clones the base rootfs as a reflink
- Allocates a `/30` network subnet and TAP device
- Mints a one-off Tailscale auth key via MMDS
- Boots the VM through the jailer with cgroup v2 enforcement
- Polls until the guest reports `state: ready`
17.2 Inspect
ssh srv inspect <name>
Shows instance state, vCPU count, memory, rootfs size, network addresses, Tailscale name and IP, and event history.
Machine-readable:
ssh srv -- --json inspect <name>
17.3 List and status
```
ssh srv list
ssh srv status
```
list shows the VMs visible to the caller: all VMs for admins, or only the caller's own VMs for regular users. status is admin-only and reports host capacity — instance counts plus CPU, memory, and disk headroom.
17.4 Logs
```
ssh srv logs <name>
ssh srv logs <name> serial
ssh srv logs <name> firecracker
ssh srv logs -f <name> serial
ssh srv logs -f <name> firecracker
```
Both log sources are append-only. Always check the newest lines first.
17.5 Stop
ssh srv stop <name>
Performs a graceful shutdown through Firecracker. The rootfs is preserved on disk.
17.6 Start
ssh srv start <name>
Boots a previously stopped VM. Stopped guests pick up the currently configured SRV_BASE_KERNEL and optional SRV_BASE_INITRD on their next start.
17.7 Restart
ssh srv restart <name>
Stops and starts the VM in one command. Also picks up the current kernel and initrd.
17.8 Delete
ssh srv delete <name>
Removes the VM's rootfs, runtime directory, TAP device, cgroup, and jailer workspace. This is irreversible.
17.9 Warm boot behavior
When a VM that already has tailscaled state reboots (via start or restart), it reuses its existing Tailscale identity instead of minting a new auth key. This is called a warm boot.
On cold boot (the first `new`), the guest bootstrap service reads the Tailscale auth key from MMDS and runs `tailscale up --auth-key=... --ssh` exactly once.
18. Resize a VM
Resize lets you change the vCPU count, memory, and rootfs size of a stopped VM.
18.1 Prerequisites
- The VM must be stopped — resize is rejected for running VMs
- vCPUs and memory can be increased or decreased as long as the requested values stay within the supported limits
- Rootfs size is grow-only — shrink requests are rejected
18.2 Resize
```
ssh srv stop demo
ssh srv resize demo --cpus 4 --ram 8G --rootfs-size 20G
ssh srv start demo
```
You can specify any combination of flags. Omitted flags keep the current value:
```
# Only increase RAM
ssh srv stop demo
ssh srv resize demo --ram 16G
ssh srv start demo
```
18.3 How it works
- vCPU count: stored in the instance record and applied on the next boot
- Memory: stored in the instance record and applied on the next boot
- Rootfs size: uses `resize2fs` to grow the ext4 filesystem. The underlying file is expanded first, then the filesystem is grown. This operation only increases the filesystem — it never shrinks
Rootfs resize modifies the disk image in place. Take a backup first if you want a safety net.
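The expand-then-grow mechanism can be reproduced on a scratch image file without touching a real VM, since `resize2fs` operates on regular files unprivileged. A sketch (requires e2fsprogs; the sizes are illustrative):

```shell
# Grow an ext4 image the way the resize step describes:
# expand the backing file first, then grow the filesystem into it.
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"             # scratch filesystem to grow

truncate -s 128M "$img"            # 1. expand the underlying file
e2fsck -fp "$img" >/dev/null       # 2. resize2fs requires a clean fsck first
resize2fs "$img" >/dev/null 2>&1   # 3. grow the filesystem to fill the file

stat -c %s "$img"   # -> 134217728
rm -f "$img"
```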
18.4 Limits
| Dimension | Minimum | Maximum |
|---|---|---|
| vCPUs | 1 | 32 |
| Memory | 128 MiB | host limit |
| Rootfs | current size | host disk limit |
vCPU count must be 1 or an even number. CPU and memory changes are applied on the next boot.
19. View logs
srv provides two log sources for each VM: serial output and Firecracker VMM logs.
19.1 Serial log
Shows kernel boot, srv-bootstrap.service, tailscaled, and general guest console output:
```
ssh srv logs demo
ssh srv logs demo serial
ssh srv logs -f demo serial
```
19.2 Firecracker log
Shows VMM lifecycle events, API requests, and microVM-level errors:
```
ssh srv logs demo firecracker
ssh srv logs -f demo firecracker
```
19.3 Default log source
Without a log source argument, `ssh srv logs <name>` shows the serial log by default.
19.4 Where logs live on the host
| Log | Path |
|---|---|
| Serial | SRV_DATA_DIR/instances/<name>/serial.log |
| Firecracker | SRV_DATA_DIR/instances/<name>/firecracker.log |
Both are append-only. Always check the newest lines first when debugging multiple attempts against the same instance name.
19.5 Systemd logs
The VM runner and network helper also log to journald:
```
sudo journalctl -u srv-vm-runner -f
sudo journalctl -u srv-net-helper -f
sudo journalctl -u srv -f
```
19.6 Debugging a failed VM
When a VM is stuck or has failed:
- `ssh srv inspect <name>` — control-plane view, state, and recorded events
- `ssh srv logs <name> serial` — guest boot and bootstrap errors
- `ssh srv logs <name> firecracker` — VMM errors
- `journalctl -u srv-vm-runner --no-pager` — jailer and stop-time cleanup failures
- Check cgroup limits: `cat /sys/fs/cgroup/firecracker-vms/<name>/memory.max`
20. Sandboxed AI coding agent
srv makes it straightforward to run AI coding agents in isolated microVMs. Each VM gets its own cgroup limits, per-instance Tailscale identity, and an optional Zen API proxy that injects the host's API key without exposing it inside the guest.
20.1 Create a VM for an agent
ssh srv new agent-1 --cpus 4 --ram 8G --rootfs-size 30G
Wait for it to report ready:
ssh srv inspect agent-1
Look for `state: ready` and a `tailscale-ip`.
20.2 Zen API proxy
When SRV_ZEN_API_KEY is configured on the host, srv binds a per-instance HTTP proxy on the guest's gateway IP and port 11434 (configurable via SRV_ZEN_GATEWAY_PORT). The proxy:
- Only accepts requests from that VM's guest IP
- Forwards `/v1/...` requests to the upstream OpenCode Zen API with the host key injected
- The guest bootstrap writes `/root/.config/opencode/opencode.json` and Pi config under `/root/.pi/agent/` pointing at this gateway
This means the agent inside the VM can use opencode, pi, or any OpenAI-compatible client against http://<gateway-ip>:11434/v1 without ever seeing the real API key.
20.3 Connect the agent
ssh root@agent-1
The preinstalled opencode and pi CLIs are already configured to target the per-VM gateway. If you are using a different agent framework, point its API client at:
http://<gateway-ip>:11434/v1
The gateway IP is the default route inside the VM. You can read it from the inspect output under host-addr.
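Inside the guest, the gateway can also be derived from the default route instead of being hard-coded. A small sketch parsing a sample `ip route` line (the addresses are illustrative):

```shell
# Sample `ip route` output as seen inside a guest (illustrative values).
route_line="default via 172.28.0.1 dev eth0"

# The gateway is the third field of the default-route line.
gateway=$(echo "$route_line" | awk '$1 == "default" { print $3 }')
echo "http://${gateway}:11434/v1"   # -> http://172.28.0.1:11434/v1
```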
20.4 Resource limits
Each VM runs in its own cgroup v2 leaf with:
- `cpu.max` — capped at the vCPU count
- `memory.max` — capped at the requested RAM
- `memory.swap.max` — set to 0 (no swap)
- `pids.max` — default 512, configurable via `SRV_VM_PIDS_MAX`
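As an illustration of how a vCPU cap is typically expressed in `cpu.max`: the file holds a quota/period pair in microseconds, and a quota of vCPUs times the period caps the group at the vCPU count. The 100 ms period below is the cgroup v2 default, assumed here for illustration (srv's exact period is not documented here):

```shell
# cpu.max holds "<quota> <period>" in microseconds; quota = vCPUs * period
# caps the cgroup at the vCPU count. 100000 us is the cgroup v2 default
# period, assumed here for illustration.
vcpus=4
period=100000
quota=$(( vcpus * period ))
echo "$quota $period"   # -> 400000 100000
```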
This prevents a misbehaving agent from consuming the entire host.
20.5 Clean up
ssh srv delete agent-1
20.6 Multiple agents
Create as many agent VMs as the host can hold. Each gets independent networking, identity, and resource limits:
```
ssh srv new agent-2 --cpus 2 --ram 4G
ssh srv new agent-3 --cpus 2 --ram 4G
```
Use ssh srv status to check remaining host capacity before creating more.
21. Dev/test environment
srv's reflink-based cloning makes it fast to spin up a VM from a base image, modify it, and then use backup/restore to reset it to a known-good state.
21.1 Create a dev VM
ssh srv new dev --cpus 4 --ram 8G --rootfs-size 30G
21.2 Install your toolchain
```
ssh root@dev

# Inside the VM
pacman -Syu
pacman -S nodejs npm
# ... set up your project ...
```
21.3 Back up the clean state
Once the VM is configured the way you like, take a checkpoint before making risky changes:
```
ssh srv stop dev
ssh srv backup create dev
ssh srv start dev
```
21.4 Wipe and reset
After a bad experiment:
```
ssh srv stop dev
ssh srv backup list dev
ssh srv restore dev <backup-id>
ssh srv start dev
```
The restore rolls the rootfs back to the exact state captured at backup time. This is fast because it replaces the writable disk image rather than copying file by file.
21.5 Repeat
You can create multiple backups at different points:
```
ssh srv stop dev
ssh srv backup create dev   # backup 1: clean
ssh srv start dev

# ... make changes ...

ssh srv stop dev
ssh srv backup create dev   # backup 2: with toolchain
ssh srv start dev
```
Then restore whichever snapshot you need:
```
ssh srv stop dev
ssh srv restore dev <backup-id>
ssh srv start dev
```
21.6 Key constraints
- Backups and restores only work on stopped VMs
- Backups are tied to the original VM record — they cannot be restored onto a differently created VM even if the name is reused
- Restore is in-place: it overwrites the VM's current rootfs with the backup's rootfs
22. Isolated service
Each srv VM runs in its own cgroup v2 leaf with its own /30 network subnet, TAP device, and Tailscale identity. This makes it straightforward to run services that need full isolation — container runtimes, network services, databases — without sharing the host's PID, network, or filesystem namespace.
22.1 Run a database
ssh srv new db --cpus 2 --ram 4G --rootfs-size 20G
Once the VM is ready:
```
ssh root@db
pacman -S postgresql
# ... configure and start postgresql ...
```
The database is now reachable at its Tailscale IP from any other machine on the tailnet.
22.2 Run Docker workloads
The guest image includes Docker and docker-compose:
ssh srv new builder --cpus 4 --ram 8G --rootfs-size 40G
```
ssh root@builder
docker run ...
docker compose up -d
```
The overlay and br_netfilter kernel modules are preloaded, and nftables supports both IPv4 and IPv6 families.
22.3 Per-VM resource isolation
Each VM is enforced by cgroup v2:
| Resource | Limit |
|---|---|
| CPU | vCPU count (advisory, allows overcommit) |
| Memory | Requested RAM, no swap |
| PIDs | 512 by default (SRV_VM_PIDS_MAX) |
You can verify the live cgroup limits:
```
cat /sys/fs/cgroup/firecracker-vms/<name>/cpu.max
cat /sys/fs/cgroup/firecracker-vms/<name>/memory.max
cat /sys/fs/cgroup/firecracker-vms/<name>/memory.swap.max
cat /sys/fs/cgroup/firecracker-vms/<name>/pids.max
```
22.4 Per-VM networking
Each VM gets its own /30 subnet, TAP device, and NAT rules. VMs cannot reach each other's private networks. They can reach the host and the internet through MASQUERADE rules.
From any tailnet machine, you can SSH directly to the VM's Tailscale name or IP — no port forwarding needed.
22.5 Clean separation
When the service is no longer needed:
ssh srv delete db
The rootfs, TAP device, cgroup, jailer workspace, and iptables rules are all cleaned up.
23. Throwaway debug VM
One of the most immediate uses for srv is spinning up an isolated Linux environment where you can install packages, run risky commands, or reproduce a bug — then delete it without any trace on the host.
23.1 Create
ssh srv new debug-vm
By default the VM gets 1 vCPU, 1 GiB of RAM, and a 10 GiB rootfs. Adjust if you need more:
ssh srv new debug-vm --cpus 2 --ram 4G --rootfs-size 30G
23.2 Watch it boot
ssh srv logs -f debug-vm serial
You will see the kernel boot, `srv-bootstrap.service` set up networking and Tailscale, and `tailscale up --ssh` complete. Once `inspect` reports `state: ready`:
ssh srv inspect debug-vm
Look for the tailscale-name and tailscale-ip fields.
23.3 Connect and use
ssh root@debug-vm
The guest image comes with Docker, Go, Neovim, Git, perf, valgrind, and common development tools preinstalled. Install anything else with pacman -S.
23.4 Clean up
When you are done:
ssh srv delete debug-vm
This removes the VM's rootfs, runtime directory, TAP device, cgroup, and jailer workspace. No trace remains on the host.
23.5 Tips
- Use `-- --json inspect <name>` to get machine-readable output for scripting
- The serial log under `ssh srv logs <name> serial` is append-only — always check the newest lines
- If a VM gets stuck in `provisioning` or `failed`, check `ssh srv inspect <name>` for the `last_error` field and `ssh srv logs <name> firecracker` for VMM errors
24. Networking overview
Every srv VM gets its own isolated network stack: a dedicated /30 subnet, TAP device, and NAT rules. Guest egress is routed through the host's outbound interface after MASQUERADE.
24.1 How it works
```
             Internet
                │
                │ (host outbound interface)
                │
┌───────────────┴─────────┐
│ Linux host              │
│  ┌──────────────────┐   │
│  │ iptables MASQ    │   │
│  │ + FORWARD rules  │   │
│  └────────┬─────────┘   │
│           │             │
│  ┌────────┴─────────┐   │
│  │ TAP device       │   │
│  │ (per-VM /30)     │   │
│  └────────┬─────────┘   │
│           │             │
│  ┌────────┴─────────┐   │
│  │ Firecracker VM   │   │
│  │ gateway = .1     │   │
│  │ guest = .2       │   │
│  └──────────────────┘   │
└─────────────────────────┘
```
When ssh srv new demo runs, the network helper:
- Allocates the next free `/30` from `SRV_VM_NETWORK_CIDR` (default `172.28.0.0/16`)
- Creates a TAP device for the VM
- Installs MASQUERADE and FORWARD rules for guest egress
- Configures the gateway address on the host side
The VM's bootstrap configures the guest interface with:
- Gateway: the host-side gateway address (first usable IP in the `/30`)
- Guest IP: second usable IP in the `/30`
- DNS: configurable via `SRV_VM_DNS` (default `1.1.1.1, 1.0.0.1`)
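A `/30` spans four addresses: network, two usable hosts, and broadcast, which is why the gateway and guest land on the first and second usable IPs. Illustrative arithmetic for the Nth `/30` inside the default `172.28.0.0/16` (a sketch of the addressing scheme, not srv's actual allocator):

```shell
# Compute the gateway and guest IPs of the Nth /30 inside 172.28.0.0/16.
# Each /30 spans 4 addresses: network, gateway (host side), guest, broadcast.
nth_subnet() {
  n=$1
  base=$(( (172 << 24) | (28 << 16) ))   # 172.28.0.0 as a 32-bit integer
  net=$(( base + n * 4 ))
  gw=$(( net + 1 ))
  guest=$(( net + 2 ))
  printf '%d.%d.%d.%d ' $(( gw >> 24 & 255 )) $(( gw >> 16 & 255 )) $(( gw >> 8 & 255 )) $(( gw & 255 ))
  printf '%d.%d.%d.%d\n' $(( guest >> 24 & 255 )) $(( guest >> 16 & 255 )) $(( guest >> 8 & 255 )) $(( guest & 255 ))
}

nth_subnet 0   # -> 172.28.0.1 172.28.0.2
nth_subnet 1   # -> 172.28.0.5 172.28.0.6
```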
24.2 Tailscale integration
Each VM gets its own Tailscale identity. The control plane mints a one-off auth key and injects it through Firecracker MMDS metadata. The guest bootstrap service:
- Reads the MMDS payload
- Starts `tailscaled`
- Runs `tailscale up --auth-key=... --ssh` on the first authenticated boot
- Persists `tailscaled` state for warm reboots
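The extraction step of that bootstrap can be sketched as follows. The payload shape used here (`{"tailscale":{"authkey":...}}`) is an assumption for illustration — srv's actual MMDS schema is not shown in this section — but the flow is the same: fetch the metadata document from the MMDS address, pull out the one-off auth key, and hand it to `tailscale up`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mmdsPayload is a hypothetical shape for srv's MMDS metadata document.
type mmdsPayload struct {
	Tailscale struct {
		AuthKey string `json:"authkey"`
	} `json:"tailscale"`
}

// authKeyFromMMDS extracts the one-off Tailscale auth key from the raw
// MMDS document. In the guest, the document would be fetched with an HTTP
// GET against the MMDS link-local address (169.254.169.254) before
// tailscaled starts.
func authKeyFromMMDS(raw []byte) (string, error) {
	var p mmdsPayload
	if err := json.Unmarshal(raw, &p); err != nil {
		return "", err
	}
	if p.Tailscale.AuthKey == "" {
		return "", fmt.Errorf("mmds payload missing tailscale auth key")
	}
	return p.Tailscale.AuthKey, nil
}

func main() {
	raw := []byte(`{"tailscale":{"authkey":"tskey-example"}}`)
	key, err := authKeyFromMMDS(raw)
	if err != nil {
		panic(err)
	}
	// The bootstrap would then exec: tailscale up --auth-key=<key> --ssh
	fmt.Println("would run: tailscale up --auth-key=" + key + " --ssh")
}
```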
This means any machine on the tailnet can reach the VM by its Tailscale name or IP — no port forwarding needed.
24.2.1 SSH access
Guests expose SSH through Tailscale's --ssh flag, so ssh root@<tailscale-name> works from any tailnet machine. Per-user OpenSSH keys are not injected — Tailscale SSH handles authentication based on tailnet identity.
24.3 Configuration
| Variable | Default | Description |
|---|---|---|
| `SRV_VM_NETWORK_CIDR` | `172.28.0.0/16` | IPv4 network reserved for VM `/30` allocations |
| `SRV_VM_DNS` | `1.1.1.1,1.0.0.1` | Comma-separated guest nameservers |
| `SRV_OUTBOUND_IFACE` | auto-detected | Optional override for the host interface used for NAT |
24.4 IPv4 forwarding
Guest NAT depends on IP forwarding:
```sh
sudo tee /etc/sysctl.d/90-srv-ip-forward.conf >/dev/null <<'EOF'
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```
This must stay enabled. Disabling it breaks guest egress.
24.5 Network cleanup
When a VM is deleted, the network helper removes:
- The TAP device
- The MASQUERADE rule
- The FORWARD rule
- The gateway address from the host interface
24.6 Host-side API gateways
srv can also expose host-side HTTP gateways on each VM's gateway IP:
- The Zen gateway on `SRV_ZEN_GATEWAY_PORT` proxies `/v1/...` to the configured OpenCode Zen upstream with the host API key injected.
- The generic integration gateway on `SRV_INTEGRATION_GATEWAY_PORT` proxies `/integrations/<name>/...` to operator-defined HTTP integrations with host-managed auth or headers injected.
Both gateway types only accept requests from the owning guest IP. See Zen gateway and HTTP integrations.
25. Zen gateway
When SRV_ZEN_API_KEY is configured on the host, srv sets up a per-instance HTTP proxy that allows guest VMs to reach the OpenCode Zen API without storing the real API key inside the guest.
25.1 How it works
```
┌──────────────────────────────────────────────┐
│ Host                                         │
│                                              │
│  ┌───────────────┐   ┌────────────────────┐  │
│  │ srv control   │   │ Zen gateway        │  │
│  │ plane         │   │ :11434 on          │  │
│  └───────────────┘   │ gateway IP         │  │
│                      │                    │  │
│                      │ injects            │  │
│                      │ SRV_ZEN_API_KEY    │  │
│                      │ into Authorization │  │
│                      │ header             │  │
│                      └─────────┬──────────┘  │
│                                │             │
│                   ┌────────────┴──────────┐  │
│                   │ upstream:             │  │
│                   │ opencode.ai/zen/v1    │  │
│                   └───────────────────────┘  │
└──────────────────────────────────────────────┘
                 │
                 │ /30 network
                 │
       ┌─────────┴─────────┐
       │ Guest VM          │
       │                   │
       │ opencode ->       │
       │ http://gateway    │
       │   :11434/v1       │
       └───────────────────┘
```
For each VM, srv binds an HTTP proxy on the VM's host-side gateway IP and the configured `SRV_ZEN_GATEWAY_PORT` (default `11434`). The proxy:
- Accepts requests only from that VM's guest IP
- Forwards `/v1/...` requests to the upstream Zen API
- Injects the host's `SRV_ZEN_API_KEY` into the `Authorization` header

The real key never leaves the host.
25.2 Guest bootstrap
When the Zen gateway is enabled, the guest srv-bootstrap.service writes /root/.config/opencode/opencode.json targeting the per-VM gateway:
```json
{
  "provider": "opencode",
  "apiKey": "local-placeholder",
  "baseURL": "http://<gateway-ip>:11434/v1"
}
```
The apiKey is a local placeholder only so OpenCode keeps Zen's paid model catalog visible — the real credential still lives only on the host and is injected by the proxy.
Bootstrap also writes Pi config under /root/.pi/agent/ so the preinstalled pi CLI uses the same gateway by default:
```json
{
  "providers": {
    "opencode": {
      "baseUrl": "http://<gateway-ip>:11434/v1",
      "apiKey": "srv-zen-gateway"
    }
  }
}
```
When the gateway is disabled (no SRV_ZEN_API_KEY), bootstrap removes those managed default config files.
25.3 Configuration
| Variable | Default | Description |
|---|---|---|
| `SRV_ZEN_API_KEY` | (empty) | OpenCode Zen API key. When set, enables per-VM gateways. |
| `SRV_ZEN_BASE_URL` | `https://opencode.ai/zen` | Upstream Zen API base URL |
| `SRV_ZEN_GATEWAY_PORT` | `11434` | TCP port for each VM's gateway proxy |
25.4 Using the gateway from the guest
The preinstalled opencode and pi CLIs work out of the box. For other agents or HTTP clients:
```sh
# Inside the VM
curl http://$(ip route show default | awk '{print $3}'):11434/v1/models
```
25.5 Disabling the gateway
Remove or leave SRV_ZEN_API_KEY unset. After the next guest boot, the bootstrap service will remove the managed OpenCode and Pi config files.
26. srv Cheatsheet
Quick reference for the srv control plane.
26.1 Commands (via SSH)
```sh
ssh srv -- [--json] <command> [args]
```
Use --json with the non-streaming instance and backup commands when you need machine-readable output.
| Command | Description |
|---|---|
| `new <name>` | Create a new VM with optional `--cpus`, `--ram`, `--rootfs-size` |
| `new <name> --integration <name>` | Create a VM and enable one or more existing integrations (admin only) |
| `list` | Show visible VMs (all for admins, own for regular users) |
| `top [--interval DURATION]` | Live per-VM CPU, memory, disk, and network view; press `q` to exit |
| `status` | Admin-only host capacity and allocation summary |
| `inspect <name>` | Show VM details and status |
| `logs <name>` | View serial or firecracker logs |
| `start <name>` | Start a stopped VM |
| `stop <name>` | Stop a VM (graceful shutdown) |
| `restart <name>` | Restart a VM |
| `delete <name>` | Remove a VM |
| `resize <name>` | Resize a stopped VM (CPU/RAM up or down, rootfs grow-only) |
| `backup create <name>` | Create an in-place backup for a stopped VM |
| `backup list <name>` | List stored backups for a VM |
| `restore <name> <backup-id>` | Restore a stopped VM from one of its backups |
26.2 Integrations (admin only)
| Command | Description |
|---|---|
| `integration list` | List configured integrations |
| `integration add http <name> --target <url> ...` | Create an HTTP integration with host-managed auth or headers |
| `integration inspect <name>` | Show target, auth mode, headers, and timestamps |
| `integration enable <vm> <name>` | Enable an integration for a VM |
| `integration disable <vm> <name>` | Disable an integration for a VM |
| `integration list-enabled <vm>` | Show integrations enabled for a VM |
26.3 Quick Examples
```sh
# Create VM
ssh srv new demo
ssh srv status
ssh srv -- --json inspect demo

# With sizing
ssh srv new demo --cpus 4 --ram 8G --rootfs-size 20G

# Integrations
ssh srv integration add http openai --target https://api.openai.com/v1 --bearer-env SRV_SECRET_OPENAI_PROD
ssh srv integration enable demo openai
ssh srv inspect demo

# Resize (must be stopped)
ssh srv stop demo
ssh srv resize demo --cpus 4 --ram 8G
ssh srv start demo

# Backup and restore (VM must be stopped)
ssh srv stop demo
ssh srv backup create demo
ssh srv backup list demo
ssh srv restore demo <backup-id>

# View logs
ssh srv logs demo
ssh srv logs demo serial
ssh srv logs demo firecracker
ssh srv logs -f demo serial
ssh srv logs -f demo firecracker

# Live VM usage
ssh -t srv top
ssh -t srv top --interval 2s
```
26.4 Systemd Management
```sh
# Status
sudo systemctl status srv srv-net-helper srv-vm-runner

# Restart
sudo systemctl stop srv srv-net-helper srv-vm-runner
sleep 5
sudo systemctl start srv-vm-runner srv-net-helper srv

# View logs
sudo journalctl -u srv -f
sudo journalctl -u srv-vm-runner -f
```
26.5 Smoke Test
sudo ./contrib/smoke/host-smoke.sh
26.6 Build Artifacts
```sh
# Binaries
go build ./cmd/srv
go build ./cmd/srv-net-helper
go build ./cmd/srv-vm-runner

# Guest image
sudo OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh

# Install
sudo ./contrib/systemd/install.sh
sudoedit /etc/srv/srv.env
sudo ./contrib/systemd/install.sh --enable-now
```
26.7 Key Env Variables
| Variable | Default | Description |
|---|---|---|
| `SRV_DATA_DIR` | `/var/lib/srv` | State directory on the same reflink-capable filesystem as `SRV_BASE_ROOTFS`, for example btrfs or reflink-enabled xfs |
| `SRV_BASE_KERNEL` | - | Firecracker kernel path |
| `SRV_BASE_ROOTFS` | - | Base rootfs image on the same reflink-capable filesystem as `SRV_DATA_DIR`, such as btrfs or reflink-enabled xfs |
| `SRV_BASE_INITRD` | - | Optional initrd |
| `SRV_ALLOWED_USERS` | - | Comma-separated Tailscale allowlist |
| `SRV_ADMIN_USERS` | - | Cross-instance admin access |
| `SRV_GUEST_AUTH_TAGS` | - | Tags for guest auth keys |
| `TS_TAILNET` | - | Tailnet name |
| `SRV_FIRECRACKER_BIN` | `/usr/bin/firecracker` | Firecracker binary |
| `SRV_JAILER_BIN` | `/usr/bin/jailer` | Jailer binary |
| `SRV_INTEGRATION_GATEWAY_PORT` | `11435` | Per-VM host-side HTTP integration gateway port |
| `SRV_SECRET_*` | - | Host-managed integration secret values referenced by name from the SSH API |
26.8 Backup & Restore
```sh
# Backup
sudo systemctl stop srv srv-net-helper srv-vm-runner
sudo tar -czf backup.tar.gz /etc/srv /var/lib/srv

# Restore
sudo ./contrib/systemd/install.sh
# restore /etc/srv/srv.env and /var/lib/srv
sudo systemctl daemon-reload
sudo systemctl enable --now srv-vm-runner srv-net-helper srv
sudo ./contrib/smoke/host-smoke.sh
```
26.9 Debug Commands
```sh
# VM inspection
ssh srv inspect <name>

# Recent logs
ssh srv logs <name> serial
ssh srv logs <name> firecracker

# System logs
journalctl -u srv-vm-runner | tail

# Check cgroups
cat /sys/fs/cgroup/firecracker-vms/<name>/cpu.max
cat /sys/fs/cgroup/firecracker-vms/<name>/memory.max
```
26.10 Notes
- VM disks live at `SRV_DATA_DIR/instances/<name>/rootfs.img`
- VM backups live at `SRV_DATA_DIR/backups/<name>/<backup-id>/`
SRV_DATA_DIR/backups/<name>/<backup-id>/ - Resize requires a stopped VM; CPU and RAM can go up or down within limits, but rootfs shrink is rejected
- Backup and restore only work on stopped VMs and only restore onto the original VM record, not a newly recreated VM with the same name
- Creators manage their own VMs; admins manage all VMs
- Integration commands are admin-only, and integration secrets are referenced by host env name rather than passed as raw SSH arguments
- Warm start/restart reuses tailscaled state
- Host reboot auto-restarts active VMs
27. Appendices
27.1 Command index
Command examples extracted from the docs. Each entry links back to the sections where it appears.
- `cat /sys/fs/cgroup/firecracker-vms/<name>/cpu.max` — Seen in Isolated service, srv Cheatsheet, Troubleshooting
- `cat /sys/fs/cgroup/firecracker-vms/<name>/memory.max` — Seen in Isolated service, srv Cheatsheet, Troubleshooting
- `cat /sys/fs/cgroup/firecracker-vms/<name>/memory.swap.max` — Seen in Isolated service, Troubleshooting
- `cat /sys/fs/cgroup/firecracker-vms/<name>/pids.max` — Seen in Isolated service, Troubleshooting
- `cat demo-backup.tar | ssh srv import` — Seen in Export and import
- `curl http://$(ip route show default | awk '{print $3}'):11434/v1/models` — Seen in Zen gateway
- `curl http://$(ip route show default | awk '{print $3}'):11435/integrations/openai/models` — Seen in HTTP integrations
- `curl http://$(ip route show default | awk '{print $3}'):11435/integrations/vendor/ping` — Seen in HTTP integrations
- `df -h` — Seen in Troubleshooting
- `df -h /var/lib/srv` — Seen in Troubleshooting
- `docker compose up -d` — Seen in Isolated service
- `docker run ...` — Seen in Isolated service
- `go build ./cmd/srv` — Seen in Installation, srv Cheatsheet
- `go build ./cmd/srv-net-helper` — Seen in Installation, srv Cheatsheet
- `go build ./cmd/srv-vm-runner` — Seen in Installation, srv Cheatsheet
- `journalctl -u srv-vm-runner | tail` — Seen in srv Cheatsheet
- `OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh` — Seen in Build a custom guest image, Installation
- `pacman -S nodejs npm` — Seen in Dev/test environment
- `pacman -S postgresql` — Seen in Isolated service
- `pacman -Sy --noconfirm archlinux-keyring` — Seen in Build a custom guest image, Installation
- `pacman -Syu` — Seen in Dev/test environment
- `pacman -Syu --noconfirm arch-install-scripts base-devel bc e2fsprogs rsync curl systemd` — Seen in Build a custom guest image, Installation
- `ROOTFS_SIZE=20G sudo OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh` — Seen in Build a custom guest image
- `ssh -t srv top` — Seen in srv Cheatsheet
- `ssh -t srv top --interval 2s` — Seen in srv Cheatsheet
- `ssh root@agent-1` — Seen in Sandboxed AI coding agent
- `ssh root@builder` — Seen in Isolated service
- `ssh root@db` — Seen in Isolated service
- `ssh root@debug-vm` — Seen in Throwaway debug VM
- `ssh root@demo` — Seen in Walkthrough
- `ssh root@demo # install packages, configure services` — Seen in Backup and restore
- `ssh root@dev` — Seen in Dev/test environment
- `ssh srv -- --json inspect <name>` — Seen in Instance lifecycle
- `ssh srv -- --json inspect demo` — Seen in srv Cheatsheet, SSH command reference, Walkthrough
- `ssh srv -- --json list` — Seen in SSH command reference, Walkthrough
- `ssh srv -- --json status` — Seen in SSH command reference
- `ssh srv -- [--json] <command> [args]` — Seen in srv Cheatsheet
- `ssh srv <command> [args]` — Seen in SSH command reference
- `ssh srv backup create demo` — Seen in Backup and restore, srv Cheatsheet, Walkthrough
- `ssh srv backup create dev` — Seen in Dev/test environment
- `ssh srv backup create dev # backup 1: clean` — Seen in Dev/test environment
- `ssh srv backup create dev # backup 2: with toolchain` — Seen in Dev/test environment
- `ssh srv backup list demo` — Seen in Backup and restore, srv Cheatsheet, Walkthrough
- `ssh srv backup list dev` — Seen in Dev/test environment
- `ssh srv delete <name>` — Seen in Instance lifecycle
- `ssh srv delete agent-1` — Seen in Sandboxed AI coding agent
- `ssh srv delete db` — Seen in Isolated service
- `ssh srv delete debug-vm` — Seen in Throwaway debug VM
- `ssh srv delete demo` — Seen in Walkthrough
- `ssh srv export demo` — Seen in Export and import
- `ssh srv export demo > demo-backup.tar` — Seen in Export and import
- `ssh srv inspect <name>` — Seen in Instance lifecycle, srv Cheatsheet, Troubleshooting
- `ssh srv inspect agent-1` — Seen in Sandboxed AI coding agent
- `ssh srv inspect debug-vm` — Seen in Throwaway debug VM
- `ssh srv inspect demo` — Seen in HTTP integrations, srv Cheatsheet, Walkthrough
- `ssh srv integration add http openai --target https://api.openai.com/v1 --bearer-env SRV_SECRET_OPENAI_PROD` — Seen in HTTP integrations, srv Cheatsheet
- `ssh srv integration add http vendor --target https://vendor.example/api --header-env X-API-Key:SRV_SECRET_VENDOR_API_KEY` — Seen in HTTP integrations
- `ssh srv integration enable demo openai` — Seen in HTTP integrations, srv Cheatsheet
- `ssh srv integration enable demo vendor` — Seen in HTTP integrations
- `ssh srv list` — Seen in Instance lifecycle, Walkthrough
- `ssh srv logs -f <name> firecracker` — Seen in Instance lifecycle
- `ssh srv logs -f <name> serial` — Seen in Instance lifecycle
- `ssh srv logs -f debug-vm serial` — Seen in Throwaway debug VM
- `ssh srv logs -f demo firecracker` — Seen in srv Cheatsheet, View logs
- `ssh srv logs -f demo serial` — Seen in srv Cheatsheet, View logs, Walkthrough
- `ssh srv logs <name>` — Seen in Instance lifecycle
- `ssh srv logs <name> firecracker` — Seen in Instance lifecycle, srv Cheatsheet, Troubleshooting
- `ssh srv logs <name> serial` — Seen in Instance lifecycle, srv Cheatsheet, Troubleshooting
- `ssh srv logs demo` — Seen in srv Cheatsheet, View logs, Walkthrough
- `ssh srv logs demo firecracker` — Seen in srv Cheatsheet, View logs, Walkthrough
- `ssh srv logs demo serial` — Seen in srv Cheatsheet, View logs, Walkthrough
- `ssh srv new <name>` — Seen in Instance lifecycle
- `ssh srv new <name> --cpus <n> --ram <size> --rootfs-size <size>` — Seen in Instance lifecycle
- `ssh srv new agent-1 --cpus 4 --ram 8G --rootfs-size 30G` — Seen in Sandboxed AI coding agent
- `ssh srv new agent-2 --cpus 2 --ram 4G` — Seen in Sandboxed AI coding agent
- `ssh srv new agent-3 --cpus 2 --ram 4G` — Seen in Sandboxed AI coding agent
- `ssh srv new builder --cpus 4 --ram 8G --rootfs-size 40G` — Seen in Isolated service
- `ssh srv new db --cpus 2 --ram 4G --rootfs-size 20G` — Seen in Isolated service
- `ssh srv new debug-vm` — Seen in Throwaway debug VM
- `ssh srv new debug-vm --cpus 2 --ram 4G --rootfs-size 30G` — Seen in Throwaway debug VM
- `ssh srv new demo` — Seen in Backup and restore, Introduction to srv, srv Cheatsheet, Walkthrough
- `ssh srv new demo --cpus 4 --ram 8G --rootfs-size 20G` — Seen in srv Cheatsheet, Walkthrough
- `ssh srv new dev --cpus 4 --ram 8G --rootfs-size 30G` — Seen in Dev/test environment
- `ssh srv resize <name> --rootfs-size 20G` — Seen in Troubleshooting
- `ssh srv resize demo --cpus 4 --ram 8G` — Seen in srv Cheatsheet
- `ssh srv resize demo --cpus 4 --ram 8G --rootfs-size 20G` — Seen in Resize a VM, Walkthrough
- `ssh srv resize demo --ram 16G` — Seen in Resize a VM
- `ssh srv restart <name>` — Seen in Instance lifecycle
- `ssh srv restart demo` — Seen in Walkthrough
- `ssh srv restore demo <backup-id>` — Seen in Backup and restore, srv Cheatsheet, Walkthrough
- `ssh srv restore dev <backup-id>` — Seen in Dev/test environment
- `ssh srv snapshot create` — Seen in Host snapshots, Operations runbook
- `ssh srv start <name>` — Seen in Instance lifecycle, Troubleshooting
- `ssh srv start demo` — Seen in Backup and restore, Resize a VM, srv Cheatsheet, Walkthrough
- `ssh srv start dev` — Seen in Dev/test environment
- `ssh srv status` — Seen in Instance lifecycle, srv Cheatsheet
- `ssh srv stop <name>` — Seen in Instance lifecycle, Troubleshooting
- `ssh srv stop demo` — Seen in Backup and restore, Export and import, Resize a VM, srv Cheatsheet, Walkthrough
- `ssh srv stop dev` — Seen in Dev/test environment
- `ssh srv-a export demo | ssh srv-b import` — Seen in Export and import, Operations runbook, Walkthrough
- `ssh srv-b start demo` — Seen in Export and import
- `sudo ./contrib/smoke/host-smoke.sh` — Seen in Installation, Operations runbook, Running as a daemon, srv Cheatsheet, Troubleshooting
- `sudo ./contrib/systemd/install.sh` — Seen in Installation, Operations runbook, srv Cheatsheet
- `sudo ./contrib/systemd/install.sh --enable-now` — Seen in Installation, srv Cheatsheet
- `sudo btrfs send SRV_DATA_DIR/.snapshots/<timestamp> | ssh backup-host btrfs receive /backup/srv/` — Seen in Host snapshots
- `sudo iptables -L FORWARD -n` — Seen in Troubleshooting
- `sudo iptables -t nat -L -n` — Seen in Troubleshooting
- `sudo journalctl -u srv -f` — Seen in Running as a daemon, srv Cheatsheet, View logs
- `sudo journalctl -u srv-net-helper -f` — Seen in View logs
- `sudo journalctl -u srv-vm-runner -f` — Seen in Running as a daemon, srv Cheatsheet, Troubleshooting, View logs
- `sudo KERNEL_JOBS=2 OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh` — Seen in Build a custom guest image
- `sudo KERNEL_VERSION=6.1.167 OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh` — Seen in Build a custom guest image
- `sudo OUTPUT_DIR=/var/lib/srv/images/arch-base ./images/arch-base/build.sh` — Seen in Build a custom guest image, Installation, srv Cheatsheet
- `sudo podman run --rm --privileged --network host -v "$PWD":/work -v /var/lib/srv/images/arch-base:/var/lib/srv/images/arch-base -w /work docker.io/library/archlinux:latest bash -lc '` — Seen in Build a custom guest image, Installation
- `sudo sysctl --system` — Seen in Installation, Networking overview, Operations runbook
- `sudo sysctl -w net.ipv4.ip_forward=1` — Seen in Troubleshooting
- `sudo systemctl daemon-reload` — Seen in Operations runbook, srv Cheatsheet
- `sudo systemctl enable --now srv-vm-runner srv-net-helper srv` — Seen in Operations runbook, srv Cheatsheet
- `sudo systemctl start srv-vm-runner srv-net-helper srv` — Seen in Operations runbook, Running as a daemon, srv Cheatsheet
- `sudo systemctl status srv srv-net-helper srv-vm-runner` — Seen in Running as a daemon, srv Cheatsheet
- `sudo systemctl status srv-net-helper` — Seen in Troubleshooting
- `sudo systemctl stop srv srv-net-helper srv-vm-runner` — Seen in Operations runbook, Running as a daemon, srv Cheatsheet
- `sudo tar --xattrs --acls --numeric-owner --ignore-failed-read -C / -czf /var/tmp/srv-backup-$(date -u +%Y%m%dT%H%M%SZ).tar.gz etc/srv var/lib/srv etc/systemd/system/srv.service.d etc/systemd/system/srv-net-helper.service.d etc/systemd/system/srv-vm-runner.service.d etc/systemd/system.control` — Seen in Operations runbook
- `sudo tar -czf backup.tar.gz /etc/srv /var/lib/srv` — Seen in srv Cheatsheet
- `sudo tee /etc/sysctl.d/90-srv-ip-forward.conf >/dev/null <<'EOF'` — Seen in Installation, Networking overview, Operations runbook
- `sudoedit /etc/srv/srv.env` — Seen in Installation, srv Cheatsheet
- `sysctl net.ipv4.ip_forward` — Seen in Troubleshooting
27.2 Environment variables
Environment variables and build-time overrides mentioned across the handbook, indexed back to the sections that reference them.
- `ARTIFACT_ROOT` — Seen in Troubleshooting
- `ENV_PATH` — Seen in Running as a daemon, Troubleshooting
- `GUEST_SSH_READY_TIMEOUT` — Seen in Troubleshooting
- `INSTANCE_NAME` — Seen in Running as a daemon, Troubleshooting
- `KEEP_FAILED` — Seen in Running as a daemon, Troubleshooting
- `KERNEL_JOBS` — Seen in Build a custom guest image
- `KERNEL_VERSION` — Seen in Build a custom guest image
- `OUTPUT_DIR` — Seen in Build a custom guest image, Installation, srv Cheatsheet
- `READY_TIMEOUT_SECONDS` — Seen in Running as a daemon, Troubleshooting
- `ROOTFS_SIZE` — Seen in Build a custom guest image
- `SMOKE_SSH_HOST` — Seen in Running as a daemon, Troubleshooting
- `SRV_ADMIN_USERS` — Seen in Authorization, Configuration reference, srv Cheatsheet
- `SRV_ALLOWED_USERS` — Seen in Authorization, Configuration reference, srv Cheatsheet
- `SRV_BASE_INITRD` — Seen in Build a custom guest image, Configuration reference, Export and import, Instance lifecycle, Operations runbook, srv Cheatsheet
- `SRV_BASE_KERNEL` — Seen in Build a custom guest image, Configuration reference, Export and import, Installation, Instance lifecycle, Operations runbook, srv Cheatsheet
- `SRV_BASE_ROOTFS` — Seen in Build a custom guest image, Configuration reference, Installation, Operations runbook, srv Cheatsheet
- `SRV_DATA_DIR` — Seen in Architecture, Backup and restore, Configuration reference, Host snapshots, Installation, Introduction to srv, Networking overview, Operations runbook, srv Cheatsheet, SSH command reference, Troubleshooting, View logs
- `SRV_EXTRA_KERNEL_ARGS` — Seen in Configuration reference
- `SRV_FIRECRACKER_BIN` — Seen in Configuration reference, Operations runbook, Running as a daemon, srv Cheatsheet
- `SRV_GUEST_AUTH_EXPIRY` — Seen in Configuration reference
- `SRV_GUEST_AUTH_TAGS` — Seen in Configuration reference, Installation, srv Cheatsheet, Troubleshooting
- `SRV_GUEST_READY_TIMEOUT` — Seen in Configuration reference
- `SRV_GUEST_TAILSCALE_CONTROL_URL` — Seen in Configuration reference
- `SRV_HOSTNAME` — Seen in Architecture, Configuration reference
- `SRV_INTEGRATION_GATEWAY_PORT` — Seen in Configuration reference, HTTP integrations, Networking overview, srv Cheatsheet
- `SRV_JAILER_BASE_DIR` — Seen in Configuration reference, Operations runbook, Troubleshooting
- `SRV_JAILER_BIN` — Seen in Configuration reference, Operations runbook, Running as a daemon, srv Cheatsheet
- `SRV_LISTEN_ADDR` — Seen in Architecture, Configuration reference
- `SRV_LOG_LEVEL` — Seen in Configuration reference
- `SRV_NET_HELPER_SOCKET` — Seen in Configuration reference
- `SRV_OUTBOUND_IFACE` — Seen in Configuration reference, Networking overview
- `SRV_SECRET_*` — Seen in Configuration reference, HTTP integrations, srv Cheatsheet
- `SRV_SECRET_BAR` — Seen in HTTP integrations, SSH command reference
- `SRV_SECRET_FOO` — Seen in HTTP integrations, SSH command reference
- `SRV_SECRET_OPENAI_PROD` — Seen in Configuration reference, HTTP integrations, srv Cheatsheet
- `SRV_SECRET_VENDOR_API_KEY` — Seen in HTTP integrations
- `SRV_VM_DNS` — Seen in Configuration reference, Networking overview
- `SRV_VM_MEMORY_MIB` — Seen in Configuration reference
- `SRV_VM_NETWORK_CIDR` — Seen in Configuration reference, Networking overview
- `SRV_VM_PIDS_MAX` — Seen in Configuration reference, Isolated service, Sandboxed AI coding agent
- `SRV_VM_RUNNER_SOCKET` — Seen in Configuration reference
- `SRV_VM_VCPUS` — Seen in Configuration reference
- `SRV_ZEN_API_KEY` — Seen in Architecture, Configuration reference, Guest image reference, Sandboxed AI coding agent, Zen gateway
- `SRV_ZEN_BASE_URL` — Seen in Configuration reference, Zen gateway
- `SRV_ZEN_GATEWAY_PORT` — Seen in Architecture, Configuration reference, Networking overview, Sandboxed AI coding agent, Zen gateway
- `TS_API_BASE_URL` — Seen in Configuration reference
- `TS_AUTHKEY` — Seen in Configuration reference, Installation
- `TS_CLIENT_ID` — Seen in Configuration reference, Installation
- `TS_CLIENT_SECRET` — Seen in Configuration reference, Installation
- `TS_CONTROL_URL` — Seen in Configuration reference
- `TS_TAILNET` — Seen in Configuration reference, Installation, srv Cheatsheet