# Storage Pools
Server Mode lets you build, inspect, and snapshot storage pools directly from the Cloud OS UI. Four backends are supported out of the box:
- LVM — traditional volume groups and logical volumes
- mdadm — Linux software RAID (`/dev/mdN` arrays)
- btrfs — copy-on-write subvolumes and snapshots
- ZFS — zpools, datasets, and snapshots
The page lives under Disks → Pools in the sidebar and hosts three tabs:
| Tab | What it shows |
|---|---|
| SMART | Per-disk SMART health, partitions, and filesystem usage (unchanged from previous releases). |
| Pools | Every pool on the node, grouped by backend. |
| Snapshots | Per-pool snapshot list with “Snapshot now” + delete actions. |
## Backend detection
On boot, Cloud OS probes `$PATH` for each backend’s CLI:
| Backend | Probe |
|---|---|
| LVM | `vgs` + `lvs` |
| mdadm | `mdadm` |
| btrfs | `btrfs` |
| ZFS | `zfs` |
Backends whose CLI is missing appear disabled in the Create pool dialog. Install the matching package (`lvm2`, `mdadm`, `btrfs-progs`, `zfsutils-linux`) and restart `quazzard` to enable them.
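The probe above amounts to a `$PATH` lookup per CLI. A minimal sketch of that detection, assuming availability simply means every probe binary resolves (the function name is illustrative, not the shipped code):

```python
# Sketch of the boot-time backend probe: a backend is "available"
# only if every CLI it needs is on $PATH.
import shutil

# CLI binaries per backend, from the probe table above.
BACKEND_PROBES = {
    "lvm": ["vgs", "lvs"],
    "mdadm": ["mdadm"],
    "btrfs": ["btrfs"],
    "zfs": ["zfs"],
}

def detect_backends():
    """Return a backend -> available mapping."""
    return {
        backend: all(shutil.which(cmd) is not None for cmd in cmds)
        for backend, cmds in BACKEND_PROBES.items()
    }
```

Backends that resolve to `False` here are the ones the Create pool dialog shows as disabled.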
## Creating a pool
- Open Disks → Pools and click Create pool.
- Pick a backend. Disabled options mean the CLI isn’t installed.
- Give the pool a name:
  - LVM — volume-group name, e.g. `vg0`.
  - mdadm — array device path, e.g. `/dev/md0`.
  - ZFS — zpool name, e.g. `tank`.
- For mdadm / ZFS, choose a RAID level. The UI only exposes the safe, common levels (RAID 0/1/5/6/10 for mdadm; `mirror`, `raidz`, `raidz2`, `raidz3` for ZFS).
- Select two or more eligible block devices. Removable and read-only disks are filtered out.
- Click Create pool.
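As a rough sketch, the form fields map onto the underlying command's argv like this (illustrative only; the real command is assembled server-side and gated by the confirm flow):

```python
# Hypothetical mapping from the Create-pool form to argv.
from typing import List, Optional

def build_create_argv(backend: str, name: str, devices: List[str],
                      raid_level: Optional[str] = None) -> List[str]:
    if backend == "lvm":
        # vgcreate <vg> <devices...>
        return ["vgcreate", name, *devices]
    if backend == "mdadm":
        # mdadm --create <md-device> --level=N --raid-devices=N <devices...>
        return ["mdadm", "--create", name, f"--level={raid_level}",
                f"--raid-devices={len(devices)}", *devices]
    if backend == "zfs":
        # zpool create <pool> [mirror|raidz|...] <devices...>
        vdev = [] if raid_level is None else [raid_level]
        return ["zpool", "create", name, *vdev, *devices]
    raise ValueError(f"unsupported backend: {backend}")
```

For example, an LVM pool named `vg0` over two disks would yield `vgcreate vg0 /dev/sdb /dev/sdc`.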
The underlying command (`vgcreate`, `mdadm --create`, or `zpool create`) runs through the same sysctl confirm-token flow every other Server Mode actuator uses, and is recorded in the sysctl audit log.
btrfs pool creation (`mkfs.btrfs`) is not yet whitelisted, so the UI does not offer it. You can still manage btrfs subvolumes and snapshots on a filesystem that was created with a separate `mkfs` invocation.
## Snapshots
Under the Snapshots tab, pick a pool and use the form to take a new snapshot:
- LVM — provide the logical-volume name and a size reservation (e.g. `1G`). The operation runs `lvcreate -L <size> -s -n <name> <vg>/<lv>`.
- btrfs — provide the source subvolume path and an absolute destination path (by convention `<mount>/.snapshots/<name>`).
- ZFS — provide just the snapshot name; the pool/dataset is picked from the selector.
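The three forms above map onto per-backend commands roughly as follows (an illustrative sketch; the keyword names are assumptions, and the real call goes through the sysctl layer):

```python
# Sketch of per-backend snapshot argv construction.
from typing import List

def build_snapshot_argv(backend: str, **kw) -> List[str]:
    if backend == "lvm":
        # lvcreate -L <size> -s -n <name> <vg>/<lv>
        return ["lvcreate", "-L", kw["size"], "-s", "-n", kw["name"],
                f"{kw['vg']}/{kw['lv']}"]
    if backend == "btrfs":
        # btrfs subvolume snapshot <source-subvolume> <destination-path>
        return ["btrfs", "subvolume", "snapshot", kw["source"], kw["dest"]]
    if backend == "zfs":
        # zfs snapshot <dataset>@<name>
        return ["zfs", "snapshot", f"{kw['dataset']}@{kw['name']}"]
    raise ValueError(f"unsupported backend: {backend}")
```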
Existing snapshots are listed with their parent pool and, where available, their size. “Restore/Delete” currently opens a confirm modal that deletes the snapshot. True roll-back (`lvconvert --merge`, `zfs rollback`) will be added as the sysctl whitelist grows.
## Snapshot policies
You can also schedule automated snapshots per pool by creating a snapshot policy at `POST /api/storage/policies`. A policy carries:
| Field | Meaning |
|---|---|
| `backend` | `lvm`, `mdadm`, `btrfs`, or `zfs` |
| `pool_name` | The pool / volume group / dataset |
| `schedule_cron` | Standard cron expression or `@hourly`, `@daily`, `@weekly`, `@monthly` |
| `retention_count` | How many snapshots to keep (older ones are pruned) |
| `enabled` | Whether the policy is active |
Policies are persisted in the `snapshot_policies` table. The cron executor lands as part of the Server Mode phase 7A (Jobs & Scheduler) work; until then, policies are stored and surfaced in the API but do not fire automatically.
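Until the phase 7A executor lands, the `retention_count` rule can be read as: keep the newest N snapshots and prune the rest. A minimal sketch, where the function name and snapshot record shape are assumptions:

```python
# Sketch of retention pruning: given a pool's snapshots and a
# retention_count, return the names of the snapshots to delete.
from typing import Dict, List

def snapshots_to_prune(snapshots: List[Dict], retention_count: int) -> List[str]:
    """snapshots: [{"name": ..., "created_at": ...}, ...] (created_at sortable)."""
    ordered = sorted(snapshots, key=lambda s: s["created_at"], reverse=True)
    # Everything past the newest `retention_count` entries gets pruned.
    return [s["name"] for s in ordered[retention_count:]]
```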
## REST API
All routes live under `/api/storage` and require a session-authed user. Mutations additionally go through `/api/sysctl/confirm` + `/api/sysctl/execute` — the frontend uses `useSysctl()` for this transparently.
| Method | Path | Purpose |
|---|---|---|
| GET | `/api/storage/backends` | Which backends are available. |
| GET | `/api/storage/pools` | All pools across every backend. |
| POST | `/api/storage/pools` | Create a new pool. Body: `{backend, name, devices, raid_level?}`. |
| GET | `/api/storage/snapshots/{backend}:{pool}` | Snapshots for a pool. |
| POST | `/api/storage/snapshots/{backend}:{pool}` | Take a snapshot. |
| DELETE | `/api/storage/snapshots/{backend}:{pool}/{name}` | Destroy a snapshot. |
| GET | `/api/storage/policies` | Snapshot policies. |
| POST | `/api/storage/policies` | Create a snapshot policy. |
| PUT | `/api/storage/policies/{id}` | Update a snapshot policy. |
| DELETE | `/api/storage/policies/{id}` | Delete a snapshot policy. |
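The snapshot routes key a pool as `{backend}:{pool}` in the URL path. A hedged sketch of building those paths client-side (the helper name and the URL-encoding choice are assumptions — e.g. an mdadm pool named `/dev/md0` would need its slashes escaped):

```python
# Sketch of composing snapshot-route paths from a backend and pool name.
from typing import Optional
from urllib.parse import quote

def snapshots_path(backend: str, pool: str, name: Optional[str] = None) -> str:
    # Percent-encode the pool so device paths like /dev/md0 survive routing.
    base = f"/api/storage/snapshots/{backend}:{quote(pool, safe='')}"
    return base if name is None else f"{base}/{quote(name, safe='')}"
```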
## Security model
Every mutating operation funnels through the shared `internal/sysctl` package, which:
- Rejects any action not on the static whitelist (e.g. `lvm.vgcreate`, `mdadm.create`, `zfs.pool_create`).
- Validates the target against a per-action regex so shell metacharacters can never land in `argv`.
- Requires a single-use confirm-token minted by `/api/sysctl/confirm`.
- Records every call to the `sysctl_audit` table, including stdout/stderr snippets, exit code, and duration.
This means a rogue MCP prompt cannot drive `vgcreate` / `mdadm --create` / `zpool create` without a human clicking Confirm in the UI first.
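The whitelist-plus-regex check can be sketched as follows; the action names come from the list above, but the patterns and function shape are illustrative, not the shipped `internal/sysctl` code:

```python
# Sketch of the two-stage check: static action whitelist, then a
# per-action regex on the target so metacharacters never reach argv.
import re

ACTION_TARGET_RE = {
    "lvm.vgcreate": re.compile(r"^[A-Za-z0-9_.+-]+$"),          # VG name
    "mdadm.create": re.compile(r"^/dev/md[0-9]+$"),             # array device
    "zfs.pool_create": re.compile(r"^[a-z][A-Za-z0-9_.:-]*$"),  # zpool name
}

def validate_action(action: str, target: str) -> None:
    pattern = ACTION_TARGET_RE.get(action)
    if pattern is None:
        raise PermissionError(f"action not whitelisted: {action}")
    if not pattern.fullmatch(target):
        raise ValueError(f"invalid target for {action}: {target!r}")
```

A target like `/dev/md0; rm -rf /` fails the regex outright, so the injection never becomes an argv element.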