Storage Pools

Server Mode lets you build, inspect, and snapshot storage pools directly from the Cloud OS UI. Four backends are supported out of the box:

  • LVM — traditional volume groups and logical volumes
  • mdadm — Linux software RAID (/dev/mdN arrays)
  • btrfs — copy-on-write subvolumes and snapshots
  • ZFS — zpools, datasets, and snapshots

The page lives under Disks → Pools in the sidebar and hosts three tabs:

| Tab | What it shows |
| --- | --- |
| SMART | Per-disk SMART health, partitions, and filesystem usage (unchanged from previous releases). |
| Pools | Every pool on the node, grouped by backend. |
| Snapshots | Per-pool snapshot list with “Snapshot now” + delete actions. |

Backend detection

On boot, Cloud OS probes $PATH for each backend’s CLI:

| Backend | Probe |
| --- | --- |
| LVM | vgs + lvs |
| mdadm | mdadm |
| btrfs | btrfs |
| ZFS | zfs |

Backends whose CLI is missing appear disabled in the Create pool dialog. Install the matching package (lvm2, mdadm, btrfs-progs, zfsutils-linux) and restart quazzard to enable them.
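The probe amounts to a $PATH lookup per CLI. A minimal sketch in Python, assuming the helper name and return shape (the real detection lives in Cloud OS itself):

```python
import shutil

# CLI binaries whose presence enables each backend, per the table above.
# LVM needs both vgs and lvs; the others need a single binary.
BACKEND_CLIS = {
    "lvm": ["vgs", "lvs"],
    "mdadm": ["mdadm"],
    "btrfs": ["btrfs"],
    "zfs": ["zfs"],
}

def detect_backends() -> dict[str, bool]:
    """Return {backend: available} by probing $PATH for each CLI."""
    return {
        backend: all(shutil.which(cli) is not None for cli in clis)
        for backend, clis in BACKEND_CLIS.items()
    }
```

A backend reported as unavailable here is exactly what the Create pool dialog greys out.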

Creating a pool

  1. Open Disks → Pools and click Create pool.
  2. Pick a backend. Disabled options mean the CLI isn’t installed.
  3. Give the pool a name:
    • LVM — volume-group name, e.g. vg0.
    • mdadm — array device path, e.g. /dev/md0.
    • ZFS — zpool name, e.g. tank.
  4. For mdadm / ZFS, choose a RAID level. The UI only exposes the safe, common levels (RAID 0/1/5/6/10 for mdadm; mirror, raidz, raidz2, raidz3 for ZFS).
  5. Select two or more eligible block devices. Removable and read-only disks are filtered out.
  6. Click Create pool.

The underlying command (vgcreate, mdadm --create, or zpool create) runs through the same sysctl confirm-token flow every other Server Mode actuator uses, and is recorded in the sysctl audit log.

btrfs pool creation (mkfs.btrfs) is not yet whitelisted, so the UI does not offer it. You can still manage btrfs subvolumes and snapshots on a filesystem that was created with a separate mkfs invocation.

Snapshots

Under the Snapshots tab, pick a pool and use the form to take a new snapshot:

  • LVM — provide the logical-volume name and a size reservation (e.g. 1G). The operation runs lvcreate -L <size> -s -n <name> <vg>/<lv>.
  • btrfs — provide the source subvolume path and an absolute destination path (by convention <mount>/.snapshots/<name>).
  • ZFS — provide just the snapshot name; the pool/dataset is picked from the selector.
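The per-backend snapshot commands can be sketched like so; `build_snapshot_argv` is a hypothetical helper, and the LVM form mirrors the `lvcreate` invocation quoted above:

```python
def build_snapshot_argv(backend: str, pool: str, name: str, **kw) -> list[str]:
    """Build the snapshot command for one backend (illustrative only)."""
    if backend == "lvm":
        # lvcreate -L <size> -s -n <name> <vg>/<lv>
        return ["lvcreate", "-L", kw["size"], "-s", "-n", name,
                f"{pool}/{kw['lv']}"]
    if backend == "btrfs":
        # btrfs subvolume snapshot <source> <dest>
        return ["btrfs", "subvolume", "snapshot", kw["source"], kw["dest"]]
    if backend == "zfs":
        # zfs snapshot <pool/dataset>@<name>
        return ["zfs", "snapshot", f"{pool}@{name}"]
    raise ValueError(f"unsupported backend: {backend}")
```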

Existing snapshots are listed with their parent pool and, where available, their size. “Restore/Delete” currently opens a confirm modal that only deletes the snapshot; true rollback (lvconvert --merge, zfs rollback) will be added as the sysctl whitelist grows.

Snapshot policies

You can also schedule automated snapshots per pool by creating a snapshot policy at POST /api/storage/policies. A policy carries:

| Field | Meaning |
| --- | --- |
| backend | lvm, mdadm, btrfs, or zfs |
| pool_name | The pool / volume group / dataset |
| schedule_cron | Standard cron expression or @hourly, @daily, @weekly, @monthly |
| retention_count | How many snapshots to keep (older ones are pruned) |
| enabled | Whether the policy is active |

Policies are persisted in the snapshot_policies table. The cron executor lands as part of the Server Mode phase 7A (Jobs & Scheduler) work; until then, policies are stored and surfaced in the API but do not fire automatically.
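The retention_count semantics ("keep the newest N, prune the rest") can be sketched as a pure function; the name `prune` and the (name, timestamp) input shape are illustrative, not the executor's actual interface:

```python
from datetime import datetime

def prune(snapshots: list[tuple[str, datetime]],
          retention_count: int) -> list[str]:
    """Return the names of snapshots to delete, keeping the newest
    retention_count and pruning everything older."""
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)
    return [name for name, _ in ordered[retention_count:]]
```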

REST API

All routes live under /api/storage and require a session-authed user. Mutations additionally go through /api/sysctl/confirm + /api/sysctl/execute — the frontend uses useSysctl() for this transparently.

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /api/storage/backends | Which backends are available. |
| GET | /api/storage/pools | All pools across every backend. |
| POST | /api/storage/pools | Create a new pool. Body: {backend, name, devices, raid_level?}. |
| GET | /api/storage/snapshots/{backend}:{pool} | Snapshots for a pool. |
| POST | /api/storage/snapshots/{backend}:{pool} | Take a snapshot. |
| DELETE | /api/storage/snapshots/{backend}:{pool}/{name} | Destroy a snapshot. |
| GET | /api/storage/policies | Snapshot policies. |
| POST | /api/storage/policies | Create a snapshot policy. |
| PUT | /api/storage/policies/{id} | Update a snapshot policy. |
| DELETE | /api/storage/policies/{id} | Delete a snapshot policy. |

Security model

Every mutating operation funnels through the shared internal/sysctl package, which:

  1. Rejects any action not on the static whitelist (e.g. lvm.vgcreate, mdadm.create, zfs.pool_create).
  2. Validates the target against a per-action regex so shell metacharacters can never land in argv.
  3. Requires a single-use confirm-token minted by /api/sysctl/confirm.
  4. Records every call to the sysctl_audit table, including stdout/stderr snippets, exit code, and duration.

This means a rogue MCP prompt cannot drive vgcreate/mdadm --create/zpool create without a human clicking Confirm in the UI first.
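Step 2 of that pipeline, the per-action target regex, can be illustrated in miniature. The patterns below are examples only; the real whitelist and its regexes live in internal/sysctl and may be stricter:

```python
import re

# Illustrative per-action target patterns. Note no pattern admits
# whitespace, ";", "|", "$", or other shell metacharacters.
TARGET_PATTERNS = {
    "lvm.vgcreate": re.compile(r"^[A-Za-z0-9_][A-Za-z0-9_.-]*$"),
    "zfs.pool_create": re.compile(r"^[a-z][a-z0-9_.-]*$"),
}

def validate_target(action: str, target: str) -> None:
    """Reject non-whitelisted actions and malformed targets."""
    pattern = TARGET_PATTERNS.get(action)
    if pattern is None:
        raise PermissionError(f"action not whitelisted: {action}")
    if not pattern.fullmatch(target):
        raise ValueError(f"invalid target for {action}: {target!r}")
```

Because validation happens before argv is built, an injected payload like `vg0; rm -rf /` never reaches the command at all.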