
Shared Object Storage 🔒

Paid feature. Requires a Pro plan or higher. The single-node object store shipped in Phase 6 stays free on every plan. See Orbit Pro for the tier matrix.

Phase 10C upgrades the built-in MinIO to its multi-node, erasure-coded mode, so a drive failure (or even a whole-node failure) doesn't lose your buckets.

Requirements

  • ≥ 4 participating nodes. MinIO recommends at least 4 nodes for its erasure-coded distributed mode.
  • Uniform drive count on every node. If node A has 2 drives, every other node must also expose 2 drives. Quazzar rejects heterogeneous fleets at bootstrap time with a friendly error rather than at MinIO boot with a cryptic one.
  • Stable DNS between nodes. The generated MINIO_VOLUMES argv embeds hostnames, so a rotation that re-IPs the nodes is transparent to MinIO as long as those names keep resolving. A quick pre-flight check for drive counts and DNS is sketched after this list.
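
Before clicking Bootstrap cluster, it is worth sanity-checking drive counts and name resolution from one machine. A minimal sketch, assuming SSH access to the nodes and the example hostnames and /data mounts used later on this page; adapt the paths to your layout:

# Pre-flight sketch only: hostnames, /data/disk* paths, and SSH access are assumptions.
for host in node-a.example node-b.example node-c.example node-d.example; do
  # Stable DNS: every participant's name must resolve.
  getent hosts "$host" >/dev/null || echo "WARN: $host does not resolve"
  # Uniform drive count: every node must expose the same number of drive mounts.
  echo "$host: $(ssh "$host" 'ls -d /data/disk* 2>/dev/null | wc -l') drives"
done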

Bootstrapping

  1. Control Panel β†’ Fleet β†’ Shared storage.
  2. Click Add node… and pick one of your fleet instances; a row appears with a default drive list (/var/lib/quazzar/objects/disk1). Edit it to match the actual drive mounts on that node.
  3. Repeat for at least four nodes.
  4. Click Bootstrap cluster.

The CP computes a deterministic MINIO_VOLUMES value: the exact same argv every node must boot with. The UI shows it, explains the fault tolerance (half the total drive count), and offers a copy button so you can feed it into your existing config-management tooling.

Example output for four nodes × two drives each:

http://node-a.example/data/disk1 http://node-a.example/data/disk2 \
http://node-b.example/data/disk1 http://node-b.example/data/disk2 \
http://node-c.example/data/disk1 http://node-c.example/data/disk2 \
http://node-d.example/data/disk1 http://node-d.example/data/disk2
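
That plan spans 4 × 2 = 8 drives, so by the half-the-total rule above the fault tolerance shown in the UI would be 4 drives; a whole-node failure costs 2 drives and stays inside that budget. If you want to reproduce the string in your own tooling rather than paste it, the ordering visible above appears to be sorted hostnames with each node's drives in index order. A minimal sketch under that assumption; treat the CP's value as the source of truth and diff against it:

# Sketch only: rebuild the plan from a sorted node list and a fixed drive order.
# The hostnames and drive names are the example values from above.
VOLUMES=""
for host in node-a.example node-b.example node-c.example node-d.example; do
  for disk in disk1 disk2; do
    VOLUMES="$VOLUMES http://$host/data/$disk"
  done
done
echo "${VOLUMES# }"   # drop the leading space; the result must match the CP's plan exactly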

Set that string as the node's MINIO_VOLUMES environment variable (or pass ClusterVolumes to the objstore manager at boot) and restart the Quazzar process. MinIO picks up the cluster mode on next launch.
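
How you set the variable depends on how Quazzar is launched on each node. A minimal sketch for a systemd-managed install; the /etc/default/quazzar environment-file path and the quazzar unit name are assumptions, not documented defaults, so substitute whatever your deployment actually uses:

# Sketch only: env-file path and unit name are assumptions, not Quazzar defaults.
sudo tee /etc/default/quazzar >/dev/null <<'EOF'
MINIO_VOLUMES="http://node-a.example/data/disk1 http://node-a.example/data/disk2 http://node-b.example/data/disk1 http://node-b.example/data/disk2 http://node-c.example/data/disk1 http://node-c.example/data/disk2 http://node-d.example/data/disk1 http://node-d.example/data/disk2"
EOF
sudo systemctl restart quazzar

The same file, byte for byte, has to land on every participant, which is why feeding the copied plan into a config-management template is the safer route.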

Gotchas

  • MinIO requires identical argv on every node. Regenerate the plan after any drive addition and redeploy to every participant before re-enabling writes.
  • Drive loss is survivable up to the parity count (shown next to the Bootstrap cluster button); losing more than that is a data-loss event. Size your fleet accordingly.
  • A full operator runbook, including the live Add / Decommission dance, is shipping in a follow-up. Today the page only produces the plan; distribution is manual (a copy-and-restart loop like the one sketched below will do).
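
Until that runbook lands, distribution is a manual copy-and-restart pass over the fleet. A minimal sketch, reusing the assumed env-file path and unit name from the example above plus SSH access to each node:

# Sketch only: push the identical env file to every participant, then restart each one.
# /etc/default/quazzar and the 'quazzar' unit name are assumptions carried over from above.
for host in node-a.example node-b.example node-c.example node-d.example; do
  scp /etc/default/quazzar "$host:/tmp/quazzar.env"
  ssh "$host" 'sudo install -m 0644 /tmp/quazzar.env /etc/default/quazzar && sudo systemctl restart quazzar'
done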