Overview & Terminology¶
All work zrepl does is performed by the zrepl daemon, which is configured in a single YAML configuration file loaded on startup. The following paths are considered:
- If set, the location specified via the global --config flag
- /etc/zrepl/zrepl.yml
- /usr/local/etc/zrepl/zrepl.yml
The zrepl configcheck subcommand can be used to validate the configuration.
The command will output nothing and exit with a zero status code if the configuration is valid.
The error messages vary in quality and usefulness: please report confusing config errors to the tracking issue #155.
Full example configs such as in the Tutorial or the config/samples/ directory might also be helpful.
However, copy-pasting examples is no substitute for reading documentation!
Config File Structure¶
```yaml
global: ...
jobs:
- name: backup
  type: push
- ...
```
zrepl is configured using a single YAML configuration file with two main sections:
- The global section is filled with sensible defaults and is covered later in this chapter.
- The jobs section is a list of jobs, which we are going to explain now.
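For orientation, here is a minimal hypothetical example of such a file. The job name, address, and dataset names are illustrative, not defaults:

```yaml
# Hypothetical minimal config: one push job plus explicit logging.
global:
  logging:
    - type: stdout
      level: warn
      format: human

jobs:
  - name: backup
    type: push
    connect:
      type: tcp
      address: "backup-server.example.com:8888"   # illustrative address
    filesystems: {
      "zroot/usr/home<": true   # "<" matches the whole subtree
    }
    snapshotting:
      type: periodic
      prefix: zrepl_
      interval: 10m
    pruning:
      keep_sender:
        - type: not_replicated
      keep_receiver:
        - type: last_n
          count: 10
```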
Jobs & How They Work Together¶
A job is the unit of activity tracked by the zrepl daemon.
The type of a job determines its role in a replication setup and in snapshot management.
Jobs are identified by their name, both in log files and in the zrepl status command.
Replication always happens between a pair of jobs: one is the active side, and one the passive side. The active side connects to the passive side using a transport and starts executing the replication logic. The passive side responds to requests from the active side after checking its permissions.
The following table shows how different job types can be combined to achieve both push and pull mode setups. Note that snapshot-creation denoted by “(snap)” is orthogonal to whether a job is active or passive.
| Setup name | active side | passive side | use case |
| --- | --- | --- | --- |
| Push mode | push (snap) | sink | Laptop backup to a storage server |
| Pull mode | pull | source (snap) | Central backup server pulling from many nodes |
| Local replication | push (snap) + sink in one config, with local transport | | Backup to a locally attached disk |
| Snap & prune-only | snap (snap) | N/A | Snapshot and pruning management without replication |
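To make the "Push mode" row concrete, here is a hedged sketch of the two corresponding job configurations; hostnames, datasets, and the choice of TCP transport are illustrative:

```yaml
# Active side (data source): push job, connects to the sink.
jobs:
  - name: push_to_backups
    type: push
    connect:
      type: tcp
      address: "backups.example.com:8888"
    filesystems: {
      "zroot/var/db<": true
    }
    snapshotting:
      type: periodic
      prefix: zrepl_
      interval: 10m
    pruning:
      keep_sender:
        - type: not_replicated
      keep_receiver:
        - type: last_n
          count: 30
```

```yaml
# Passive side (backup server): sink job, accepts the connection
# and maps the client identity "prod1" to a sub-tree below root_fs.
jobs:
  - name: sink
    type: sink
    root_fs: "storage/zrepl/sink"
    serve:
      type: tcp
      listen: ":8888"
      clients: {
        "192.168.1.23": "prod1"   # IP address -> client identity
      }
```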
How the Active Side Works¶
- Wakeup because of finished snapshotting (push job) or pull interval ticker (pull job).
- Connect to the corresponding passive side using a transport and instantiate an RPC client.
- Replicate data from the sending to the receiving side (see below).
- Prune on sender & receiver.
The progress of the active side can be watched live using the
zrepl status subcommand.
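As an illustration of these steps in pull mode, consider the following hypothetical pull job; the interval field drives the wakeup ticker, and the pruning section corresponds to the final prune step. Names and intervals are made up:

```yaml
# Active side in pull mode: wakes up every `interval`,
# connects to the source, replicates, then prunes both sides.
jobs:
  - name: pull_prod
    type: pull
    connect:
      type: tcp
      address: "prod.example.com:8888"
    root_fs: "storage/zrepl/pull/prod"
    interval: 10m
    pruning:
      keep_sender:
        - type: not_replicated
      keep_receiver:
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 30x1d
          regex: "^zrepl_"
```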
How the Passive Side Works¶
The passive side (sink and source) waits for connections from the corresponding active side,
using the transport listener type specified in the
serve field of the job configuration.
Each transport listener provides a client’s identity to the passive side job.
It uses the client identity for access control:
- The sink job maps requests from different client identities to their respective sub-filesystem trees.
- The source job has a whitelist of client identities that are allowed pull access.
The implementation of the sink job requires that the connecting client identities be valid ZFS filesystem name components.
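A sketch of what a source job's serve section might look like; the listener type, address, and client mapping are illustrative:

```yaml
# Passive side: source job. The `clients` map doubles as the
# whitelist of identities that are allowed pull access.
jobs:
  - name: source
    type: source
    serve:
      type: tcp
      listen: ":8888"
      clients: {
        "192.168.1.42": "backup-server"   # IP address -> client identity
      }
    filesystems: {
      "zroot/var/db<": true
    }
    snapshotting:
      type: periodic
      prefix: zrepl_
      interval: 10m
```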
How Replication Works¶
One of the major design goals of the replication module is to avoid any duplication of the nontrivial logic.
As such, the code works on abstract sender and receiver endpoints, where typically one is implemented by a local program object and the other by an RPC client instance.
Regardless of push- or pull-style setup, the logic executes on the active side, i.e. in the push or pull job.
The following steps take place during replication and can be monitored using the
zrepl status subcommand:
- Plan the replication:
  - Compare sender and receiver filesystem snapshots
  - Build the replication plan
    - Per filesystem, compute a diff between sender and receiver snapshots
    - Build a list of replication steps
      - If possible, use incremental sends (zfs send -i)
      - Otherwise, use a full send of the most recent snapshot on the sender
      - Give up on filesystems that cannot be replicated without data loss
  - Retry on errors that are likely temporary (e.g. network failures).
  - Give up on filesystems where a permanent error was received over RPC.
- Execute the plan
- Perform replication steps in the following order: among all filesystems with pending replication steps, pick the filesystem whose next replication step's snapshot is the oldest.
- Create placeholder filesystems on the receiving side to mirror the sender's dataset paths below the receiving job's root_fs.
- After a successful replication step, update the replication cursor bookmark (see below).
The idea behind the execution order of replication steps is that if the sender snapshots all filesystems simultaneously at fixed intervals, the receiver will have all filesystems snapshotted at time
T1 before the first snapshot at
T2 = T1 + $interval is replicated.
The replication cursor bookmark
#zrepl_replication_cursor is kept per filesystem on the sending side of a replication setup:
It is a bookmark of the most recent snapshot that was successfully replicated to the receiving side.
It is used by the not_replicated keep rule to identify all snapshots that have not yet been replicated to the receiving side.
Regardless of whether that keep rule is used, the bookmark ensures that replication can always continue incrementally.
Note that there is only one cursor bookmark per filesystem, which prevents multiple jobs from replicating the same filesystem (see below).
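For reference, a pruning section using the not_replicated keep rule might look like the following sketch; the grid rules alongside it are illustrative:

```yaml
pruning:
  keep_sender:
    # Keep everything that has not yet made it to the receiver,
    # as determined via the replication cursor bookmark.
    - type: not_replicated
    - type: grid
      grid: 1x1h(keep=all) | 24x1h | 30x1d
      regex: "^zrepl_"
  keep_receiver:
    - type: grid
      grid: 1x1h(keep=all) | 24x1h | 30x1d
      regex: "^zrepl_"
```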
Placeholder filesystems on the receiving side are regular ZFS filesystems with a special zrepl:placeholder ZFS property set.
Placeholders allow the receiving side to mirror the sender's ZFS dataset hierarchy without replicating a filesystem at every intermediary dataset path component.
Consider the following example:
S/H/J shall be replicated to
R/sink/job/S/H/J, but neither S/H nor S shall be replicated.
ZFS requires the existence of
R/sink/job/S/H in order to receive into R/sink/job/S/H/J.
Thus, zrepl creates the parent filesystems as placeholders on the receiving side.
If at some point
S shall be replicated, the receiving side invalidates the placeholder flag automatically.
The zrepl test placeholder command can be used to check whether a filesystem is a placeholder.
Currently, zrepl does not replicate filesystem properties. When receiving a filesystem, it is never mounted (-u flag) and mountpoint=none is set. This is temporary behavior that is being worked on in issue #24.
Multiple Jobs & More than 2 Machines¶
When using multiple jobs across single or multiple machines, the following rules are critical to avoid race conditions & data loss:
- The sets of ZFS filesystems matched by the filesystems filter fields must be disjoint across all jobs configured on a machine.
- The ZFS filesystem subtrees of jobs with root_fs must be disjoint.
- Across all zrepl instances on all machines in the replication domain, there must be a 1:1 correspondence between active and passive jobs.
Explanations of & exceptions to the above rules are detailed below.
If you would like to see improvements to multi-job setups, please open an issue on GitHub.
Jobs run independently of each other.
If two jobs match the same filesystem with their
filesystems filter, they will operate on that filesystem independently and potentially in parallel.
For example, if job A prunes snapshots that job B is planning to replicate, the replication will fail because B assumed the snapshot would still be present.
More subtle race conditions can occur with the replication cursor bookmark, which currently only exists once per filesystem.
N push jobs to 1 sink¶
The sink job namespaces the filesystems it receives by client identity. It is thus safe to push to one sink job with different client identities. If two push jobs use the same client identity, the filesystems matched by those push jobs must be disjoint to avoid races.
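A hedged sketch of the safe variant: one sink job on the backup server, with each machine pushing under its own client identity (here derived from TLS certificate common names, one possible transport; all names are made up):

```yaml
# Backup server: a single sink job serving N pushers.
# Each identity receives into root_fs/<client_identity>,
# e.g. storage/zrepl/sink/laptop1 and storage/zrepl/sink/laptop2,
# so the received filesystem trees stay disjoint.
jobs:
  - name: sink
    type: sink
    root_fs: "storage/zrepl/sink"
    serve:
      type: tls
      listen: ":8888"
      ca: "/etc/zrepl/ca.crt"
      cert: "/etc/zrepl/server.crt"
      key: "/etc/zrepl/server.key"
      client_cns:
        - "laptop1"
        - "laptop2"
```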
N pull jobs from 1 source¶
Multiple pull jobs pulling from the same source have potential for race conditions during pruning: each pull job prunes the source side independently, causing replication-prune and prune-prune races.
There is currently no way for a pull job to filter which snapshots it should attempt to replicate. Thus, it is not possible to just manually assert that the prune rules of all pull jobs are disjoint to avoid replication-prune and prune-prune races.