Note that the download is not an archive, but a single 1000+ line shell script. Just chmod +x zrep, and you are ready to go!
Zrep has been reported to run on multiple OS's with ZFS, including Solaris,
IllumOS, Linux, and BSD (including FreeNAS's BSD).
Compatibility issues

Just be sure to run it with real ksh, not an impostor such as pdksh, or it may not work properly. Similarly, there may be a bug with Gentoo's "improved" ksh, which carries a non-official patch. I have had a report that the standard 2012 ksh works, but the app-shells/ksh-93.20140625 Gentoo ksh may have a problem.
The license for zrep is available here.
The short summary is that you are free to use it as much as you like, as
long as you don't sue me for anything that goes wrong :-)
If you are really bored, you may also read the CHANGELOG.

For historians, some older versions are still available:
zrep version 0.8.4, Oct 17th, 2012
zrep version 0.7, June 29th, 2012
It also handles 'failover', as simply as "zrep failover datapool/yourfs". This will conveniently handle all the details of
# zrep status
scratch/datasrc    synced as of Mon Mar 12 13:23 2012
In contrast, zrep is designed to be
A super-trivialized version of how to use zrep would be:

zrep init pool/fs desthost destpool/fs   # (will create the destination fs!)
# Initialize additional fs's if you wish. Then..
while true; do zrep sync all; done

After the initial full sync, this will do incremental zfs sends, back to back, "forever" (or at least until you hit an error :)
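If you want something slightly more defensive than a bare back-to-back loop, the sketch below wraps the same "sync all" call in a function with basic error logging and a pause between passes. Note that ZREP, SLEEP_SECS, and MAX_ITERS here are illustrative environment knobs invented for this sketch (and for testing it), not real zrep options; MAX_ITERS=0 means loop forever.

```shell
# Hedged sketch of the "sync forever" loop above.
# ZREP, SLEEP_SECS, and MAX_ITERS are assumptions for illustration,
# not zrep options. MAX_ITERS=0 means run forever.
sync_loop() {
    zrep_cmd="${ZREP:-zrep}"          # path to the zrep script
    sleep_secs="${SLEEP_SECS:-60}"    # pause between sync passes
    max_iters="${MAX_ITERS:-0}"       # 0 = no iteration limit

    i=0
    while [ "$max_iters" -eq 0 ] || [ "$i" -lt "$max_iters" ]; do
        "$zrep_cmd" sync all || echo "zrep sync all failed; retrying" >&2
        i=$((i + 1))
        sleep "$sleep_secs"
    done
}
```

Running, say, SLEEP_SECS=300 sync_loop would then sync every five minutes instead of hammering away back to back.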
For greater detail, please see the usage message, via "zrep -h".
The one "undocumented feature" you may care about is that the property zrep:savecount controls the number of recent snapshots preserved. To change from the default (currently 5), use
zfs set zrep:savecount=NEWVAL your/fs/here
There is also a separate troubleshooting page.
/pool/fs/here

Just don't try to use it on BOTH of
Some speed results, from local-host testing:
using regular scp to regular sshd, got about 20MB/sec
using regular scp to hpn-sshd, got about 30MB/sec
using hpn-scp to hpn-sshd, got 150MB/sec
Or, alternatively, you might want to just use rsh, if you blindly trust your network against sniffing and you put some kind of firewall or TCP wrappers around the listening daemons. (You can do this by setting SSH=rsh in your environment.) It should be noted, however, that a zfs send has speed limits of its own, so you may want to first time "zfs send your@snapshot >/dev/null" to see whether your gains will be significant. Unless you're sending from an SSD, it is probably simplest to just stick with ssh.