Back up datasets to non-ZFS volume

All your general support questions for OpenZFS on OS X.

Re: Back up datasets to non-ZFS volume

Postby tiennou7 » Sat Jul 28, 2018 1:16 pm

Brendon wrote: Server side. OSX server will host a TM backup on raw ZFS. No ZVOL needed.

Really? I've tried that before, and the server didn't want to use it as a TM destination, hence the ZVOL (which I was happy to find works, given there are a bunch of issues on the GitHub tracker about it not working).

I'm on Server too, on a not-quite-up-to-date 10.13. The ZFS pool is a 2 x 2 mirror in an external Thunderbolt enclosure.

Edit: Just to be sure, I also didn't want to have sparseimages around, which seem to be the only way to get TM to use a raw ZFS volume.

Re: Back up datasets to non-ZFS volume

Postby tim.rohrer » Sat Jul 28, 2018 3:14 pm

I'm also on a server and haven't been able to get an HFS-formatted ZVOL to be picked up in Time Machine on that server.

I think it is worth continuing this thread, as we three seem to be experiencing different results. There must either: 1) be a communication issue between us; 2) be differences in our configurations that are germane; or 3) be bug(s) in the code that we might uncover.

I am running MacOS 10.13.6.

`sysctl {spl,zfs}.kext_version` outputs:

spl.kext_version: 1.7.2-1
zfs.kext_version: 1.7.2-1

I created my ZVOL on a pool of external HDDs (a mirrored vdev) connected via USB 3.1.

Code: Select all
sudo zfs create -V 100G tank1/NetworkTimeMachine

I then formatted the ZVOL:

Code: Select all
diskutil eraseDisk JHFSX TimeMachineTest disk18

It mounts and is writable, but it doesn't show in the list of Available Disks for Time Machine; it is also greyed out in the exclusion list, meaning Time Machine won't consider it as a source.

I then took TimeMachineTest and shared it as a Time Machine backup destination, and it does show up on a client! I have not fully tested it, but it appears it should work. I have not tried sharing a regular ZFS dataset as suggested by @Brendon.

To return to my local backup problem, I have read about two possible solutions that I'll try testing. One is to create a sparsebundle on the pool (or perhaps in the ZVOL image) and see if Time Machine will pick that up. The other is to use tmutil from the command line to designate the desired HFS-formatted ZVOL as the destination. I'm not sure which is the better approach.
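If I go the tmutil route, I gather it would look something like this (using the TimeMachineTest volume from above; untested on my end so far):

```shell
# Point Time Machine at the HFS-formatted ZVOL's mount point
sudo tmutil setdestination /Volumes/TimeMachineTest

# Confirm Time Machine accepted it
tmutil destinationinfo
```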

Re: Back up datasets to non-ZFS volume

Postby tiennou7 » Thu Aug 16, 2018 7:57 am

Looks like we have about the same setup (except I'm lagging behind in terms of macOS version):

Code: Select all
$ sw_vers && sysctl {zfs,spl}
ProductName:   Mac OS X
ProductVersion:   10.13.4
BuildVersion:   17E199
zfs.kext_version: 1.7.2-1
spl.kext_version: 1.7.2-1

The pool was created using the standard stanza, but with the paths from /var/run/disk/by-id instead of the /dev/ ones (the recent resilver was caused by one of them being grabbed as a …/by-path one instead, so I ended up offlining the offender to reattach it correctly).

Code: Select all
$ sudo zpool status
  pool: grenier
 state: ONLINE
  scan: resilvered 25,1G in 0h3m with 0 errors on Thu Aug 16 17:13:21 2018

   NAME                                            STATE     READ WRITE CKSUM
   grenier                                         ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-3F4FDADE-B4A2-0643-ABF8-BFCC792FBD46  ONLINE       0     0     0
       media-67AB8DE4-F4D8-B748-9227-5DF06307FEDA  ONLINE       0     0     0
     mirror-1                                      ONLINE       0     0     0
       media-35FCC733-188D-024F-BA7D-B7CE7BC13B31  ONLINE       0     0     0
       media-FA563FC6-9EE8-6A4B-A4FB-01AEACC93637  ONLINE       0     0     0

errors: No known data errors
$ zpool get all grenier
NAME     PROPERTY                       VALUE                          SOURCE
grenier  size                           3,62T                          -
grenier  capacity                       1%                             -
grenier  altroot                        -                              default
grenier  health                         ONLINE                         -
grenier  guid                           5725983681063018296            default
grenier  version                        -                              default
grenier  bootfs                         -                              default
grenier  delegation                     on                             default
grenier  autoreplace                    off                            default
grenier  cachefile                      -                              default
grenier  failmode                       wait                           default
grenier  listsnapshots                  off                            default
grenier  autoexpand                     off                            default
grenier  dedupditto                     0                              default
grenier  dedupratio                     1.00x                          -
grenier  free                           3,58T                          -
grenier  allocated                      49,9G                          -
grenier  readonly                       off                            -
grenier  ashift                         12                             local
grenier  comment                        -                              default
grenier  expandsize                     -                              -
grenier  freeing                        0                              default
grenier  fragmentation                  0%                             -
grenier  leaked                         0                              default
grenier  checkpoint                     -                              -
grenier  feature@async_destroy          enabled                        local
grenier  feature@empty_bpobj            active                         local
grenier  feature@lz4_compress           active                         local
grenier  feature@multi_vdev_crash_dump  enabled                        local
grenier  feature@spacemap_histogram     active                         local
grenier  feature@enabled_txg            active                         local
grenier  feature@hole_birth             active                         local
grenier  feature@extensible_dataset     enabled                        local
grenier  feature@embedded_data          active                         local
grenier  feature@bookmarks              enabled                        local
grenier  feature@filesystem_limits      enabled                        local
grenier  feature@large_blocks           enabled                        local
grenier  feature@sha512                 enabled                        local
grenier  feature@skein                  enabled                        local
grenier  feature@edonr                  enabled                        local
grenier  feature@encryption             enabled                        local
grenier  feature@device_removal         enabled                        local
grenier  feature@obsolete_counts        enabled                        local
grenier  feature@zpool_checkpoint       enabled                        local

Code: Select all
$ zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
grenier            1,02T  2,49T  1,41M  /Volumes/grenier
grenier/Backups    1,02T  3,46T  46,6G  -
grenier/shared      632K  2,49T   464K  /Volumes/grenier/shared

One thing I did differently is that I used Disk Utility to format my 1 TB ZVOL instead of doing it via the CLI. I didn't enable the mimic_hfs option on the main dataset, but I believe that's not relevant because ZVOLs aren't datasets.

I also remember twiddling with the "enable any destination" default, but I disabled it afterward since I didn't plan on going through a network share locally :lol:

Apart from that, I don't think I'm that far off "standard behavior".

Note that the sparseimage creation you're referring to is usually done by Time Machine automatically, as (I believe) the only filesystem with support for directory hardlinks is HFS+ (and maybe APFS now that it exists, but my own TM backup is still HFS+ so I can't confirm). Ergo, you should be able to get a sparseimage when forcing the use of an "unsupported" volume (per the StackExchange answer).
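If you did want to pre-create such an image by hand rather than letting TM do it, I believe hdiutil can make one; a rough sketch (the size, volume name and output path are all made up):

```shell
# Create a journaled-HFS+ sparsebundle that grows on demand up to 500 GB
hdiutil create -size 500g -type SPARSEBUNDLE -fs HFS+J \
    -volname "TimeMachine" /Volumes/grenier/TimeMachine.sparsebundle
```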

I also made a few recursive snapshots of the whole dataset tree, and it didn't seem like the backup would reset. I still haven't tested behavior for clients (through the network share), so I'll try to do that now.

Re: Back up datasets to non-ZFS volume

Postby tim.rohrer » Fri Aug 24, 2018 9:30 am

Overall, I have much working, but not everything.

As a destination for two different types of backups, I created a raidz1 (4 x 3TB) pool called tank0:

Code: Select all
  pool: tank0
 state: ONLINE
  scan: none requested

   NAME                                            STATE     READ WRITE CKSUM
   tank0                                           ONLINE       0     0     0
     raidz1-0                                      ONLINE       0     0     0
       media-3EFB4851-35C8-AF45-B3BF-13C5D2A34168  ONLINE       0     0     0
       media-10EB9F8E-20AF-0749-993C-6A031CAB9433  ONLINE       0     0     0
       media-EFD8924C-7232-C545-A5D7-58FB6C591C21  ONLINE       0     0     0
       media-71EE3F0B-D4DC-6E4C-86D1-FBE8163C23E5  ONLINE       0     0     0

errors: No known data errors

On this, I created a ZVOL using `sudo zfs create -V 3.5T tank0/ItsTechBackups`, then formatted it as JHFSX. I am successfully using that for the server's local Time Machine backups and providing a shared folder for network TM backups. This seems to work. I'm not sure it is the best way, but it seemed logical to keep all the Time Machine stuff together, although I'm not clear on whether I can take snapshots of a ZVOL?

Here is where I'm having a problem now, with one network client and rsync. I created a dataset with a quota of 2.5T:

Code: Select all
$ zfs get all tank0/ArchivesLocal
NAME                 PROPERTY               VALUE                         SOURCE
tank0/ArchivesLocal  type                   filesystem                    -
tank0/ArchivesLocal  creation               Sat Aug 11 20:37 2018         -
tank0/ArchivesLocal  used                   1.75T                         -
tank0/ArchivesLocal  available              764G                          -
tank0/ArchivesLocal  referenced             1.75T                         -
tank0/ArchivesLocal  compressratio          1.00x                         -
tank0/ArchivesLocal  mounted                yes                           -
tank0/ArchivesLocal  quota                  2.50T                         local
tank0/ArchivesLocal  reservation            none                          default
tank0/ArchivesLocal  recordsize             128K                          default
tank0/ArchivesLocal  mountpoint             /Volumes/tank0/ArchivesLocal  default
tank0/ArchivesLocal  sharenfs               off                           default
tank0/ArchivesLocal  checksum               on                            default
tank0/ArchivesLocal  compression            off                           default
tank0/ArchivesLocal  atime                  on                            default
tank0/ArchivesLocal  devices                on                            default
tank0/ArchivesLocal  exec                   on                            default
tank0/ArchivesLocal  setuid                 on                            default
tank0/ArchivesLocal  readonly               off                           default
tank0/ArchivesLocal  zoned                  off                           default
tank0/ArchivesLocal  snapdir                hidden                        default
tank0/ArchivesLocal  aclmode                passthrough                   default
tank0/ArchivesLocal  aclinherit             restricted                    default
tank0/ArchivesLocal  canmount               on                            default
tank0/ArchivesLocal  xattr                  on                            default
tank0/ArchivesLocal  copies                 1                             default
tank0/ArchivesLocal  version                5                             -
tank0/ArchivesLocal  utf8only               off                           -
tank0/ArchivesLocal  normalization          none                          -
tank0/ArchivesLocal  casesensitivity        sensitive                     -
tank0/ArchivesLocal  vscan                  off                           default
tank0/ArchivesLocal  nbmand                 off                           default
tank0/ArchivesLocal  sharesmb               off                           default
tank0/ArchivesLocal  refquota               none                          default
tank0/ArchivesLocal  refreservation         none                          default
tank0/ArchivesLocal  primarycache           all                           default
tank0/ArchivesLocal  secondarycache         all                           default
tank0/ArchivesLocal  usedbysnapshots        0                             -
tank0/ArchivesLocal  usedbydataset          1.75T                         -
tank0/ArchivesLocal  usedbychildren         0                             -
tank0/ArchivesLocal  usedbyrefreservation   0                             -
tank0/ArchivesLocal  logbias                latency                       default
tank0/ArchivesLocal  dedup                  off                           default
tank0/ArchivesLocal  mlslabel               none                          default
tank0/ArchivesLocal  sync                   standard                      default
tank0/ArchivesLocal  refcompressratio       1.00x                         -
tank0/ArchivesLocal  written                1.75T                         -
tank0/ArchivesLocal  logicalused            1.75T                         -
tank0/ArchivesLocal  logicalreferenced      1.75T                         -
tank0/ArchivesLocal  filesystem_limit       none                          default
tank0/ArchivesLocal  snapshot_limit         none                          default
tank0/ArchivesLocal  filesystem_count       none                          default
tank0/ArchivesLocal  snapshot_count         none                          default
tank0/ArchivesLocal  snapdev                hidden                        default
tank0/ArchivesLocal       on                            default
tank0/ArchivesLocal  off                           default
tank0/ArchivesLocal    off                           default
tank0/ArchivesLocal  shareafp               off                           default
tank0/ArchivesLocal  redundant_metadata     all                           default
tank0/ArchivesLocal  overlay                off                           default
tank0/ArchivesLocal  encryption             off                           default
tank0/ArchivesLocal  keylocation            none                          default
tank0/ArchivesLocal  keyformat              none                          default
tank0/ArchivesLocal  pbkdf2iters            0                             default

Using the same bash scripts I've used for years on each client, I rsync copies of /Users to the server. However, one of the clients won't work, and I can't yet figure out why. It's as if the rsync process just stops, and it isn't at the same place each time (I'm looking into how to log activities on the server). For at least one instance, it appears the server-side rsync processes finished/quit.

Or perhaps I'm not understanding something about how quotas vs. ZVOL size preallocate the available space?

Code: Select all
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank0  10.9T  3.47T  7.40T        -         -     3%    31%  1.00x  ONLINE  -

$ zfs list tank0/ArchivesLocal
NAME                  USED  AVAIL  REFER  MOUNTPOINT
tank0/ArchivesLocal  1.75T   764G  1.75T  /Volumes/tank0/ArchivesLocal

But, it appears to me I should have sufficient space left; I only need about 120G for this client's Users folder.
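One thing I plan to check is the quota math: 2.50T quota minus 1.75T used leaves 0.75T, which matches the 764G AVAIL above, so the quota (not the pool) may be what sets the ceiling here. That's still well above the ~120G I need, though. Something like this should make it visible (standard zfs properties, nothing exotic):

```shell
# What the dataset's quota allows vs. what it has used
zfs get quota,used,available tank0/ArchivesLocal

# Whether the 3.5T ZVOL reserved its full size up front
zfs get volsize,refreservation tank0/ItsTechBackups
```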

Any ideas are welcome. I'm trying to gather a little more data before I write a new post asking for help.

Re: Back up datasets to non-ZFS volume

Postby Jimbo » Sun Aug 26, 2018 1:59 am

tim.rohrer wrote: although I'm not clear if I can do snapshots of a zvol?

Yep, not a problem. I've been snapshotting a Time Machine ZVOL for years (and then sending that to a backup pool; yes, backups of backups).
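It works just like snapshotting a dataset; using the ZVOL name from your post (datestamped names are just my habit, not required):

```shell
# Snapshot the ZVOL, named by today's date
sudo zfs snapshot tank0/ItsTechBackups@$(date +%Y%m%d)

# List all snapshots under the pool to confirm
zfs list -t snapshot -r tank0
```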

Re: Back up datasets to non-ZFS volume

Postby tim.rohrer » Sun Sep 02, 2018 5:47 pm

Thanks @jimbo.

One thing on my research list, which it sounds like you might be able to answer...

Since a snapshot (as I understand it) only records what changed, does `zfs send` include a complete copy of the data first?


Re: Back up datasets to non-ZFS volume

Postby Jimbo » Mon Sep 03, 2018 2:00 am

A zfs send of a snapshot will send all blocks that make up that snapshot.

What you are probably thinking of is an incremental send, where you send only the changed blocks between two snapshots.

So, very loosely, the process is:

    Take initial snapshot.
    Send initial snapshot (snapshot 1).
    Do work.
    Take new snapshot (snapshot 2).
    Incremental send of snapshot 1 and 2.
    Remove snapshot 1 (optional).
    Do more work.
    Take new snapshot (snapshot 3).
    Incremental send of snapshot 2 and 3.
    Remove snapshot 2 (optional).
    Rinse, repeat.
The main thing is not to remove the latest snapshot on source or destination, or you're going to lose your incremental send capability. I wrote some fairly simple scripts to do this for me. There are some fairly feature-complete ones out there if you want to exercise some Google-fu.
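Loosely scripted, the rotation above might look like this (the pool and dataset names are invented; the function only prints the commands it would run, so you can eyeball the plan before wiring in real `zfs` calls):

```shell
#!/bin/sh
# Print the zfs commands one rotation step would run (dry-run sketch).
# $1 dataset, $2 destination, $3 previous snapshot ("" on the first run), $4 new snapshot
plan_send() {
    echo "zfs snapshot $1@$4"
    if [ -z "$3" ]; then
        # First run: full send of the initial snapshot
        echo "zfs send $1@$4 | zfs recv $2"
    else
        # Later runs: incremental between the last-sent and the new snapshot;
        # once it lands, $1@$3 may optionally be destroyed on the source
        echo "zfs send -i $1@$3 $1@$4 | zfs recv $2"
    fi
}

plan_send tank/data backup/data ""       20180903   # initial full send
plan_send tank/data backup/data 20180903 20180904   # next day's incremental
```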


Re: Back up datasets to non-ZFS volume

Postby tim.rohrer » Mon Sep 03, 2018 7:47 am

My confusion was even more basic than incremental sends of snapshots, but I think I figured it out.

I was not understanding how I would get a complete copy of a pool to a backup disk using `zfs send`, because the first level of the zfs documentation refers only to snapshots:

Code: Select all
send [-DnPpRvLecr] [-[iI] snapshot] <snapshot>

However, when I looked more closely at the second level of documentation (from `zfs send <enter>`):

Code: Select all
zfs send
missing snapshot argument
   send [-DnPpRvLecr] [-[iI] snapshot] <snapshot>
   send [-Lecr] [-i snapshot|bookmark] <filesystem|volume|snapshot>
   send [-nvPe] -t <receive_resume_token>

This makes it look like I can `zfs send pool` as the initial copy, then switch to incrementals.

I also read something that seems to indicate I can send the first snapshot and that will copy all the data over.

Obviously, I need to do more testing, but first I have to sort out this issue with rsync coming over from a non-ZFS computer. Please see my other post if you're interested...

Cheers, Tim

Re: Back up datasets to non-ZFS volume

Postby lundman » Mon Sep 03, 2018 4:34 pm

You would generally do

zfs snapshot dataset@first_snapshot
zfs send dataset@first_snapshot $OUTPUT

Where $OUTPUT can be "> filename.dump" or "|ssh server command ..." or "|zfs recv ..." etc.

zfs snapshot dataset@second_snapshot
zfs send -i dataset@first_snapshot dataset@second_snapshot $OUTPUT

So, if your dataset is the POOL, and you use timestamps as snapshot names:

zfs snapshot pool@20180903
zfs send pool@20180903 > full_backup

zfs snapshot pool@20180904
zfs send -i pool@20180903 pool@20180904 > incremental_backup
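And to restore, you receive the streams in order (the target pool and dataset here are just examples):

```shell
# Recreate the dataset from the full stream, then apply the incremental on top
sudo zfs recv tank2/restored < full_backup
sudo zfs recv tank2/restored < incremental_backup
```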

Re: Back up datasets to non-ZFS volume

Postby tim.rohrer » Tue Sep 04, 2018 12:17 pm

Very helpful. Thank you.

