Back up datasets to non-ZFS volume

All your general support questions for OpenZFS on OS X.

Re: Back up datasets to non-ZFS volume

Postby tiennou7 » Sat Jul 28, 2018 1:16 pm

Brendon wrote:Server side. OSX server will host a TM backup on raw ZFS. No ZVOL needed.

Really? I've tried that before and the server refused to use it as a TM destination, hence the ZVOL (which I was happy to discover worked, given there are a bunch of issues on the GitHub tracker about it not working).

I'm on Server too, on a not-quite-current 10.13. The ZFS pool is a 2×2 mirror on an external Thunderbolt enclosure.

Edit: Just to be sure, I also didn't want to have sparse images lying around, which seem to be the only way to get TM to use a raw ZFS volume.

Re: Back up datasets to non-ZFS volume

Postby tim.rohrer » Sat Jul 28, 2018 3:14 pm

I'm also on a server and haven't been able to get an HFS-formatted ZVOL to be picked up in Time Machine on that server.

I think it is worth continuing this thread, as we three seem to be experiencing different results. There must either: 1) be a communication issue between us; 2) be differences in configurations that are germane; or 3) be bug(s) in the code that we might uncover.

I am running MacOS 10.13.6.

`sysctl {spl,zfs}.kext_version` outputs:

spl.kext_version: 1.7.2-1
zfs.kext_version: 1.7.2-1

I created my ZVOL on external HDDs, mirrored into a single vdev, connected via USB 3.1.

Code: Select all
sudo zfs create -V 100G tank1/NetworkTimeMachine

I then formatted the ZVOL:

Code: Select all
diskutil eraseDisk JHFSX TimeMachineTest disk18

It mounts and is writeable, but it doesn't show in the list of Available Disks for Time Machine; it is also greyed out in the exclusion list, meaning Time Machine won't consider it as a source.

I then took TimeMachineTest and shared it as a Time Machine backup destination, and it does show up on a client! I have not fully tested it, but it appears that it should work. I have not tried sharing a regular ZFS dataset as suggested by @Brendon.
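For the client side, pointing Time Machine at a shared destination can also be done from the command line instead of the System Preferences pane. A rough sketch (the server and share names here are hypothetical, not from my actual setup):

Code: Select all
```shell
# Set a network share as the Time Machine destination,
# prompting for credentials (-p). Names are examples only.
sudo tmutil setdestination -p "afp://user@myserver.local/TimeMachineTest"
```

Running `tmutil destinationinfo` afterward should confirm the destination was registered.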

To return to my local backup problem, I have read about two possible solutions that I'll try testing. One is to create a sparsebundle on the pool (or perhaps in the ZVOL image) and see if Time Machine will pick that up. The other is to use tmutil from the command line to designate the desired HFS-formatted ZVOL as the destination. I'm not sure which is the better approach.
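A sketch of what those two approaches would look like (sizes, volume names, and paths are just examples, and I haven't verified either yet):

Code: Select all
```shell
# Option 1: create a journaled-HFS+ sparsebundle on the pool
# and see whether Time Machine will back up into it.
hdiutil create -size 100g -type SPARSEBUNDLE -fs HFS+J \
    -volname "TimeMachineTest" /Volumes/tank1/TimeMachineTest.sparsebundle

# Option 2: designate the HFS-formatted ZVOL directly as the destination.
sudo tmutil setdestination /Volumes/TimeMachineTest
```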

Re: Back up datasets to non-ZFS volume

Postby tiennou7 » Thu Aug 16, 2018 7:57 am

Looks like we have about the same setup (except that I'm lagging behind in terms of macOS version):

Code: Select all
$ sw_vers && sysctl {zfs,spl}.kext_version
ProductName:   Mac OS X
ProductVersion:   10.13.4
BuildVersion:   17E199
zfs.kext_version: 1.7.2-1
spl.kext_version: 1.7.2-1

The pool was created using the standard stanza, but with the paths from /var/run/disk/by-id instead of the /dev/ ones (the recent resilver was caused by one of them being grabbed as a …/by-path one instead, so I ended up offlining the offender to reattach it correctly).

Code: Select all
$ sudo zpool status
  pool: grenier
 state: ONLINE
  scan: resilvered 25,1G in 0h3m with 0 errors on Thu Aug 16 17:13:21 2018

   NAME                                            STATE     READ WRITE CKSUM
   grenier                                         ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-3F4FDADE-B4A2-0643-ABF8-BFCC792FBD46  ONLINE       0     0     0
       media-67AB8DE4-F4D8-B748-9227-5DF06307FEDA  ONLINE       0     0     0
     mirror-1                                      ONLINE       0     0     0
       media-35FCC733-188D-024F-BA7D-B7CE7BC13B31  ONLINE       0     0     0
       media-FA563FC6-9EE8-6A4B-A4FB-01AEACC93637  ONLINE       0     0     0

errors: No known data errors
$ zpool get all grenier
NAME     PROPERTY                       VALUE                          SOURCE
grenier  size                           3,62T                          -
grenier  capacity                       1%                             -
grenier  altroot                        -                              default
grenier  health                         ONLINE                         -
grenier  guid                           5725983681063018296            default
grenier  version                        -                              default
grenier  bootfs                         -                              default
grenier  delegation                     on                             default
grenier  autoreplace                    off                            default
grenier  cachefile                      -                              default
grenier  failmode                       wait                           default
grenier  listsnapshots                  off                            default
grenier  autoexpand                     off                            default
grenier  dedupditto                     0                              default
grenier  dedupratio                     1.00x                          -
grenier  free                           3,58T                          -
grenier  allocated                      49,9G                          -
grenier  readonly                       off                            -
grenier  ashift                         12                             local
grenier  comment                        -                              default
grenier  expandsize                     -                              -
grenier  freeing                        0                              default
grenier  fragmentation                  0%                             -
grenier  leaked                         0                              default
grenier  checkpoint                     -                              -
grenier  feature@async_destroy          enabled                        local
grenier  feature@empty_bpobj            active                         local
grenier  feature@lz4_compress           active                         local
grenier  feature@multi_vdev_crash_dump  enabled                        local
grenier  feature@spacemap_histogram     active                         local
grenier  feature@enabled_txg            active                         local
grenier  feature@hole_birth             active                         local
grenier  feature@extensible_dataset     enabled                        local
grenier  feature@embedded_data          active                         local
grenier  feature@bookmarks              enabled                        local
grenier  feature@filesystem_limits      enabled                        local
grenier  feature@large_blocks           enabled                        local
grenier  feature@sha512                 enabled                        local
grenier  feature@skein                  enabled                        local
grenier  feature@edonr                  enabled                        local
grenier  feature@encryption             enabled                        local
grenier  feature@device_removal         enabled                        local
grenier  feature@obsolete_counts        enabled                        local
grenier  feature@zpool_checkpoint       enabled                        local

Code: Select all
$ zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
grenier            1,02T  2,49T  1,41M  /Volumes/grenier
grenier/Backups    1,02T  3,46T  46,6G  -
grenier/shared      632K  2,49T   464K  /Volumes/grenier/shared
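Creating the pool with by-id paths, as mentioned above, would look something like this (the device identifiers here are placeholders; use the actual entries under /var/run/disk/by-id for your disks):

Code: Select all
```shell
# Two mirrored vdevs, ashift=12 (matching the pool property above).
# media-AAAA etc. stand in for real by-id entries.
sudo zpool create -o ashift=12 grenier \
    mirror /var/run/disk/by-id/media-AAAA /var/run/disk/by-id/media-BBBB \
    mirror /var/run/disk/by-id/media-CCCC /var/run/disk/by-id/media-DDDD
```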

One thing I did differently is that I used Disk Utility to format my 1 TB ZVOL instead of doing it via the CLI. I didn't enable the mimic_hfs option on the main dataset, but I believe that's not relevant because ZVOLs aren't datasets.

I also remember twiddling with the "enable any destination" default (as per …), but I disabled it afterward since I didn't plan on going through a network share locally :lol:
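If memory serves, that "enable any destination" tweak is the well-known defaults key for unsupported network volumes; roughly this (worth double-checking before relying on it):

Code: Select all
```shell
# Let Time Machine offer otherwise-unsupported network volumes
# as backup destinations. Delete the key to revert.
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```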

Apart from that, I don't think I'm that far off "standard behavior".

Note that the sparse-image creation you're referring to is usually done by Time Machine automatically, since (I believe) the only filesystem with support for directory hardlinks is HFS+ (and maybe APFS now that it exists, but my own TM backup is still HFS+ so I can't confirm). Ergo, you should be able to get a sparse image when forcing the use of an "unsupported" volume (per the StackExchange answer).

I also made a few recursive snapshots of the whole dataset tree, and it didn't seem like the backup would reset. I still haven't tested the behavior for clients (through the network share), so I'll try to do that now.
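For reference, the recursive snapshots were along these lines (the snapshot name is just an example):

Code: Select all
```shell
# Recursively snapshot every dataset in the pool, tagged with today's date.
sudo zfs snapshot -r grenier@tm-test-$(date +%Y%m%d)

# List the snapshots to confirm they were taken.
zfs list -t snapshot -r grenier
```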