Two Time Machine backups at the same time causing errors


Postby a_wein » Sat Nov 04, 2017 10:13 am

Hi,

I'm not sure at all if this is ZFS related or not but it's kind of my best guess right now. I hope someone else out there is using a similar setup and can tell me if this is just me or a general issue.

The issue:
One backup fails as soon as two machines run a Time Machine backup to my O3X storage server at the same time. The machine that starts first is able to complete its backup; the second one fails, and the error depends on which machine it is.

One machine will get:
Code:
Failed to attach to image '/Volumes/TM-<Name1>/<Name1>.sparsebundle', DIHLDiskImageAttach returned: 110 (image not recognized)


The other machine tends to get this one:

Code:
Failed to mount because the image could not be attached, error: Error Domain=com.apple.backupd.ErrorDomain Code=21

The backup of the first machine also tends to get corrupted when the second machine tries to run a backup at the same time.
I tried adding quotas via the Server.app as well as directly on the ZFS file systems. There is more than enough free space available.
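
For the record, the ZFS side of the quota attempt looked roughly like this ("tm_name1" is a placeholder for the real dataset name, and the size is just an example):

Code:
# Cap a per-machine Time Machine dataset so one sparse bundle cannot eat the pool.
sudo zfs set quota=500G zfspool/tmbackups/tm_name1
# Verify the quota and current usage.
sudo zfs get quota,used,available zfspool/tmbackups/tm_name1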

My current "solution" is to take frequent snapshots and roll back to a known good state whenever a backup gets corrupted. (This kind of works, since there is usually just one machine around and not two.)
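
The snapshot/rollback routine is essentially this (the snapshot name is just an example):

Code:
# Take a snapshot while the backup is in a known good state.
sudo zfs snapshot zfspool/tmbackups/tm_name1@known-good-20171104
# After a corruption, roll the dataset back to that snapshot.
# Note: this discards everything written since the snapshot;
# add -r if newer snapshots exist (they will be destroyed).
sudo zfs rollback zfspool/tmbackups/tm_name1@known-good-20171104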

My setup:
macOS 10.12 / 10.13
O3X 1.6.1 / 1.7.0

The pool layout is a stripe of two mirrors with two disks each.
File systems:
zfspool/tmbackups
zfspool/tmbackups/tm_<name1>
zfspool/tmbackups/tm_<name2>
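
For reference, the pool and file systems were created roughly like this (the device names below are placeholders, not my actual disks):

Code:
# Stripe of two mirrors, two disks each; disk identifiers are placeholders.
sudo zpool create zfspool mirror /dev/disk2 /dev/disk3 mirror /dev/disk4 /dev/disk5
# One parent file system plus one child per backed-up machine.
sudo zfs create zfspool/tmbackups
sudo zfs create zfspool/tmbackups/tm_name1
sudo zfs create zfspool/tmbackups/tm_name2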



Any help / hint is appreciated! Thanks!
a_wein
 
Posts: 1
Joined: Sat Mar 15, 2014 3:01 am

Re: Two Time Machine backups at the same time causing errors

Postby MichaelNewbery » Wed Nov 08, 2017 1:08 am

I appear to have a similar problem.

10.12.6, with Server 5.3.1. SPL: 1.6.1-1, ZFS: 1.6.1-1
Each machine that is being backed up is on its own filesystem:
/Wells/machine1
/Wells/machine2
/Wells/machine3
/Wells/Mailstore
There are hourly snapshots of each file system.
Mailstore has an rsync task that runs every hour to sync the mail store from another server (a mail server). After the rsync completes, it snapshots /Wells/Mailstore.
The others are snapshotted every hour by a cron job (launchd, actually).
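
The hourly job is nothing special; conceptually it does something like this (the snapshot naming scheme here is illustrative, not my exact one):

Code:
#!/bin/sh
# Hourly snapshot of each Time Machine file system, run from launchd.
STAMP=$(date +%Y-%m-%d-%H%M)
for fs in Wells/machine1 Wells/machine2 Wells/machine3; do
    zfs snapshot "${fs}@hourly-${STAMP}"
done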
Time Machine exhibits the same symptoms: the machine that runs first seems to work OK, after which the others tend to fail. There is never a problem with /Wells/Mailstore, though.
MichaelNewbery
 
Posts: 10
Joined: Sat May 31, 2014 7:38 pm
Location: New Zealand

Re: Two Time Machine backups at the same time causing errors

Postby Jimbo » Wed Nov 08, 2017 1:54 am

I’ve had Time Machine backups going to ZFS for a few years with no issues; however, I use a ZVOL from my pool that is HFS-formatted and then presented to clients. I set this up way back before a lot of the HFS compatibility options were added and have never revisited it to see if I could make it work with a normal ZFS dataset.

Using a ZVOL is somewhat more limiting, but it does work reliably. Can’t remember what version I set it up with, but it is still working with High Sierra and 1.7.0.
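
From memory it was essentially just a ZVOL created and formatted along these lines (pool name, size, and device are made up for illustration):

Code:
# Create a 1 TB ZVOL; pool name and size here are illustrative.
sudo zfs create -V 1T tank/tmvol
# The ZVOL shows up as a raw disk; format it HFS+ (journaled) so
# Time Machine is happy. Replace diskN with the device the ZVOL gets.
sudo diskutil eraseDisk JHFS+ TimeMachine /dev/diskN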

James
Jimbo
 
Posts: 149
Joined: Sun Sep 17, 2017 5:12 am

Re: Two Time Machine backups at the same time causing errors

Postby MichaelNewbery » Thu Nov 09, 2017 10:46 am

I would much prefer to use a standard ZFS filesystem, however...

What parameters did you use to set up your ZVOL?

If I use ZVOLs, I would like to continue the current model, where each user gets their own filesystem (or ZVOL in this case) with daily snapshots, so that when Time Machine glitches, I can roll back just one user rather than having to roll back everyone. Is this likely to be a problem?

Also, is there any easy way (or even any way at all) to move my zfs snapshot hierarchy over to a ZVOL?
MichaelNewbery
 
Posts: 10
Joined: Sat May 31, 2014 7:38 pm
Location: New Zealand

Re: Two Time Machine backups at the same time causing errors

Postby Jimbo » Thu Nov 09, 2017 9:13 pm

Parameters for the ZVOL... so long ago, so far away. Sorry, I can't remember, but it wasn't anything special. The pool has LZ4 enabled; that is the only thing of note.

If you want to stick with a snapshot-level "restore" of the TM volumes, you'll want multiple ZVOLs, since a snapshot applies to a ZVOL's entire contents. That said, you could just "restore", i.e. copy out, only the sparse bundle you want from the snapshot (but this is no longer metadata magic).
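
If snapshot directory visibility is turned on for the dataset, copying a bundle out of a snapshot is just a file copy, something like this (snapshot and bundle names are examples, assuming the default mountpoint under /Volumes):

Code:
# Make the hidden .zfs/snapshot directory browsable on that dataset.
sudo zfs set snapdir=visible Wells/machine1
# Copy the sparse bundle out of a known good snapshot back into place.
cp -Rp /Volumes/Wells/machine1/.zfs/snapshot/hourly-2017-11-08-0900/machine1.sparsebundle \
       /Volumes/Wells/machine1/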

I think you're out of luck trying to move your snapshots to a ZVOL: by virtue of being formatted HFS+, the ZVOL contents will be rather foreign to ZFS (ZFS just presents a block device to the OS, which then has to deal with whatever is on it).

So, I don't think any of this helps you much...

James
Jimbo
 
Posts: 149
Joined: Sun Sep 17, 2017 5:12 am

