ZFS Failover Across Multiple Machines?

All your general support questions for OpenZFS on OS X.

ZFS Failover Across Multiple Machines?

Postby Haravikk » Wed Oct 26, 2016 12:49 am

So I'm wondering, is it possible to configure ZFS with failover across multiple machines on its own, or does this require dedicated shared storage software?

I know it can be done in theory for a single user: you can set up each machine with its own ZFS array, expose it as one massive iSCSI target, and then mirror those targets with ZFS on the client (ideally with a nice big cache), so if one machine goes down there is no interruption, just like mirroring of disks. However, the cost of resilvering a mirror that size would be enormous, even if the machine was only unavailable temporarily.
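Roughly what I mean on the client side, assuming the two servers' iSCSI LUNs show up locally as /dev/disk3 and /dev/disk4 (placeholder names, not real devices):

    # mirror the two network-backed LUNs into one local pool
    sudo zpool create netpool mirror /dev/disk3 /dev/disk4

    # optionally add a fast local disk as the "nice big cache" (L2ARC)
    sudo zpool add netpool cache /dev/disk5

If one server drops out the pool just runs degraded off the other LUN; the pain is the resilver once it comes back.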

Otherwise, with multiple machines it's possible to just connect to one and have it frequently send updates to the other, but that only keeps the systems in sync as of the last send and provides no real failover, since switching to the slave machine would mean losing any data written since then.
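By "send updates" I just mean plain snapshot replication, something like this (dataset names and the standby host are placeholders):

    # initial full copy to the standby machine
    zfs snapshot tank/data@base
    zfs send tank/data@base | ssh standby zfs receive backup/data

    # then, periodically, incrementals on top of the last common snapshot
    zfs snapshot tank/data@latest
    zfs send -i tank/data@base tank/data@latest | ssh standby zfs receive -F backup/data

Anything written after the last incremental is exactly what would be lost on a switch-over.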


Is failover something that ZFS is capable of doing on its own, or does it require specialist software? If it's the latter, are there any open source recommendations?
Haravikk
 
Posts: 82
Joined: Tue Mar 17, 2015 4:52 am

Re: ZFS Failover Across Multiple Machines?

Postby Sharko » Sun Oct 30, 2016 9:01 am

Wow, that is quite a question! If no one responds substantially here you might try contacting Allan Jude through the BSDNow podcast (feedback@bsdnow.tv). He and Michael Lucas wrote a couple of books about ZFS on FreeBSD (and if ZFS has this capability it will undoubtedly be in book 2, "Advanced Topics"; I have book 1, and failover is not discussed there). You might be wise to keep your question generic and not bring up OS X; their podcast is more about the true BSDs like FreeBSD, OpenBSD et al. Since the ZFS implementation is OpenZFS in both FreeBSD and openzfsonosx, I would assume that whatever suggestion he comes up with is going to apply to both, at least as far as the ZFS part of it goes.

That being said, my suspicion is that failover is going to require software outside of OpenZFS proper. Hopefully you would be able to get that software to run on OS X, either through brew or macports. And if you ever get it working, post an update here!

Kurt
Sharko
 
Posts: 230
Joined: Thu May 12, 2016 12:19 pm

Re: ZFS Failover Across Multiple Machines?

Postby Sharko » Sun Oct 30, 2016 9:25 am

Also, you might want to look at the post by s_mcleod over in the "Off the Wall" section of this forum. If anybody here knows about failover, I'm guessing it is Lundman or him... the "Welcoming... myself" post has some serious server porn there.
Sharko
 
Posts: 230
Joined: Thu May 12, 2016 12:19 pm

Re: ZFS Failover Across Multiple Machines?

Postby tangles » Tue Nov 01, 2016 11:44 pm

Too easy mate...

Just run GlusterFS on top of ZFS and share it out with SAMBA CTDB enabled.

:ugeek: :ugeek: :ugeek:

Here's the GlusterFS on ZFS bit:
https://gluster.readthedocs.io/en/lates ... 0On%20ZFS/

and here's the SAMBA with CTDB bit:
https://access.redhat.com/documentation ... 09s04.html
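Very roughly, the shape of it, with placeholder names, and assuming CTDB itself is already set up per that Red Hat doc:

    # each node keeps its Gluster brick on a ZFS dataset
    zfs create tank/brick1

    # replicated Gluster volume across two nodes
    gluster volume create gvol replica 2 node1:/tank/brick1 node2:/tank/brick1
    gluster volume start gvol

    # smb.conf on each node: hand clustering to CTDB, share the mounted Gluster volume
    [global]
        clustering = yes

    [gvol]
        path = /mnt/gvol
        read only = no

Clients connect to a floating CTDB address, so one node dropping out doesn't take the share down.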

Doesn't it suck when cheap never ever = easy... :roll:


and then I found this for ya:

https://github.com/ewwhite/zfs-ha/wiki

8-)


(there's no strikethru option so had to colour the text to grey instead)
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: ZFS Failover Across Multiple Machines?

Postby tangles » Wed Nov 02, 2016 12:41 am

Just thought of this crazy way...

Run two (or more) OwnCloud/NextCloud servers, and use multiple zvols on each to create a mirrored or raidz pool on your main client.
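Roughly (sizes and names are just examples, and I'm assuming the zvols get presented to the client as block devices somehow, iSCSI or similar):

    # on each server: a zvol to back the client-side pool
    zfs create -V 500G tank/clientvol

    # on the main client, once the servers' zvols show up as disks (placeholder names):
    sudo zpool create cloudpool raidz /dev/disk3 /dev/disk4 /dev/disk5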

Then either share out the pool to the rest of your clients using OwnCloud's client, or use something like unison perhaps...
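The unison option would just be periodic sync from the machine that owns the pool to the other clients, something like (paths and hostname are placeholders):

    unison /Volumes/cloudpool/shared ssh://otherclient//Volumes/shared -batch -auto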

Not the most elegant/perfect solution, as you still have a single point of failure, but it's clearly the easiest/cheapest...
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

