mac osx server build 500TB and bigger

All your general support questions for OpenZFS on OS X.

mac osx server build 500TB and bigger

Postby reco » Mon Jul 27, 2015 10:27 am

hi guys,

i am looking into moving my ZFS server installations from OmniOS Supermicro boxes to a Mac OS X server.
we decided to do this because netatalk/AFP is discontinued and we need a reliable Mac OS X NAS based on ZFS.

because the current Mac Pros only have Thunderbolt, i am wondering how well these Thunderbolt-to-SAS adapters might work:
RocketStor 6328L
http://www.highpoint-tech.com/USA_new/s ... erview.htm

i want to connect 4 x Supermicro SC847E26-RJBOD1 JBODs:
http://www.supermicro.com/products/chas ... RJBOD1.cfm
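to give an idea of what i'm after, here's a rough sketch of the pool layout (device names are made up and the vdev width is just an example, nothing is decided yet):

Code:
    # one pool spanning the JBODs, built from 10-wide raidz2 vdevs
    # ashift=12 since these will be 4K-sector drives
    zpool create -o ashift=12 tank \
      raidz2 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10 disk11 \
      raidz2 disk12 disk13 disk14 disk15 disk16 disk17 disk18 disk19 disk20 disk21
    # ...and so on for the remaining enclosures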

any comments, suggestions or experience?
christof
reco
 
Posts: 4
Joined: Sat Sep 06, 2014 10:28 am

Re: mac osx server build 500TB and bigger

Postby reco » Thu Jul 30, 2015 4:29 pm

is anybody using OpenZFS in production with 200TB+ of data?
reco
 
Posts: 4
Joined: Sat Sep 06, 2014 10:28 am

Re: mac osx server build 500TB and bigger

Postby tangles » Mon Aug 03, 2015 6:18 pm

Hi Reco,

How many client connections?

I don't think you'll find anyone in the world who comes close to that amount of storage solely managed/attached to OS X, mainly because HFS is one of the worst file systems still in use today. At least I don't have to tell you about that, given you're in this forum ;)
I like OS X, but I wouldn't trust it with that much data attached, even with ZFS.

I have an OS X 10.10.3 Server with 24GB RAM at home running ZFS (1.3.1-r2, with the ARC limited to 16GB) and 6 x 4TB drives, mirrored, connected via an RR2744 SAS HBA.
Every 90 days or so I need to reboot the server because the network stack becomes completely unresponsive (not even pingable)...
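For what it's worth, the setup is nothing fancy; roughly this (disk numbers from memory, and I may be misremembering the exact tunable name for the ARC cap):

Code:
    # three 2-way mirrors striped into one pool
    zpool create -o ashift=12 tank \
      mirror disk2 disk3 \
      mirror disk4 disk5 \
      mirror disk6 disk7
    # /etc/zfs/zsysctl.conf -- caps the ARC at 16GB
    kstat.zfs.darwin.tunable.zfs_arc_max=17179869184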

I use Apple's SMB exclusively now (I used to use NFS). Are you sure you can't omit AFP from your environment?
Server-side copy isn't implemented yet, but it sure will be nice once it is...

When Mac customers want/need more than 50TB of storage, I tend to deploy boxes from GBLabs to keep the support simple.
Admittedly, the customers I deal with needing this much storage are mostly media based – i.e. they have bugger all understanding about enterprise storage and so simple = best here.

In my man-cave, I have an old/ancient system that I play with for ZFS.
1 x 2009 Xserve 32GB RAM (1.3.1-r2 with arc limited to 16GB)
1 x Brocade 300e FC switch
5 x Xserve RAIDs (only three are full of disks and connected to the FC switch)

So that's 42 disks connected (as JBOD) to the Xserve for ZFS to manage.
I use ZFS's send/recv to move data from the pool of 4TB disks to this old clunker over cheap, nasty Chelsio 10GbE cards (point to point).
After transferring ~ 8TB to the Xserve, I'm unable to export the pool...
I thought it was flushing caches/ARC, but I've left it for a good few hours and the pool stays unresponsive.
Issuing a reboot hangs too, because ZFS is doing "something", so I resort to hard-rebooting the Xserve, and all is fine after that.
Massive PITA to diagnose, because I don't want to have to move 8TB over the network again just to reproduce the conditions...
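If anyone wants to reproduce it, the transfer is just plain send/recv piped across the point-to-point link, roughly like this (addresses, snapshot and pool names made up):

Code:
    # on the Xserve (receiving end of the 10GbE link)
    nc -l 3333 | zfs recv -F clunker/backup
    # on the sending box
    zfs send -R tank@weekly | nc 10.0.0.2 3333
    # and this is the step that wedges afterwards:
    zpool export clunker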

So like I said... I wouldn't trust OSX with that much storage... not yet anyway.
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: mac osx server build 500TB and bigger

Postby JasonBelec » Mon Oct 26, 2015 1:09 pm

I really don't have an issue with large pools under OS X, either here on site or off site at clients. However, mileage may vary depending on the setup.
JasonBelec
 
Posts: 32
Joined: Mon Oct 26, 2015 1:07 pm

Re: mac osx server build 500TB and bigger

Postby lundman » Mon Oct 26, 2015 3:59 pm

ZFS (1.3.1-r2) has the unmount ARC block bug, followed by unlinked_drain on next import. Both issues have been fixed :)
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

