Hi, I have two cMP 5,1 (2010) machines, each with an SSD as the main disk, 16 GB of RAM and a couple of Areca SAS RAID cards (though I can configure them as pure JBOD passthrough) connecting a couple of SAS 6G Stardom 8-bay JBOD enclosures. Both machines run macOS High Sierra with all updates.
One machine has been the main server, where 15 users connect via SMB to copy data (they don't work off the server over the network) from modern Macs running a mix of Catalina, Big Sur and Monterey. After some files that had resided on that server for years turned out to be corrupted, I decided to move this server to ZFS, while the other stays on RAID6 with JHFS+ and acts as a backup server, where the main server and the users make automatic backups with Carbon Copy Cloner.
On both machines the only things installed are the operating system, Carbon Copy Cloner (CCC) and the Areca PCIe cards with their drivers, nothing else. They will be used only to share files, and no other software will run on them besides CCC and ZFS.
My idea is to configure the main server with ZFS, share three datasets over SMB, and back those up via CCC to the backup server's RAID6 JHFS+ volumes until I have a stable configuration on the main server; then I can go all-ZFS on both machines and back up via zfs send/receive over ssh to the backup server. One of the two will also run the Backblaze client to make a second, offsite backup to their cloud service, but I haven't decided yet which machine that will be (it will depend on whether there are any incompatibilities with the ZFS implementation).
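For reference, the send/receive step I'm planning would look roughly like this; the dataset name (bigdata/projects), the destination pool (backuppool) and the host (backupserver) are just placeholders for my setup:

```shell
# Take a snapshot of one dataset, then replicate it to the backup
# machine over ssh. The first send is a full stream; the destination
# dataset is created by the receive.
zfs snapshot bigdata/projects@2023-01-15
zfs send bigdata/projects@2023-01-15 | \
  ssh backupserver zfs receive backuppool/projects

# Later runs send only the changes since the last common snapshot
# (incremental, -i). -F on the receive rolls back any local changes
# on the backup side so the stream applies cleanly.
zfs snapshot bigdata/projects@2023-01-16
zfs send -i bigdata/projects@2023-01-15 bigdata/projects@2023-01-16 | \
  ssh backupserver zfs receive -F backuppool/projects
```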
I have installed OpenZFS on OS X 2.1.6 on the main machine and created the zpool with three datasets. I used the main options suggested on this site to create the zpool (sudo zpool create -f -o ashift=12 -O compression=lz4 -O casesensitivity=insensitive -O normalization=formD bigdata raidz1 disk1 disk2 disk3 disk4), but when I receive a batch of 8 new disks for that enclosure I will create a raidz2 and move everything to the new zpool.
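When the new disks arrive, the migration I have in mind would be something like the sketch below (the new pool name and disk identifiers are placeholders); a recursive replication stream carries the datasets, their properties and their snapshots over to the new pool:

```shell
# Build the new raidz2 pool from the 8 new disks, with the same
# dataset-level options as the original pool.
zpool create -o ashift=12 -O compression=lz4 \
  -O casesensitivity=insensitive -O normalization=formD \
  bigdata2 raidz2 disk5 disk6 disk7 disk8 disk9 disk10 disk11 disk12

# Snapshot everything recursively, then send the whole pool as a
# replication stream (-R) into the new pool. -F lets the receive
# overwrite the freshly created (empty) destination root dataset.
zfs snapshot -r bigdata@migrate
zfs send -R bigdata@migrate | zfs receive -F bigdata2
```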
I have this running. I selected the three datasets in the Finder and changed the permissions so everyone can read and write for the time being (this small company has always shared files that way, with every employee connecting as guest… they had problems with permissions on files and that was their solution). I will change that in the future, but right now I don't want to mess with permissions. I am copying 14 TB of files to the new zpool, split across the three datasets I created so I can snapshot each one individually, and everything seems to be going fine.
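The per-dataset snapshots I have in mind would be along these lines (dataset and snapshot names are just examples from my layout):

```shell
# Snapshot one dataset on its own schedule, independently of the others.
zfs snapshot bigdata/projects@daily-2023-01-15

# See which snapshots exist and how much space they hold.
zfs list -t snapshot

# If something gets corrupted or deleted, revert that one dataset
# to its last good snapshot without touching the other two.
zfs rollback bigdata/projects@daily-2023-01-15
```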
… and I have tons of questions, but let's start with your suggestions: what would you recommend for this pair of servers? Anything different from the options I have chosen?
All the datasets are shared via the standard dialog in the Sharing preference pane, SMB only, with guest access enabled. My datasets are pure ZFS volumes in the Finder, with no APFS or HFS emulation enabled, and the files (mainly PDF, Photoshop, Illustrator, InDesign, Word, Excel, PowerPoint, zip files and the like) seem to copy correctly. Since there won't be any Photos or iTunes/Music libraries on the server, I haven't bothered to enable HFS+/APFS emulation. Could that be a problem?
Let's start with this and I will share my other questions later. Thanks in advance to everybody for all the information here on the forums and for this project. I wish Apple would support ZFS, but what Lundman and the rest of the past and present contributors have achieved is incredible. Thank you.