by macz » Mon Mar 05, 2018 8:33 am
My ‘old’ server, which started on OS X Snow Leopard with Apple's original pre-release ZFS 10a284 code, was an iStar 4U chassis with 9 hot-swap slots.. an E8400 on a Gigabyte board with 8GB of RAM.. that box is still running great, although now on OS X El Capitan with OpenZFS on OS X 1.4.5, I think. In many ways the old Snow Leopard code was better.. tighter Finder integration, and it worked great as backing storage for OS X Server.. but it was explained to me that in 10.7 or 10.8 Apple rewrote significant enough portions of the file system that it jacked everything up.
Currently my ESXi server is in a 3U Rackable Systems chassis with 12 hot-swap slots.. with a Rackable Systems 16-drive shelf attached via SFF-8088. That shelf is currently set up with 15 drives in a 3x5 RAIDZ1 (three 5-drive RAIDZ1 vdevs) as a backup pool.. OmniOS is pretty performant and that pool scrubs at over 900MB/s, and that is with old, slow Hitachi 2TB drives that only average around 100MB/s each.
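For anyone who wants to try the same layout, creating a pool like that backup shelf looks roughly like this. Just a sketch.. the pool name and the c#t#d# device names are placeholders for your own disks, not my actual setup:

    # one backup pool built from three 5-drive raidz1 vdevs (15 drives total)
    zpool create backup \
        raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz1 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
        raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    # kick off a scrub and check throughput / errors
    zpool scrub backup
    zpool status backup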
ZFS is a tool.. it's not an end-all be-all, and it's not for every use case. And all the talk about how pools have to be striped mirror pairs vs. RAIDZ, blah blah.. it's a toolbox, and every use case calls for something different.
In my case the data doesn't change radically and is replaceable, so my bulk online storage is just a stripe.. that's right, no redundancy. It's checksummed, so I know if there are issues, but it won't self-repair. But I can add drives 1, 2, 5 at a time, whatever, and it's FAST. It gets backed up to the striped RAIDZ1 shelf, which obviously does have redundancy and self-healing. So for me.. I carry little overhead on the online pool that is up 24/7, I get better storage density at lower cost, and when I get to 5 drives or so the pool will get upgraded to higher-capacity drives and the older 5 drives will become another RAIDZ1 vdev in the backup pool.
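To make that concrete, here is roughly what that workflow looks like on the command line. Again just a sketch with placeholder pool, disk, and snapshot names, not my exact commands:

    # online pool: plain stripe, no redundancy, grows one disk at a time
    zpool create fastpool c1t0d0 c1t1d0
    zpool add fastpool c1t2d0        # add capacity whenever a new drive shows up

    # checksums still catch corruption here, they just can't repair it
    zpool status -v fastpool

    # backups go to the redundant raidz1 pool via snapshots + send/receive
    zfs snapshot -r fastpool@backup1
    zfs send -R fastpool@backup1 | zfs receive -Fdu backup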
Look at ZFS as just another tool in the bag and forget all the hype.. very few use cases actually require log devices and other such nonsense...