erratic performance with new drives

New to OpenZFS on OS X (Or ZFS in general)? Ask your questions here!

Re: erratic performance with new drives

Postby lundman » Sun Mar 04, 2018 6:09 am

Took me a while to even find ZFS in there - that sure looks like a lazy Sunday afternoon.
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: erratic performance with new drives

Postby e8vww » Sun Mar 04, 2018 7:49 am

lundman wrote:Took me a while to even find ZFS in there - that sure looks like a lazy Sunday afternoon.


Yeah, the machine doesn't have much to do, which makes me wonder why ZFS is so slow. Here is another screenshot from when performance was better (only 1 pool in use instead of 2):

[Screenshot: system activity with only one pool in use]

Does this help? Let me know if you want a spindump and how to send it to you.
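
(If it helps with diagnosing, here is a rough way to compare the two pools while the slowdown is happening, using only the stock zpool tooling; "tank1" and "tank2" below are placeholder pool names:)

    # Watch per-vdev bandwidth and IOPS on each pool, refreshing every 5 seconds,
    # while copying the same test file to each one in turn.
    sudo zpool iostat -v tank1 5
    sudo zpool iostat -v tank2 5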
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby Brendon » Sun Mar 04, 2018 8:41 am

If you don't mind me asking, how many CPUs are in that machine?

Cheers
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: erratic performance with new drives

Postby e8vww » Sun Mar 04, 2018 9:01 am

Brendon wrote:If you don't mind me asking, how many CPUs are in that machine?

Cheers

2 x 2.8 GHz Quad-Core Intel Xeon
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby e8vww » Sun Mar 04, 2018 9:45 am

macz wrote:Some have run the Sun/Solaris storage appliance simulator VM, which is full featured and has no time limit. Some have even ported it to bare metal, and it's performant.
Overall, the Sun storage appliance is by far the best GUI front end and very usable for a filer box, even emulated on ESXi or VirtualBox. You can even pass through controllers...


Good to know. The only documentation I saw said it was for development purposes. What is the advantage of passing through the controller as opposed to just the drives themselves, as some of the tutorials do? How is the performance vs. OpenZFS? I can't find any benchmark comparisons. Wouldn't the iSCSI/SMB part become a bottleneck, since all data is passed out of the simulator, onto the network, and then back onto the same machine? Thanks for your helpful advice!
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby macz » Sun Mar 04, 2018 6:30 pm

Well, if you are talking about the Sun/Solaris storage appliance VM, here is what I know. I am by no means an expert, but I have played with it and read several blogs from folks who ran it with live data.


OK, so in the end, what you download from Sun/Solaris is indeed intended for experimenting and 'test driving'. The VM file comes with about a dozen 'virtual' disks of roughly 5 GB each to play with, so you can build pools of various designs.

Obviously you would not want to run it this way with live data, so folks have figured out that you can pass the VM actual disks.

It's up to you, but whatever you use to run the VM (VirtualBox, ESXi, etc.) determines how easy it is to pass those resources through and how big the performance hit on the storage path is.

For ESXi it is far more performant to pass the VM an entire controller at the hardware level, but I have run napp-it/OmniOS in ESXi using software-defined disks (RDM passthrough) and it works, albeit probably a bit less stable and a bit less performant. I pass napp-it an LSI 4e4i controller for an external drive shelf plus 2 Intel SSDs, which ESXi then uses as ZFS-backed primary VM storage via NFS. That works great. I also pass napp-it/OmniOS three 8 TB drives on the SATA controller via RDM for media and other bulk data where speed and IOPS are less critical. I need one SATA port for the SSD that napp-it/OmniOS lives on, in a straight VMFS datastore that ESXi uses during boot.
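
(For reference, a physical-compatibility RDM on ESXi is created with vmkfstools; this is only a rough sketch, and the device ID, datastore, and file names are made up:)

    # Create a physical-compatibility RDM mapping file for one raw disk
    # (the naa ID and paths are placeholders; real devices live under /vmfs/devices/disks/)
    vmkfstools -z /vmfs/devices/disks/naa.5000cca000000000 \
        /vmfs/volumes/datastore1/napp-it/bulk-disk1-rdm.vmdk

The resulting .vmdk pointer then gets attached to the storage VM like any other existing disk.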

The box boots ESXi off a USB stick on the internal header.
ESXi loads napp-it/OmniOS off the SATA 0 SSD -- this brings up the ZFS storage pools and makes NFS shares available to ESXi.
ESXi sees and mounts the NFS datastores for VM storage, and begins to boot the VMs in that pool.

Individual VMs use the NFS/SMB exports from napp-it for their datastore requirements.

It all works really slick, and all on one box: no external filer and no iSCSI to worry about. The best part is that since the VMs are presented over NFS, I can also mount that directory as SMB from remote machines and upload / work with the VM containers, something you can't do with iSCSI.
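
(The ZFS side of that on OmniOS is basically just dataset share properties; a minimal sketch, with made-up pool/dataset names and IP address:)

    # Dataset that holds the VM containers, shared over NFS (for ESXi)
    # and SMB (so remote machines can browse and upload the same files)
    zfs create tank/vmstore
    zfs set sharenfs=on tank/vmstore
    zfs set sharesmb=name=vmstore tank/vmstore

    # On the ESXi side, mount that export as an NFS datastore
    esxcli storage nfs add -H 192.168.1.10 -s /tank/vmstore -v vmstore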

Back to the Sun/Solaris VM:

If you keep it as a VM, you can pass it whatever resources you want.

If you pull it out of the VM container and run it bare metal, it will see the hardware directly and use the disks/controllers natively.

The biggest drawback I have seen to using the Sun/Solaris storage appliance is that you can't really do updates without a Sun/Solaris support account, at least until they post a new VM.

And Sun/Solaris ZFS, while head and shoulders the fastest and most performant, lacks the compatibility and features of OpenZFS, particularly the OS X features this dev team has built into O3X.

The web-based GUI, however, is the best in the industry in my opinion, although I have not messed with NexentaStor recently, as the community limits on their free version are so low it's useless.
macz
 
Posts: 53
Joined: Wed Feb 03, 2016 4:54 am

Re: erratic performance with new drives

Postby e8vww » Mon Mar 05, 2018 3:43 am

macz wrote:Here is what I know. I am by no means an expert.


Good to know there are so many options. I don't want to give up on O3X, because there seems to be real improvement with each new version. It was really fast with 1 pool; by the time I had all 6 switched over, it wasn't so fast. I don't want a separate file server if it isn't necessary; I would rather just attach drives to the Mac Pro as I have done for years. Still, I need at least one "fast" pool, so I will look into your suggestions. What are you using for a drive chassis?
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby macz » Mon Mar 05, 2018 8:33 am

My 'old' server, which started on OS X Snow Leopard with Apple's original pre-release ZFS 10a284 code, was an iStar 4U server chassis with 9 hot-swap slots: an E8400 on a Gigabyte board with 8 GB of RAM. That box is still running great, although now on OS X El Capitan and O3X 1.4.5, I think. In many ways the old Snow Leopard code was better: tighter integration with Finder, and it worked great as backing for OS X Server. But it was explained to me that in 10.7 or 10.8 Apple rewrote significant enough portions of the file system that it jacked everything up.

Currently my ESXi server is in a 3U Rackable Systems chassis with 12 hot-swap slots, with a Rackable Systems 16-drive shelf attached via SFF-8088. That shelf is currently set up with 15 drives in a 3x5 raidz1 as a backup pool. OmniOS is pretty performant: that pool scrubs at over 900 MB/s, and that is with old, slow Hitachi 2 TB drives that only average about 100 MB/s each.
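
(In zpool terms that 3x5 layout is just three raidz1 vdevs striped into one pool; a rough sketch with placeholder device names:)

    # Backup pool: three raidz1 vdevs of five disks each, striped together
    zpool create backup \
        raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz1 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
        raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    zpool scrub backup      # start a scrub
    zpool status backup     # shows scrub rate/progress and any repairs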

ZFS is a tool; it's not an end-all be-all, and it's not for every use case. All the talk about whether pools have to be striped mirror pairs vs. raidz, blah blah: it's a toolbox, and every use case requires something different.

In my case the data doesn't change radically and is replaceable, so my bulk online storage is just a stripe. That's right, no redundancy: it checksums, so I know if there are issues, but it won't self-repair. But I can add drives 1, 2, 5 at a time, whatever, and it's FAST. It gets backed up to the striped raidz1 shelf, which obviously has redundancy and self-healing. So for me, there is little overhead on the online pool that is up 24/7, I get better storage density at lower cost, and when I get to 5 drives or so, the pool will get refreshed with higher-capacity drives and the older 5 drives will become another raidz1 stripe in the backup pool.
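
(That grow-as-you-go stripe looks roughly like this; placeholder disk names again, and remember it has no redundancy, so checksum errors are detected but not repaired:)

    # Bulk media pool: plain stripe, starts with one disk
    zpool create media disk0

    # Grow it later one whole disk at a time (each added disk becomes another stripe member)
    zpool add media disk1
    zpool add media disk2

    # Checksum errors still show up here, even though they can't self-heal without redundancy
    zpool status -v media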

Look at ZFS as just another tool in the bag and forget all the hype; very few use cases require log devices and other nonsense.
macz
 
Posts: 53
Joined: Wed Feb 03, 2016 4:54 am

Re: erratic performance with new drives

Postby e8vww » Tue Mar 06, 2018 12:16 pm

macz wrote:Look at ZFS as just another tool in the bag and forget all the hype; very few use cases require log devices and other nonsense.


Thanks. I was hoping to get some more mileage out of my Mac Pro 3,1 (10 years of 24/7 use, never failed), but it doesn't support VT-d. Does that mean OpenZFS won't work well? O3X is 'acceptable' for 1080p streaming, but to upgrade to storing 4K Blu-ray I will probably need to go with a dedicated machine.
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby Brendon » Tue Mar 06, 2018 3:34 pm

Really? I'm pretty sure I stream BD media from my MacBook with a ZFS RAID array...?
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm
