erratic performance with new drives

New to OpenZFS on OS X (Or ZFS in general)? Ask your questions here!

Re: erratic performance with new drives

Postby macz » Tue Mar 06, 2018 11:53 pm

VT-d is only going to matter, to a small degree, for virtualization.. i.e. passing hardware resources directly through to a hypervisor's VM..

so if you are running OS X on your Mac Pro as its operating system (bare metal) and running ZFS on top of that.. VT-d does not come into play and ZFS gets the storage devices at the hardware level..

and as Brendon says.. a Mac Pro 3,1 is plenty good for a home filer, media server, etc.
macz
 
Posts: 53
Joined: Wed Feb 03, 2016 4:54 am

Re: erratic performance with new drives

Postby e8vww » Wed Mar 07, 2018 9:37 am

Brendon wrote: really? I'm sure I stream BD media from my MacBook with a ZFS RAID array...?


Streaming isn't the issue, it's the backup/restore time. It scrubbed 3306 GB in 10h00m, so ~92 MB/s during a scrub with no other ZFS activity, ~60 MB/s read during a backup with no other activity, and ~30 MB/s read with 5 MB/s of torrent activity on another drive. Pretty dismal. A 1080p Blu-ray is 25-50 GB, 4K ones are 50-100 GB; operations that took a few hours under HFS take a full day under O3X.

Same issue as this person is having: https://github.com/openzfsonosx/zfs/issues/588

macz wrote: VT-d is only going to matter, to a small degree, for virtualization.. i.e. passing hardware resources directly through to a hypervisor's VM..

so if you are running OS X on your Mac Pro as its operating system (bare metal) and running ZFS on top of that.. VT-d does not come into play and ZFS gets the storage devices at the hardware level..

and as Brendon says.. a Mac Pro 3,1 is plenty good for a home filer, media server, etc.


Yes, it's plenty fast, but six O3X pools on one machine doesn't seem to work well. I reread your last post re: RDM; I guess that is enough for the ZFS VM.

Edit: I was thinking of trying OpenZFS on Ubuntu first, since I won't have to recreate the pools. Or is there an OpenZFS VM? napp-it is Oracle ZFS, correct? Would I need Ubuntu Server or just the desktop version? I would appreciate some general steps on how to connect the disks to Ubuntu in VMware Fusion via RDM and then serve the OpenZFS volumes back to the host. Thanks all for your help!
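
Edit 2: for future searchers, my best guess at the raw-disk (RDM) step, using the vmware-rawdiskCreator tool that ships inside the Fusion app bundle. Untested, and the disk number, output path and pool name below are placeholders for whatever your setup uses:

    diskutil list                                  # find the right disk, e.g. /dev/disk3
    sudo diskutil unmountDisk /dev/disk3           # make sure OS X lets go of it first
    cd "/Applications/VMware Fusion.app/Contents/Library"
    sudo ./vmware-rawdiskCreator create /dev/disk3 fullDevice ~/vms/disk3-rdm lsilogic

Then attach the generated disk3-rdm.vmdk to the Ubuntu VM and run sudo zpool import -f <poolname> inside it, once per pool.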
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby macz » Thu Mar 08, 2018 2:15 am

strange that your pools are so slow..

I have a socket 775 E8400 server running El Cap and O3X 1.4.5 I think.. an 8-drive raidz1 scrubs at well over 300 MB/s, and those are VERY old Seagate 1.5 TB 3 Gb/s drives that have over 28,000 hours on them.

a true Mac Pro should not be so IO constrained.. that being said, you mentioned that the pools were doing other work during the scrub? if that is the case, then the scrub takes the lowest priority

I could back up that 8-drive pool to another 8-drive pool connected via an OLD Silicon Image dual eSATA PCIe 1x card and it would back up at just about 300 MB/s, which I believe saturates the 1x PCIe lane

investigate your pool structures and perhaps your individual drives.. using zpool iostat -v you will see IO at the drive level
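
for example, run this while the backup or scrub is going ('tank' is a placeholder pool name):

    zpool iostat -v tank 5    # per-vdev and per-disk ops and bandwidth, refreshed every 5 seconds

one slow drive dragging its mirror down will stand out immediately.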

there are some dtrace scripts around that will work on OS X but I don't have them
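
iosnoop is one tool that ships with OS X (it is dtrace-based, so newer releases may need SIP relaxed before it will run):

    sudo iosnoop    # prints each disk IO with its size and the process that issued it

that will show you whether something outside ZFS is hitting the disks.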

as for other pools and other OpenZFS operating systems:

if you are a power Linux user you can go that route.. if you don't know Linux I would not go that route first.

if you really want your pools to stand a fighting chance at interoperability then be very cautious about upgrading the pools and activating feature flags.. the OpenZFS site used to have a pretty up-to-date table of which flags work with each OS
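
you can check what a pool already has enabled before moving it ('tank' is a placeholder):

    zpool get all tank | grep feature@    # enabled/active/disabled state of every feature flag
    zpool upgrade                         # with no arguments, just lists pools that have flags available but not enabled

just hold off on zpool upgrade <pool> until you know every OS you care about supports the new flags.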

personally I would try napp-it either on

OpenIndiana bare metal if you want a GUI OS plus the napp-it web-based ZFS 'gui'

or

OmniOSce.. OmniOS is a server OS so no GUI, but napp-it has a web GUI front end that will aid in admin and setup of the system as well as the ZFS file system. if you buy a home license it has some other slick features like remote replication etc.
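
if memory serves, getting napp-it onto a fresh OmniOS box is a one-liner from their site (check napp-it.org for the current command rather than trusting my recollection):

    wget -O - www.napp-it.org/nappit | perl    # napp-it's online installer, run as root

after that everything is done from the web GUI on port 81.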

both of these are based on what was OpenSolaris and use OpenZFS code, so they are pretty compatible and I have moved pools between them.

napp-it on OmniOS is probably the most performant after Solaris proper.. but Solaris would not have pool compatibility.

Solaris 11.4 just came out with a public beta that added some nice features, and then there is running the Sun storage appliance simulator OS in a virtual environment like ESXi or VirtualBox..

really depends on what you are trying to accomplish.

if you just want a home network filer, then napp-it free will do it for you,

if you need the pools as dedicated attached storage for a Mac.. and need some of the compatibility that the fake HFS provides.. just stick with O3X, I am sure they will get the performance bugs worked out..
macz
 
Posts: 53
Joined: Wed Feb 03, 2016 4:54 am

Re: erratic performance with new drives

Postby e8vww » Sat Mar 10, 2018 7:50 pm

macz wrote: strange that your pools are so slow..

an 8-drive raidz1 scrubs at well over 300 MB/s, and those are VERY old Seagate 1.5 TB 3 Gb/s drives that have over 28,000 hours on them.



Just to restate: I have many small pools as opposed to one big one.

Six pools of two drives each:

8 TB mirror
8 TB mirror
8 TB mirror
3 TB mirror
3 TB mirror
4 TB mirror

From my last post: I see ~92 MB/s during a scrub with no other ZFS activity, ~90 MB/s read during a backup with no other activity, and ~30 MB/s read with 5 MB/s of torrent activity on another pool. If I am getting these numbers with O3X, is it reasonable to assume they would be any faster with the other solutions you mentioned? The bare drives were delivering 150-180 MB/s under HFS. One would assume that means a mirrored pair could theoretically deliver 300-360 MB/s on reads.

When I had 2-3 pools they seemed to be faster, but now with all six going they seem quite slow. Does anyone know if it's OS X that is generating regular 7 KB writes? I keep seeing these little 'flickers' even though Spotlight is turned off for those volumes.
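
Is fs_usage the right tool to catch whatever is doing it? I have been trying something like this (the grep pattern is just the mount point of one of my pools):

    sudo fs_usage -w -f filesys | grep /Volumes/media8a    # every filesystem call touching that volume, plus the process responsible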

With a virtual filer on VMware Fusion, would I select host-only networking?
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby macz » Sun Mar 11, 2018 8:46 am

therein might lie your problem.. partly

your pool layout is really not in line with ZFS design practices

really ZFS is designed as one pool per filer, unless there is a need for a pool of a different layout for a specific purpose.. i.e. one fast pool of mirrors for VMs and another, say raidz, for big data..

having multiple pools of the same design buys you nothing but overhead.. i.e. having multiple 2-drive mirrors.. you should just stripe those mirrors into one pool and control your data with datasets..

you have a lot of overhead there
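
a minimal sketch of what I mean ('tank', the dataset names and the disk numbers are all made up; find your real disks with diskutil list):

    # one pool, three mirrored vdevs striped together;
    # every read and write is spread across all six drives
    sudo zpool create tank \
        mirror disk2 disk3 \
        mirror disk4 disk5 \
        mirror disk6 disk7

    # datasets take the place of your separate pools,
    # each with its own properties if you want them
    sudo zfs create tank/media
    sudo zfs create tank/torrents

    # a mirror of bigger drives can be added later without rebuilding
    sudo zpool add tank mirror disk8 disk9

the vdevs do not even have to be the same size.. ZFS just biases new writes toward the emptier mirrors.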
macz
 
Posts: 53
Joined: Wed Feb 03, 2016 4:54 am

Re: erratic performance with new drives

Postby e8vww » Thu Mar 29, 2018 6:34 pm

macz wrote: therein might lie your problem.. partly

your pool layout is really not in line with ZFS design practices

really ZFS is designed as one pool per filer, unless there is a need for a pool of a different layout for a specific purpose.. i.e. one fast pool of mirrors for VMs and another, say raidz, for big data..

having multiple pools of the same design buys you nothing but overhead.. i.e. having multiple 2-drive mirrors.. you should just stripe those mirrors into one pool and control your data with datasets..

you have a lot of overhead there


I did more testing and it seems like most of the slow writes involved fragmented files. A torrent pool was at 60% fragmentation after two months' use. I don't want to use striped mirrors because I want to be able to create and destroy smaller pools to defragment from time to time. Also, I don't want to lock into a specific replacement drive size, since the new models are always getting bigger.
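
For anyone curious, that fragmentation number is from zpool list, which as I understand it reports free-space fragmentation rather than file fragmentation ('tank' is a placeholder):

    zpool list -o name,size,alloc,free,frag,cap tank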

Edit: I see the other threads on performance. It seems the 50% drop is in line with other people's experiences. One would assume six pools is not so much overhead for a Mac Pro that it would cause a 50% drop in throughput.
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm
