erratic performance with new drives

New to OpenZFS on OS X (Or ZFS in general)? Ask your questions here!


Postby e8vww » Fri Dec 15, 2017 11:52 pm

I just created a two-disk mirror with brand-new 8 TB enterprise drives connected via SATA. Here are a few screenshots from iStat Menus:

https://imgur.com/a/151sZ

Copying a 19 GB MKV test file from a SATA HFS drive to the ZFS mirror, the write speed is constantly up and down.

Versus the same file copied to another SATA HFS drive, where the write speed remains nearly constant.


I created the pool using this command:

Code:
sudo zpool create -f -o ashift=12 -O casesensitivity=insensitive -O normalization=formD $poolname mirror diskX diskY

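As a quick sanity check (a sketch; "tank" is a placeholder pool name), you can confirm the vdevs actually got ashift=12 with zdb, which ships with the ZFS tools:

```shell
# Sketch: confirm the ashift the pool was created with.
# "tank" is a placeholder pool name; substitute your own.
pool=tank
if command -v zdb >/dev/null 2>&1; then
    # Look for "ashift: 12" in the cached pool configuration.
    zdb -C "$pool" | grep ashift
else
    echo "zdb not found - ZFS tools not installed"
fi
```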

Available RAM does not change during the copy.

CPU use is low.

Copying the same 19 GB file a second time gave much better results, though still with periodic drops to zero.

The Finder also hangs during the copy. Any thoughts on what is causing this? The symptoms seem consistent with an alignment problem, but as mentioned above I used the recommended ashift=12, and the drives are Advanced Format 512e.
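For reference, ashift is the base-2 logarithm of the vdev's block size, so ashift=12 means 4096-byte allocations, which line up with the 4 KiB physical sectors of a 512e Advanced Format drive:

```shell
# ashift is log2 of the vdev block size: ashift=12 -> 2^12 = 4096 bytes,
# matching the 4 KiB physical sector of a 512e drive, so writes stay
# aligned and avoid read-modify-write cycles on the 512-byte emulation.
echo $((1 << 12))   # prints 4096
echo $((1 << 9))    # prints 512 (ashift=9, the old default for 512n disks)
```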
e8vww
 
Posts: 51
Joined: Fri Nov 24, 2017 2:06 pm

Re: erratic performance with new drives

Postby lundman » Mon Dec 18, 2017 12:31 am

There was some throttle code we recently fixed in master that has not been pushed out yet; I'm hoping it will improve what you see here.
lundman
 
Posts: 601
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: erratic performance with new drives

Postby e8vww » Fri Dec 22, 2017 8:23 pm

lundman wrote: There was some throttle code we recently fixed in master that has not been pushed out yet; I'm hoping it will improve what you see here.

https://imgur.com/4IBeXIY

Understood. I created a second zpool using 2x 4 TB green drives, and a test ZFS-to-ZFS copy of video data is faster and more consistent. Still, there are momentary (<1 s) drops to 0 KB/s. Is that a characteristic of ZFS when copying large blocks of video data?
Edit: Copying small files is much slower.
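One way to see those drops directly (a sketch; "tank" is a placeholder pool name) is to watch per-second throughput with zpool iostat while a copy runs:

```shell
# Sketch: sample pool throughput once a second for 10 samples during a
# copy; drops to 0 in the write-bandwidth column show up immediately.
# "tank" is a placeholder pool name; substitute your own.
if command -v zpool >/dev/null 2>&1; then
    zpool iostat tank 1 10
else
    echo "zpool not found - ZFS tools not installed"
fi
```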

Re: erratic performance with new drives

Postby macz » Sun Feb 18, 2018 6:40 pm

I don't think that ZFS behaves properly with drives that have large caches.

My pools on much older drives with 32 MB caches (like 1.5 TB Seagate ST drives) are much faster than newer 8 TB drives with 128 MB caches, even though as raw disks the 8 TB drives write way faster than the old Seagates.

Further: I just added a new 8 TB drive with a 256 MB cache to an existing pool of two 8 TB drives with 128 MB caches, making a three-disk stripe.

During writes you would expect the new drive to be doing more writes, if anything, since the previous two were at nearly 60% capacity and ZFS should be 'leveling' the writes.

But the opposite happened: the two existing Reds with smaller caches were writing with greater throughput and IOPS than the new drive with the 256 MB cache. All three are Red variants, two EZZX and a newer EZFX I think.

Kinda odd.
macz
 
Posts: 53
Joined: Wed Feb 03, 2016 4:54 am

Re: erratic performance with new drives

Postby e8vww » Tue Feb 20, 2018 7:34 am

macz wrote: I don't think that ZFS behaves properly with drives that have large caches. My pools on much older drives with 32 MB caches (like 1.5 TB Seagate ST drives) are much faster than newer 8 TB drives with 128 MB caches.


I'm seeing a similar effect: a mirror of worn-out 3 TB drives is getting the same speeds as a new 8 TB mirror.

Maybe we need a new thread to discuss performance issues, but I'll leave that up to the devs. Here are a few screenshot examples of ZFS performance issues I have encountered. Note that in each case I used large video files to make sure it wasn't an issue of many small files slowing the transfer, and no other major disk operations took place during the test transfers. I have arc_max set to 1 GB (can I make it 0?).

Speed drop after 1 h while copying 750 GB from one ZFS mirror to another (1 h scale).

Same as above (24 h scale).

Copying between mirrors 1 and 2 produces a speed drop in a concurrent internet -> mirror 3 transfer.

mkvtoolnix v20 (Mac app): poor performance remuxing MKV files from one ZFS mirror to another, with frequent pausing.
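On the arc_max question: the value is specified in bytes, and on most ZFS ports setting zfs_arc_max to 0 means "use the default", not "disable the ARC", so a small nonzero value is the practical floor. A minimal sketch for computing the cap (the sysctl name below is an assumption; verify the tunable names on your own build):

```shell
# Compute the byte value for a 1 GiB ARC cap.  On OpenZFS on OS X the
# cap is set via a sysctl tunable (name below is an ASSUMPTION; check
# the tunables exposed by your build before using it):
#   sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=1073741824
# On most ports, zfs_arc_max=0 means "use the default", not "no ARC".
echo $((1 * 1024 * 1024 * 1024))   # prints 1073741824
```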
Last edited by e8vww on Thu Feb 22, 2018 3:28 am, edited 1 time in total.

Re: erratic performance with new drives

Postby macz » Tue Feb 20, 2018 9:27 pm

Another thing I have noticed, trend-wise, here and in other threads: people testing with the ARC taken down to much smaller levels than the default see faster write behaviour.

Way back when ZFS was young, it worked great on commodity drives and didn't seem all that RAM-reliant; of course there was no dedupe, which didn't even exist in the beginning.

I am thinking that over time, as ZFS matured and became more enterprise-accepted, it got tuned for living in massive enterprise filer boxes with huge RAM, enterprise drives, and no other workload for the kernel except running the ZFS filer.

I just don't think the ZFS ports are as tuned and efficient for us hobby types anymore.

Re: erratic performance with new drives

Postby e8vww » Wed Feb 21, 2018 1:00 am

macz wrote: I just don't think the ZFS ports are as tuned and efficient for us hobby types anymore.


Interesting. I am planning to set up a new server just for files. What OS is OpenZFS native to, if any? Which OS is the fastest and most reliable?

Re: erratic performance with new drives

Postby lundman » Tue Feb 27, 2018 3:45 am

IllumOS is the OS most native to ZFS: Solaris 11 if you want to go with Oracle ZFS, or something like OmniOS for OpenZFS. It is very stable; no OS is more stable :) But it can lag behind in hardware support compared to something like Linux, which supports the most exotic of hardware setups. FreeBSD is a decent compromise between the two.

Re: erratic performance with new drives

Postby lundman » Tue Feb 27, 2018 3:49 am

I do appreciate the effort put into trying to figure out the bottleneck. I have not yet reached the milestone of looking at speed improvements and general optimisations, but perhaps the port is mature enough now that we could.

One thing that would be interesting to see, once it is running slower than expected, is a flamegraph, so we can see what it is spending the most time doing.

This usually entails running:
Code:
sudo dtrace -x stackframes=100 -n 'profile-997 /arg0/ {
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks


Run it while the slowdown is happening; it will run for 60 s and produce a hefty text file. That can then be turned into an SVG flamegraph, which I can do if I have access to the text file.
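For anyone who wants to render the SVG themselves, the conversion is a two-step pipeline using Brendan Gregg's FlameGraph scripts (a sketch; the repo URL is assumed current):

```shell
# Sketch: collapse the dtrace stacks and render the flamegraph using
# Brendan Gregg's FlameGraph scripts (clone the repo first if missing).
if [ -x FlameGraph/stackcollapse.pl ]; then
    FlameGraph/stackcollapse.pl out.stacks > out.folded
    FlameGraph/flamegraph.pl out.folded > out.svg
else
    echo "clone https://github.com/brendangregg/FlameGraph first"
fi
```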

Re: erratic performance with new drives

Postby macz » Tue Feb 27, 2018 5:27 am

@lundman

I would love to see half of the analytics that Fishworks/Solaris have created for their storage OS; lots of eye candy.

I have come across several GitHub repos and other websites from some dtrace ninjas with some cool scripts.



For a really fast, reliable ZFS, look no further than Solaris: their ZFS is still the fastest, but the pools are not compatible with OpenZFS, so you are locked into Solaris.

Some have run the Sun/Solaris storage appliance simulator VM, which is full-featured and without a time limit; some have even ported it to bare metal, and it's performant.

The new Solaris 11.4 supports SMB3, which is much quicker for OS X than the SMB 2.1 in OmniOS, OpenIndiana and the like.

This project, with its emulated HFS and other optimizations, may in the end prove best in a Mac environment, but it suffers many performance setbacks; and since OS X is a GUI environment, it has far greater overhead for a storage-only box than, say, an Ubuntu Server or OmniOS install with napp-it, which is a GUI front end for ZFS.

Overall, the Sun storage appliance is by far the best GUI front end and very usable for a filer box, even emulated on ESXi or VirtualBox; you can even pass through controllers.

OmniOS with ZFS, even the free version, has a pretty good GUI (via napp-it), is pretty performant, and is more stable than OS X ZFS, but it lacks SMB3 and other OS X integration.

So in the end you just have to choose. hehe
