ZFS Performance Degradation

Here you can discuss every aspect of OpenZFS on OS X. Note: not for support requests!

ZFS Performance Degradation

Postby rabarar » Thu Sep 08, 2016 4:12 pm

Can someone explain why I see a significant performance degradation when I attempt to copy a 40 GB swath of a filesystem from a non-ZFS SSD filesystem, over a direct Thunderbolt-to-Thunderbolt connection, onto a newly created ZFS filesystem? It appears that after a certain amount of data is transferred within the process (i.e. scp), the copy slows down by roughly three orders of magnitude. If I break up the copy by directory (i.e. loop through and copy each entry individually), it seems to reset, until I encounter a large directory where throughput degrades again. Thoughts?

I'm using a Mac Pro with three 8 TB USB 3.0 drives in one raidz pool. I created a filesystem with lz4 compression, and I disabled Spotlight indexing (via mdutil).
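
For reference, the setup was roughly along these lines (the disk identifiers below are placeholders, not my exact ones):

# three 8 TB USB 3.0 disks in a single raidz vdev
sudo zpool create tank raidz disk2 disk3 disk4
# lz4 compression on the pool's root dataset
sudo zfs set compression=lz4 tank
# turn off Spotlight indexing on the mounted pool
sudo mdutil -i off /Volumes/tank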
rabarar
 
Posts: 5
Joined: Thu Sep 08, 2016 4:06 pm

Re: ZFS Performance Degradation

Postby haer22 » Sun Sep 11, 2016 9:23 am

Seems like you are filling up buffers and then the copying slows down.

Any log device?
What is the logbias setting?
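
You can check both quickly, assuming the pool is named tank:

zpool status tank     # any log device shows up under a "logs" entry
zfs get logbias tank  # default is latency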
haer22
 
Posts: 123
Joined: Sun Mar 23, 2014 2:13 am

Re: ZFS Performance Degradation

Postby rabarar » Tue Sep 13, 2016 12:55 pm

It's a pretty vanilla setup: no log device, and the default logbias setting of latency.

I added a 32 GB SSD log device (half the physical memory of the system) to see if it has any performance impact. I'll run a test later this evening and see. Would you expect an improvement in performance from adding a log device of that size to the pool?
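
For reference, attaching a dedicated log vdev to the pool is just something like this (the device identifier is a placeholder for the SSD):

sudo zpool add tank log /dev/disk5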
rabarar
 
Posts: 5
Joined: Thu Sep 08, 2016 4:06 pm

Re: ZFS Performance Degradation

Postby Brendon » Tue Sep 13, 2016 1:59 pm

Regarding your performance degradation, there are a couple of angles you can pursue.

1) Limit your ARC size to some reasonably small proportion of your memory through configuration; see https://openzfsonosx.org/wiki/Performance (there's a quick sketch below).

2) If you are getting glitchy performance, another technique that seems to work is to set kstat.zfs.darwin.tunable.zfs_dirty_data_max to about 128 MB, i.e. sudo sysctl -w kstat.zfs.darwin.tunable.zfs_dirty_data_max=$((128*1024*1024)).

Try (1) then (2) and let us know how you get on.
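
For (1), a minimal sketch of what the wiki page describes, assuming the tunable is named kstat.zfs.darwin.tunable.zfs_arc_max on your build, with an 8 GB cap as an example:

# one-off change, takes effect immediately
sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=$((8*1024*1024*1024))

# or persist it across reboots in /etc/zfs/zsysctl.conf, one name=value per line
kstat.zfs.darwin.tunable.zfs_arc_max=8589934592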

Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: ZFS Performance Degradation

Postby rabarar » Wed Sep 14, 2016 11:42 am

So I added a log device and that increased my performance materially, but my question now is: what are the optimal settings for kstat.spl.misc.spl_misc.os_mem_alloc and kstat.zfs.darwin.tunable.zfs_dirty_data_max on my machine, which has 64 GB of memory?

I had kstat.zfs.darwin.tunable.zfs_dirty_data_max set at 4294967296, and kstat.spl.misc.spl_misc.os_mem_alloc set at 49617174528.
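
(For reference, I'm just reading the current values back with plain sysctl:)

sysctl kstat.zfs.darwin.tunable.zfs_dirty_data_max
sysctl kstat.spl.misc.spl_misc.os_mem_alloc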
rabarar
 
Posts: 5
Joined: Thu Sep 08, 2016 4:06 pm

Re: ZFS Performance Degradation

Postby Brendon » Wed Sep 14, 2016 2:47 pm

Hi,

The real problem here, as far as I am concerned, is that version 1.5.2 of the software has quite poor memory performance; in fact it's a definite regression from the prior release. This is being worked on at the moment. If you are able to build the code, the current code in the repo runs better; however, I believe that none of the currently available public code is suitable for very-large-memory machines such as yours without explicitly constraining the ARC to be quite small.

As such, the recommendation would be to constrain your ARC to about 8 GB. I already gave you 128 MB for dirty_data_max.

I have no idea what you are trying to use your machine for. If it's a dedicated server and you want to use all RAM for ZFS, then we can talk offline; however, I'm assuming that, like most users, this is a desktop machine and you want it to "just work", in which case the recommendation above stands.

Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: ZFS Performance Degradation

Postby rabarar » Wed Sep 14, 2016 5:03 pm

Thanks Brendon,

That is indeed my intended use at this time. I'm hopeful that future releases will be better tuned for general use, with realistic performance expectations for the amount of memory typical in a desktop machine.

Best,
Rob
rabarar
 
Posts: 5
Joined: Thu Sep 08, 2016 4:06 pm

Re: ZFS Performance Degradation HELP!!! :)

Postby rabarar » Wed Sep 14, 2016 5:40 pm

UPDATE: (Read ORIGINAL POST Below First)
Okay, if I ran the import as follows, the pool came back online:

sudo zpool import -a -m


ORIGINAL POST:

Okay, I may have done a no-no... when I added the log device it was a file on my SSD drive that I constructed with mkfile. And after rebooting my box, I can't import the pool. It looks like all of the physical disks are there, but it's not recognizing the log device file. Any ideas?

Here's the output from attempting to import the pool "tank":

atomizer:~ robert$ sudo zpool import
   pool: tank
     id: 14067819714486668234
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
         devices and try again.
    see: http://zfsonlinux.org/msg/ZFS-8000-6X
 config:

        tank                                            UNAVAIL  missing device
          raidz1-0                                      ONLINE
            media-5173B56C-514C-584F-896C-A628691E3219  ONLINE
            media-B726B781-D322-544A-9AE8-41B08A97E28A  ONLINE
            media-9B538D6E-0592-F243-9057-E70E1DD38F8C  ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
atomizer:~ robert$
rabarar
 
Posts: 5
Joined: Thu Sep 08, 2016 4:06 pm

Re: ZFS Performance Degradation

Postby lundman » Wed Sep 14, 2016 8:40 pm

As you found out, use -m to import without log devices :)
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: ZFS Performance Degradation HELP!!! :)

Postby haer22 » Sun Sep 18, 2016 1:59 am

rabarar wrote:Okay, I may have done a no-no... when I added the log device it was a file on my SSD drive that I constructed with mkfile. And after rebooting my box, I can't import the pool. It looks like all of the physical disks are there, but it's not recognizing the log device file. Any ideas?

Yep, it is a no-no. At start-up time, the filesystem where your mkfile resides may or may not be mounted yet. If it isn't, ZFS will not import the pool without the log device.

You may use mkfiles for testing and playing around; otherwise, partitions (or whole disks) are the way to go.

I have a 48 GB machine and I have the arc_max set to 8 GB due to the memory handling problems.

Also, aside from a fast log device, having a large L2ARC cache is good for performance. On my 64 GB SSD I have two partitions: an 8 GB log and a 56 GB cache.
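
Attaching them looks something like this (the slice identifiers are placeholders for the SSD's two partitions):

sudo zpool add tank log /dev/disk4s1     # 8 GB partition as the log (SLOG)
sudo zpool add tank cache /dev/disk4s2   # 56 GB partition as the L2ARC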
haer22
 
Posts: 123
Joined: Sun Mar 23, 2014 2:13 am

