zfs performance tuning

Postby aligature » Sun Jun 23, 2019 10:29 am

Hi all. I was hoping to tap into some expertise here to help with performance tuning my ZFS array. Lately I've noticed that under heavy write load, my system goes unresponsive for intervals. Some tuning of the ARC limit seemed to help a bit with that. I've also noticed that the total write bandwidth seems quite bad, to a degree that surprised me. I've done some basic benchmarking and wanted to see whether this is typical or whether there's anything I can do to address it.

Here's some background info. The pool is two 3-disk raidz1 vdevs (2x raidz1), with all six disks spread across two external USB 3.0 enclosures. I have 16 GB of RAM, with kstat.zfs.darwin.tunable.zfs_arc_max=8589934592 (8 GiB). I used bonnie++ for benchmarking (bonnie++ -r 16384).
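For reproducibility, here's roughly how I applied the tunable and ran the benchmark (a sketch; /Volumes/tank stands in for my actual pool mount point, and I set the value live with sysctl -w, though it can presumably also go in /etc/zfs/zsysctl.conf to persist across reboots):

sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=8589934592
sysctl kstat.zfs.darwin.tunable.zfs_arc_max
bonnie++ -d /Volumes/tank -r 16384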

Here are three different test runs for comparison.
Internal SSD:
Sequential Input, Block - 462872 K/sec, 26 %CPU
Sequential Output, Block - 232967 K/sec, 35 %CPU

Single APFS/HFS+ drive in the same external enclosure as half of the ZFS array:
Sequential Input, Block - 130552 K/sec, 7 %CPU
Sequential Output, Block - 127390 K/sec, 16 %CPU

2x raidz1 in external enclosures:
Sequential Input, Block - 224166 K/sec, 25 %CPU
Sequential Output, Block - 5401 K/sec, 0 %CPU

My interpretation of the results: obviously the internal SSD is very fast. The external single disk (spinning) is OK, at 28% of the SSD's read speed and 55% of its write speed. Comparing the ZFS array against the single external drive is where it gets interesting. The array's read speed is 170% of the single drive's, I'm guessing because reads can be parallelized across multiple disks. The write speed is abysmal though, at 4% of the single external drive. Naively I would expect something like 50%, since the writes need to be done twice (the second time for parity storage), but 4% seems pathological.
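(Back-of-the-envelope on what the ceiling should be, assuming each spinning disk manages roughly the single-drive figure of ~127 MB/s: a 3-disk raidz1 vdev writes two data strips plus one parity strip per stripe, so sequential writes should ideally approach two disks' worth of bandwidth per vdev, i.e. about 2 vdevs x 2 x 127 MB/s ≈ 508 MB/s before any USB bus contention. Even if both enclosures shared one USB 3.0 link at a real-world ~400 MB/s, the ~5 MB/s I'm measuring is still two orders of magnitude short.)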

Any experience like this? Any advice to give?

Thanks for any help.
-Andy

Re: zfs performance tuning

Postby lundman » Mon Jun 24, 2019 4:36 pm

Which version of O3X did you test with?

Re: zfs performance tuning

Postby aligature » Sat Jun 29, 2019 4:21 pm

Sorry, I missed your reply earlier. My O3X version is "zfs.kext_version: 1.9.0-1".
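For anyone following along, that value comes from sysctl with the kext loaded:

sysctl zfs.kext_version
zfs.kext_version: 1.9.0-1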

Re: zfs performance tuning

Postby lundman » Mon Jul 01, 2019 4:01 pm

We do seem to have a write performance issue that we've been slowly peeking at. There is also a 1.9.1 with assembler code in it, which might increase speed a little.

Re: zfs performance tuning

Postby aligature » Tue Jul 02, 2019 2:16 pm

Thanks. I’ll give the new version a try and post benchmarks. Is there any helpful debugging or log output I could supply to help pinpoint the bottleneck?
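In the meantime, one thing I can capture easily is per-vdev throughput while the benchmark runs, something like this (tank standing in for my actual pool name):

zpool iostat -v tank 5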

Re: zfs performance tuning

Postby aligature » Tue Jul 09, 2019 2:19 pm

Here are the new results on 1.9.1-rc1. Basically unchanged, aside from some run-to-run variation.

Single APFS/HFS+ drive in the same external enclosure as half of the ZFS array:
Sequential Input, Block - 130552 K/sec, 7 %CPU
Sequential Output, Block - 127390 K/sec, 16 %CPU

2x raidz1 in external enclosures:
Sequential Input, Block - 224166 K/sec, 25 %CPU
Sequential Output, Block - 5401 K/sec, 0 %CPU

2x raidz1 in external enclosures (1.9.1-rc1 run1):
Sequential Input, Block - 196724 K/sec, 23 %CPU
Sequential Output, Block - 7871 K/sec, 1 %CPU

2x raidz1 in external enclosures (1.9.1-rc1 run2):
Sequential Input, Block - 210703 K/sec, 24 %CPU
Sequential Output, Block - 4197 K/sec, 0 %CPU

Re: zfs performance tuning

Postby lundman » Tue Jul 09, 2019 7:00 pm

Ah, shame. Out of curiosity, which implementations did it pick on your system?
(sysctl kstat | grep _impl)

Re: zfs performance tuning

Postby aligature » Thu Jul 11, 2019 3:25 am

Here you go:

sysctl kstat | grep _impl | grep zfs
kstat.zfs.darwin.tunable.zfs_write_implies_delete_child: 1
kstat.zfs.darwin.tunable.zfs_vdev_raidz_impl: [fastest] original scalar sse2 ssse3
kstat.zfs.darwin.tunable.icp_gcm_impl: cycle [fastest] generic pclmulqdq
kstat.zfs.darwin.tunable.icp_aes_impl: cycle [fastest] generic x86_64 aesni
kstat.zfs.darwin.tunable.zfs_fletcher_4_impl: [fastest] scalar superscalar superscalar4 sse2 ssse3
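If it would help narrow things down, I could also try pinning the raidz code to a specific implementation rather than [fastest], assuming the tunable is writable at runtime like its ZFS-on-Linux counterpart (untested on my end):

sudo sysctl -w kstat.zfs.darwin.tunable.zfs_vdev_raidz_impl=ssse3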

Re: zfs performance tuning

Postby lundman » Thu Jul 11, 2019 1:54 pm

With the default checksum you should at least get ssse3 for fletcher_4. The others only come into play if you use encryption or raidz.
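You can confirm which checksum a dataset is actually using with zfs get (tank standing in for your pool):

zfs get checksum tank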

Re: zfs performance tuning

Postby aligature » Sun Jul 14, 2019 10:23 am

@lundman, I'm not sure how to interpret what you said about my implementation results. Are they what you would expect? FYI, I have a 2012 Core i7 processor. I *am* using raidz in these benchmarks, so that would definitely matter.
